The numbers speak for themselves. Sixty-one percent of marketers have used AI in their marketing activities, and the global market revenues of AI in marketing are expected to grow from $27 billion in 2023 to $107 billion in 2028. AI is set to become one of the most transformational technologies the marketing industry has seen. However, the use of AI in marketing is not without challenges. Foremost among these is the issue of AI and privacy. Unless this challenge can be solved, much of the hype surrounding AI may amount to just a lot of hot air.
Around the world, strict regulations have emerged in recent years to protect the privacy of web users, with the GDPR arguably the most important and influential of these. However, as experts from law firm Greenspoon Marder point out: “As [AI] continues to expand…it creates a labyrinth of privacy concerns, thereby challenging traditional norms of personal data protection.”
Most prominent AI models in use today leverage machine learning. The models’ algorithms learn from huge volumes of structured and unstructured data to fulfil their use case. The more data the better. When looking at large language models (LLMs) such as ChatGPT, the data in question is essentially the whole of the internet. In the chatbot’s own words, ChatGPT uses “a mixture of licensed data, data created by human trainers, publicly available data, and other data sources that comply with their terms of service. This text data can include websites, books, articles, and other forms of written text to help the model understand and generate human-like text across various topics and styles.”
This is where the AI privacy challenge arises. With so much data in play, how can model developers be sure that this data is compliant with GDPR and other such laws? The sheer scale of the data being used makes it highly likely that the personal data of web users is included. There is no way of knowing for sure, but one thing is clear: if personal data is consumed by an AI model, it is unlikely that consent was given for its use. There are also no easy ways for people to access the data and check, which is in itself problematic from a privacy perspective.
Regulators are already sharpening their focus on AI applications. This April, for example, privacy watchdogs in the EU established a task force to look at issues around ChatGPT. This followed the Italian regulator temporarily taking ChatGPT offline, deeming it to be in breach of GDPR. Privacy issues raised by the Italian regulator included the possible misuse of personal data in training the algorithm and the lack of a clear legal basis for the collection of personal data used. The regulator later allowed ChatGPT to return to Italy after it made some changes.
However, the legal difficulties of OpenAI (the company behind ChatGPT) and other AI model owners are only just getting started. Regulators worldwide are investigating how models collect and process personal data, along with the potential misuse of AI in misinformation campaigns. At least three countries — Germany, France, and Spain — have launched new investigations into ChatGPT.
Despite the huge opportunities to make marketing more efficient and scalable using AI tools, brands and their agencies must tread carefully. If they use tools which later turn out to breach privacy regulations, brands could suffer significant reputational harm and may also be liable for fines for regulatory non-compliance. This is something many companies are aware of, with 65% of professionals in one survey saying that privacy and security concerns are the number one risk of using generative AI.
The solution is clear. Marketers should seek out tools and services that place a premium on privacy and compliance with privacy laws. Key to this is finding models that have been trained on consented data only. AI service providers should also offer complete transparency and data lineage to give peace of mind. They also need to enable data owners to access their data and withdraw consent at any time.
AI promises a great deal. It will help marketers spin up creative faster, select the right messages, improve real-time engagement and personalisation, and much more. However, these benefits can only be unlocked if privacy is placed front and centre. The digital world is trending towards a more ethical and consumer-focused operating model. AI will need to fit in with this approach.