Virginie Liebermann leads the Media, Data, Technologies & IP practice as Avocat à la Cour at Molitor, offering advisory and litigation support to clients across diverse sectors such as retail, automotive, health, and finance. With a focus on copyright, information technology, privacy law, trademarks, and regulatory clearance for media and advertising projects, she brings extensive expertise in distribution networks and combating unfair competition practices.
In this interview with IAA Benelux, Liebermann delves into the complex interconnection between AI innovation and regulatory frameworks within the advertising industry.
IAA Benelux: How have you seen the recent evolution of AI change the advertising industry?
Liebermann: Generative AI has definitely become the number one technology on everyone’s minds and will keep growing. As far as the advertising industry is concerned, I believe that the use of generative AI will be a game-changer. A recent study found that 141 well-known brands used AI-generated content in their online ad campaigns. This technology has already proven to be efficient and quick, and it allows for greater creativity and scalability. You can simply generate more content and more highly targeted campaigns at lower cost.
IAA Benelux: What do you see as the main risks of using AI in the advertising industry, and do you think the EU regulatory regime sufficiently addresses these risks?
Liebermann: From my experience, the challenges we face with AI are mainly twofold.
A misuse of AI can:
- drive the spread of misinformation, low-quality content, or even fake news
- lead to hallucinations that produce incorrect results, especially where enormous volumes of data are processed during the AI training phase.
I believe that the challenge lies in learning how to use it in a way that ensures quality and respects ethics, copyright, and data protection. In my opinion, this will still require human oversight. Generative AI should serve as an “additional tool”, not as a new, singular way of thinking. It will also require new tasks and job positions to train generative AI solutions correctly from the beginning, along with proper human oversight to mitigate the risks.
The current legal framework is designed to ensure that AI is trustworthy, i.e. to:
- control the risks, at least those currently known
- force greater transparency and
- protect personal data and data subjects, users, and the most vulnerable, such as minors.
It is, however, still too early to know whether our current legal framework will be sufficient.
IAA Benelux: What do you see as the main opportunities for using AI in the advertising industry? Do you think the EU regulatory regime fosters or hinders such innovation?
Liebermann: Generative AI solutions used in the advertising industry will enable highly personalized and targeted advertising experiences.
However, we are witnessing a superposition of regulations where compliance may in practice become very complex, potentially hindering the widespread use of AI.
IAA Benelux: With new regulations such as the DSA and the AI Act, market players will need to be more transparent on the use of AI as a whole, including the use of AI for advertising. For example, under the DSA, platforms need to provide individuals with meaningful information on the parameters of the algorithms used for advertising. How do you think this requirement will work out in practice, given the complexity of AI and the use of algorithms?
Liebermann: AI transparency is challenging, particularly in explaining complex AI functions and keeping those explanations up to date. Solutions like explainable AI, standardized disclosures, and using AI to understand AI decisions can help. Overcoming these complexities may also involve educating people about AI, using more comprehensible AI systems, conducting regular checks, and focusing on ethical rules to avoid, for example, dark patterns, i.e. practices that manipulate users into making decisions they may not have otherwise made.
IAA Benelux: If you were a lawmaker, is there anything you would add or change to existing and upcoming laws and regulations (such as copyright laws, the GDPR, DSA or the AI Act) to foster responsible use of AI by companies active in the advertising industry?
Liebermann: This is a very challenging question! If I were a lawmaker, I’d dream of designing a single, consolidated legal framework that would anticipate and encompass the evolution of technology.
Without wishing to be pessimistic, I am afraid this is not possible.
I believe, however, that it would be important and useful to have regulations that consider the realities of different sectors, industries, and company sizes, so that the applicable legal framework can be adapted and a common, acceptable minimum level of compliance ensured.
IAA Benelux: Do you feel that self-regulation on AI could be an effective tool for the advertising industry?
Liebermann: Self-regulation of AI in the advertising industry has the potential to be effective due to its agility and industry-specific insights. It can help achieve a certain level of transparency and, as I said previously, protect minors, because a robust self-regulatory approach could encompass sector-specific standards, ethical guidelines, routine audits, and frameworks for transparency.
The main concern regarding self-regulation is ensuring an equivalent level of expectations and controls, particularly at the national level.