Martin Ulbrich is a Policy Officer at the European Commission, in DG Connect's Unit A2, “Artificial Intelligence Policy Development and Coordination,” the unit in charge of the AI Act.
An economist by training, Ulbrich has dealt with digital issues at the Commission from different angles for more than twenty years. He joined the AI policy team in 2018, contributing to the drafting of the White Paper on AI and the impact assessment of the proposed AI Regulation. Before that, he focused on the economics of networks, geoblocking, and the impact of digitization on labor markets; earlier still, he worked in the Commission's industrial policy and transport departments as well as in the Joint Research Centre.
In our comprehensive interview, Ulbrich highlights the nuanced approach of the AI Act, emphasizing the potential for voluntary Codes of Conduct to supplement legal obligations. He also underscores the necessity of transparency in AI algorithms, especially in targeted advertising, and the importance of adhering to the fundamental ethical principles underlying the AI Act.
IAA Benelux: What do you see as the main risks of AI used in the advertising industry, and how would you call upon the advertising industry to mitigate those risks?
Ulbrich: The risk analysis of the AI Act is not based on a sectoral approach; it focuses on specific AI applications. Advertising would be concerned in so far as it uses such applications. In theory, the prohibition on manipulative and subliminal techniques could apply, but only if such techniques were actually used. More relevant is the high-risk case of recruitment, including the placing of targeted job advertisements. However, as advertising is a highly creative, think-outside-the-box industry, it is very difficult to forecast where such risks might occur.
IAA Benelux: What do you see as the main opportunities for using AI in the advertising industry, and what concrete actions has the European Commission taken to keep fostering innovation within the advertising industry?
Ulbrich: AI allows a market understanding that is both based on more and better data and more sophisticated in its analysis. It also significantly speeds up the feedback loop. As a result, advertising has the opportunity to base its offerings on a deeper and more up-to-date picture of consumer reactions, which should in principle increase its value to companies. In this context, on 24 January 2024 the Commission issued a “Communication on boosting startups and innovation in trustworthy artificial intelligence” that focuses on generative AI, which is of particular relevance for advertising and other creative industries.
IAA Benelux: With new regulations such as the DSA and the AI Act, market players need to be more transparent about their use of AI as a whole, including its use for advertising. For example, under the DSA, platforms – which are key players in advertising – need to provide individuals with meaningful information on the parameters used for advertising. How would you like to see transparency work out in practice, given the complexity of AI and the use of algorithms (e.g., black boxes with a potential lack of accountability)?
Ulbrich: Article 26(1) of the DSA requires that providers of platforms present certain information about each specific advertisement that they show to a user. This includes meaningful information on the main parameters used to determine which user will see the particular advertisement. Such information should truly reflect what is actually most important in selecting the user who will be shown the advertisement. It is not enough to just refer to the fact that AI systems or other algorithmic systems are used for the targeting.
IAA Benelux: Do you feel that, alongside the current rules and regulations applicable to the use of AI in the advertising industry (e.g., GDPR, DSA, AI Act), self-regulation on AI could be an effective tool for the advertising industry? If so, why? If not, why not?
Ulbrich: The AI Act expressly foresees the possibility of adopting Codes of Conduct for applications that do not fall under the legal obligations. These are based on the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved, and taking into account the available technical solutions. The Act also encourages providers and, as appropriate, deployers of all AI systems to apply additional requirements on a voluntary basis. Taking into account the specifics of advertising, a combination of both elements – voluntarily applying the legal obligations as appropriate, plus additional tailor-made requirements – would undoubtedly constitute an effective tool.
IAA Benelux: Is there anything else you would like to see the advertising industry take account of in terms of using AI responsibly?
Ulbrich: The potential and the variability of AI is so great that it makes little sense, beyond the narrowly focused AI regulation, to try to draw up a concrete list of “dos” and “don’ts”, especially in a sector as creative and as dynamic as advertising. Instead, it is much more useful to keep in mind the fundamental principles that have informed the AI Act – which come from the AI High-Level Expert Group’s ethical guidelines – and apply careful judgment as to what is and what isn’t ethical.