A Conversation on AI with Hoogenraad & Haak’s Daniël Haije

Join us for an in-depth discussion with Daniël Haije, a leading expert in advertising and marketing law based in Amsterdam. With a diverse clientele spanning various industries, his expertise covers everything from cross-border clearance work to intellectual property rights enforcement.

As the Chair of the Netherlands Advertising Law Association and Global Vice-Chairman of the Global Advertising Lawyers Alliance (GALA), Haije is at the forefront of industry leadership.

In this interview with IAA Benelux, Haije explains the intersection of AI and advertising, discussing opportunities, challenges, and legal considerations.


IAA Benelux: How have you seen the recent evolution of AI change the advertising industry?

Haije: The ad industry hasn’t fundamentally changed since generative AI came into play. But what I’ve seen is a lot of AI anxiety. Some people are saying, “We should adopt it now,” and others are saying, “No, let’s wait; let’s see how it develops.” I’ve seen a lot of push and pull between legal and business affairs, creative teams, and ad agencies.

What I’ve also seen is that a lot of ad agency clients are asking questions like, “Why does this edit have to be so expensive?”, “Why does this voice have to be so expensive?”, or “Why does this sketching or this moving storyboard have to be so expensive? You can do this with AI, right?” So there’s pressure on prices.

And the last thing I noticed is — and maybe it’s a little bit too early to say — but I think there’s a divide between tech-savvy agencies, who are really trying to promote the use of AI and market themselves as being tech-savvy, and other agencies that are more traditional and focus on creativity. They are not denouncing AI, but they aren’t putting that much focus on it. I think that’s an interesting divide.

IAA Benelux: What do you see as the main risks of using AI in the advertising industry? And do you think the EU regulatory regime (think about GDPR, copyright laws, DSA, AI Act) sufficiently addresses these risks?

Haije: The main risks from my perspective are infringement of third-party rights: copyrights and trademarks, but also personality rights, such as rights of publicity. In addition, in the Dutch market an imitation prompt like, “Give me something in the style of artist X,” could also be unlawful. That’s not an infringement of IP for that matter, but it is an unlawful act. So that’s the first risk: infringement of third-party rights.

And the second one would be ownership issues, because agency clients demand transfer of IP. In general, they want the IP in what the agency has created for them. What you see is that when an agency uses generative AI output for its client deliverables, it can’t transfer the IP, because the IP status of generative AI output is still unclear. That can be a problem in client-agency agreements, because an agency can’t transfer something that it doesn’t have. There have to be carve-outs for that, which isn’t handled yet in most Master Service Agreements (MSAs) or client services agreements. So that’s ownership.

And the third one would be confidentiality issues, because uploading proprietary or confidential information into an AI system — I wouldn’t recommend it at this point. Agency people could upload a client brief into AI systems, which wouldn’t be very sensible. There are also multiple issues with generative AI tools’ T&Cs: what kind of usage rights do you get when you use generative AI for your client deliverables, and is the projected use of those advertising materials covered by the usage rights you get from the Gen AI tool?

And the final thing I would say is reliability of information, but that’s common sense. You can’t trust generative AI outputs — at least not yet — and you should be very mindful of putting too much confidence in generative AI output.

In terms of how the European regulatory framework handles this, I would say the biggest issue right now is copyright. It’s not yet 100% clear how the current regulatory regime should be applied to AI, but I expect this to be clarified through EU case law in the coming years.

IAA Benelux: What do you see as the main opportunities for using AI in the advertising industry and what about fostering or hindering such innovation with EU laws?

Haije: Obviously there are tons of possibilities with AI for the ad industry. Just think about assistance in concepting, coming up with ideas. AI can serve as a great inspiration tool, and it could also function as a time saver for sketching, copywriting, product design, storyboards etc., and presenting your concept to the client. As long as AI is applied with caution, then the regulatory regime I would say neither fosters nor hinders such use of AI right now.

IAA Benelux: With new regulations such as DSA and the EU AI Act, organizations will need to be more transparent on the use of AI as a whole and also in the use of AI for advertising. For example, under the DSA, platforms need to provide meaningful information on the parameters or the algorithms used for advertising to individuals. How do you think this requirement or other requirements in AI will work out in practice given the complexity of AI and the use of algorithms?

Haije: Well, asking the question is like partly answering it because, in general, I believe that advertisers should just not rely on algorithms that they don’t understand. That’s like a general principle. If you’re not able to explain the algorithm that you’re using, then, well, you shouldn’t use it. It reminds me of the discussions that were surrounding GDPR and programmatic advertising. That system was and is so complex that people find it hard to explain. I would say that’s an inherent risk.

So how will these new transparency requirements work out in practice? I think some advertisers will inevitably not be able to comply, because the way their AI works is just a black box. They don’t understand it, so they will not be able to explain it. I think a lot will depend on how consumer authorities and regulators enforce these new transparency requirements. If they adopt a harsh approach and really enforce them strictly, well, then we’re going to move towards practices where you don’t use an algorithm unless you fully understand it, so that you’re able to offer the transparency that’s required by law.

IAA Benelux: Question five, my favorite question, if you were a lawmaker, is there anything you would add or change to existing and upcoming laws and regulations to foster responsible use of AI by companies active in the advertising industry?

Haije: Well, that’s an interesting question. And I would say right now it’s a bit too early to say, actually. In general, I think laws and regulations should adapt to society and what’s happening in society, not the other way around. But there is an exception here with AI, because some applications of AI, like social scoring and predictive policing, are a big risk.

I think it’s good that the EU has put up a few barriers for the riskiest forms of AI with the AI Act. But for more granular rules and regulations, I think lawmakers should keep a close watch on how the use of AI develops in society and then adapt, not the other way around.

IAA Benelux: Do you feel that self-regulation on AI could be an effective tool for the advertising industry?

Haije: I think self-regulation can be a good idea for the ad industry, because it can be a great way to prevent overly far-reaching statutory rules. In the Netherlands, self-regulation is a big thing. It’s huge. We have the Dutch Advertising Code, and it has a lot of authority. Stakeholders from the industry come up with rules and generally comply with those rules, so it can be a real tool.

But in what field could you use self-regulation? I can give a little bit of thought to that. I think there’s an ethical component to using AI in advertising. As an example, take AI-generated voices: you can generate an AI voice by combining a lot of real, human voices, and then you get a merged AI voice that you can use for all different kinds of applications. That’s great. And some of the voice agencies that are coming up with these voices choose to pay every human voice that has contributed to that AI voice. Others say, OK, the original voices are not recognizable in the AI output voice, so why would we have to pay those voices? I think that’s an ethical issue, and I would really like the industry to say, “OK, we have to pay everybody fairly and pay the voices fairly.”

I think the industry could also consider self-regulation to give substance to the transparency requirements. The rules regarding those transparency requirements are not very specific, and you could steer them in the right direction with some self-regulation.

IAA Benelux: Do you have some practical tips for the use of AI applications in the advertising industry?

Haije: It’s a complicated subject, the use of AI in ad agencies, and there’s a lot of push and pull between creative and business affairs about using AI. But in general, I think it’s a good idea to draw up and use guidelines for the use of AI. What you could do in those guidelines is set some rules about the use of AI output in client-facing materials — for instance for concepting, or presenting an idea to your client — and then manage expectations. You should be transparent about using AI: the client should know that you’re actually using AI and that AI is used for concept purposes only, so that the client doesn’t fall in love with the AI-generated output and then say, OK, we want to use this. You want to prevent that. So be transparent towards your client.

The second thing those guidelines could address is the use of AI output in final materials: no use of AI-generated images, for instance, in public-facing materials unless you are really sure that the usage rights associated with that image cover your use. And at this time, it’s very uncertain that you can be sure of that.

The third thing is setting up some standards about importing proprietary or confidential information into AI tools. Basically: don’t do it. Don’t upload a client brief and say, OK, I’m wondering what the AI tool is going to make of this.

Those are a few subjects that you could address in guidelines.

So that’s the first tip: make guidelines. The second tip is to revisit your MSAs with your clients, because your client-agency agreements don’t address AI. For instance, in an IP transfer clause you should make a carve-out for AI-generated materials. Dig those agreements up from your drawer and see what’s in them on IP specifically.

And my third tip would be: have your legal department, your business affairs person, or your outside counsel check the T&Cs of AI tools that you’re using for the first time, because you really should know what those AI tools’ T&Cs say about usage rights and, for instance, indemnities.

Have a list of the AI tools that your agency is allowed to use, because you shouldn’t use AI tools unless their T&Cs have been checked thoroughly by legal or business affairs.