In our first episode of Creative Futures, IAA Benelux Executive Director Stephanie Manning sits down with Mark Polyak, Chief Product & Technology Officer at Mint.ai, to discuss the IAA AI Ethics Manifesto. They explore how AI can enhance – not replace – human creativity, and what ethical responsibility looks like in the modern advertising industry.
Here’s the full transcript.
Stephanie:
OK, so we’re going to start with the questions for Section 2, and that is the part of the manifesto that says: we believe that AI tools should empower human creativity, not replace it. Human oversight, editing and final approval should be integral to all AI-generated content. And so the first question is: how does your company use AI to enhance creativity rather than replace human input?
Mark:
That’s a great question, Stephanie. In a lot of ways, what we do is allow AI to provide humans with additional variations, or conceptual variations, either from the standpoint of, for example, optimizing a budget, or from the standpoint of thinking through which types of channels to use for pushing different creative assets. So, from that standpoint, AI gives you a lot more options compared to the options that could have been provided by maybe one expert. It gives you, if you will, an entire team of experts at your disposal very, very quickly.
Stephanie:
Great. How about number two: how do AI tools influence brainstorming and ideation processes in advertising?
Mark:
So, when it comes to brainstorming, typically you’re limited by the creativity of your team. You’re also limited by the experience of what you have done before. One of the things that AI in general allows you to do is go to a deep well of human knowledge. Because let’s be honest here: if you think about large language models, or even small language models, they are a compilation of lots and lots of human knowledge brought together. From that standpoint, think of it as the age-old concept of the wisdom of crowds. You’re literally going to the wisdom of the crowd and asking: what else is there? What else can I look at? Ultimately, AI gives you options from which you can cherry-pick, with your team or without your team. And you never have a situation where you’re relying on just the ideas or assumptions of one individual. So AI can give you a whole variation of, for example, creative assets to pick from. AI can give you variations on different types of channels to pick. Generative AI in particular can also offer interpretability, where you can see which results the AI picked for you, and then ask the same model, or a multitude of models, to examine those choices and interpret them, and give you additional ways to think about it that you may not necessarily have thought of. So from that standpoint, you as a human can spark the creative process, or you can get AI to give you various options to come up with additional ideas. But at the end of the day, the human is still very much in the loop and making the decisions.
Stephanie:
And that’s a great segue to the next question, which is: what role do you think human intuition and storytelling should play in AI-assisted advertising?
Mark:
I don’t think human intuition is going anywhere. At the end of the day, everything starts with a spark, and that spark of an idea will continue to come out of the human gut and human intuition. As much as AI is able to come up with ideas, as I mentioned before, a lot of that is based on past performance or past activity, whereas a human is, at the end of the day, what I would call an ancient computer that has been evolving for millions of years. And one of the great things about us, which computers still don’t have, is our ability to come up with something new. That new, irrational, if you will, activity comes from the gut. And up to now, we don’t really see AI having that kind of radical vision, the radical ability to diverge from the past body of knowledge and create something that is completely and ultimately new.
Stephanie:
So do you think that AI will lead to more standardized, formulaic content, or will it unlock new creative possibilities?
Mark:
I really think it’s going to unlock new possibilities. As much as some of what AI comes up with may feel standardized, remember that you still have the role of the human, who will continue to cherry-pick, try new variations one after another, and then fuse them together in new ways that are not inherent to the original model. Over time, with a human in the loop, the AI will create something novel.
Stephanie:
Exactly. So how do you keep the human in the loop in the advertising resource management process?
Mark:
You basically make sure there is a feedback loop between humans and the AI in every decision that needs to take place, whether it’s a decision to optimize a budget, a decision about which creative assets to use, or a decision about how to properly activate or plan a particular campaign. There are thousands of decision points where humans can continue to be in the loop, similar to a pilot being in the cockpit of a fighter plane. As much as a fighter plane is an amazing billion-dollar machine, it still takes a human to perform the ultimate mission.
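To make that feedback loop concrete, here is a minimal sketch in Python of a human approval gate at each decision point. Everything in it, from the `Decision` type to the proposal text, is a hypothetical illustration of the pattern Mark describes, not MINT’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One decision point in a campaign workflow (hypothetical)."""
    kind: str        # e.g. "budget_optimization", "creative_selection"
    proposal: str    # what the AI recommends
    rationale: str   # why it recommends it

def ai_propose(kind: str) -> Decision:
    # Placeholder: a real system would call an optimizer or an LLM here.
    return Decision(kind,
                    proposal="Shift 15% of budget to online video",
                    rationale="Video CPMs are trending down in the target segment")

def human_gate(decision: Decision) -> bool:
    """The pilot in the cockpit: nothing executes without explicit sign-off."""
    print(f"[{decision.kind}] Proposal: {decision.proposal}")
    print(f"Rationale: {decision.rationale}")
    return input("Approve? (y/n) ").strip().lower() == "y"

for kind in ("budget_optimization", "creative_selection", "activation_plan"):
    decision = ai_propose(kind)
    if human_gate(decision):
        print(f"Executing: {decision.proposal}\n")
    else:
        print("Rejected; the model is asked for a new proposal.\n")
```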
Stephanie:
So with all those thousands of decisions, from an observability standpoint, what steps can you take to monitor every step of AI in the workflow?
Mark:
So when it comes to AI, you have to really think of two types of AI. There is predictive AI, which is based on classical machine learning. With predictive AI you can request full observability, because at the end of the day it is a rules-based approach, where rules learned from past knowledge are applied to produce a forecast or explain new data. In that case you can ask for it. When it comes to the new generation of AI, which is generative AI or agent-based large language models, what you have to ask for there is interpretation rather than full observability. It is super hard to understand what exactly is taking place inside LLMs. Only recently we saw an attempt by a company called Anthropic to follow their large language models in real time, if you will, on a safari to see exactly how they hunt. But even then, it is very hard to fully understand what they’re doing. However, what you can do is use a large language model to provide an interpretation of every step. Either the model can do it itself, or alternatively you can ask other large language models, ones that are not involved in that particular model’s decision process, to provide alternative interpretations of what is going on. This way, the LLM is being judged, or being interpreted, by its own peers, and that interpretation goes to a human who, if they see some kind of adversarial or bad intent, can stop the model from taking the next step.
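As an illustration of this peer-interpretation idea, here is a minimal sketch. The model callables are stand-ins for whatever LLM SDK you use, and the prompts and the toy stop rule are assumptions, not a description of any specific product.

```python
from typing import Callable

# A model is just a function from prompt to answer; wrap any real SDK this way.
ModelFn = Callable[[str], str]

def interpret_step(step: str, actor: ModelFn, peers: list[ModelFn]) -> list[str]:
    """Collect the acting model's own explanation plus independent peer readings."""
    views = [actor(f"Explain why you are taking this step: {step}")]
    for peer in peers:
        views.append(peer(
            "You were not involved in this decision. Interpret this step and "
            f"reply FLAG if anything looks adversarial or unsafe: {step}"))
    return views

def allow_next_step(step: str, views: list[str]) -> bool:
    """Surface the interpretations to a human, who can stop the model."""
    print(f"Step under review: {step}")
    for i, view in enumerate(views):
        print(f"  Interpretation {i}: {view}")
    # Toy rule: halt automatically if any peer raised a flag; in practice
    # the human reviewer makes the final call.
    return not any("FLAG" in view for view in views)
```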
Stephanie:
That’s super interesting. Gen AI is still very much an unproven technology. So how do you ensure accuracy and reliability of the agents within advertising resource management?
Mark:
In essence, if you think about accuracy and how the models are working right now, some of the best models you see are based on a combination of predictive analytics, basically past knowledge applied to new situations, and generative AI, new knowledge generated from huge stores of human or machine-based knowledge. This is one of the ways that currently exists to improve accuracy, because past knowledge is really a good indication of what will happen in the future, and it helps avoid the hallucinations that some generative AI systems are known for. You should also ask for interpretation, as we discussed earlier, along each step, and in that interpretation process large language models can themselves call something out and provide some level of sanity check on accuracy. The newest phase is something we call multi-agent systems. In multi-agent systems, each agent has its own specialization, does its own task, and does that task really, really well. At the same time, you also have an agent supervisor, which works as a sanity check on the work of the rest of the agents. The only thing it does is critique the work and the quality of the output in order to continue the cycle of iteration, making sure there is no inconsistency between the goal that was initially set for all the agents and the accuracy of the final output. So, if you will, it is very similar to a team of doctors examining a new patient, each bringing their specialized knowledge and specialized equipment to the problem, and then a general practitioner who looks at everything they bring to the table and says: well, if you put all these pieces together, this is the potential diagnosis for this patient.
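Here is a minimal sketch of that supervisor pattern: specialists each produce their piece, and a supervisor agent critiques the combined output against the original goal, triggering another round if it sees an inconsistency. The function names and the iteration rule are hypothetical, a sketch of the idea rather than MINT’s code.

```python
from typing import Callable

AgentFn = Callable[[str], str]  # each specialist maps a brief to an output

def run_team(goal: str,
             specialists: dict[str, AgentFn],
             supervisor: Callable[[str, dict[str, str]], str],
             max_rounds: int = 3) -> dict[str, str]:
    """Run specialists, then let a supervisor critique their combined work."""
    outputs: dict[str, str] = {}
    brief = goal
    for _ in range(max_rounds):
        for name, agent in specialists.items():
            outputs[name] = agent(brief)          # each does its own task
        critique = supervisor(goal, outputs)      # check against the goal
        if critique == "OK":                      # no inconsistency found
            break
        # Fold the critique back into the brief and iterate.
        brief = f"{goal}\nAddress this supervisor critique: {critique}"
    return outputs
```

In Mark’s analogy, the specialists are the team of doctors and the supervisor is the general practitioner assembling their findings into a single diagnosis.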
Stephanie:
And we’re talking about agentic AI, but maybe we should define that for some of the listeners, because I think there’s a lot of new vocabulary coming out of the Gen AI environment.
Mark:
Sure, Stephanie. When you think about it, we are at kind of an interesting point in the evolution of AI. First, we started with large language models. Large language models are excellent at providing kind of passive information: you ask a question of gen AI and you get some level of an answer, if you will. It’s kind of a co-pilot, and all of us use it on a regular basis when we use tools such as Claude or ChatGPT, for example. Then one of the things you’ve seen is agents. Agents are large language models that can answer a particular question and act on it. For example, one of the agents that exists on ChatGPT is Operator, where you may ask for a scheduling task: say, give you a briefing on the news in the morning, schedule an appointment, or review your media plan.

Then, at the same time, you have the latest phase, a multi-agent system, like MINT, where every agent has its own specialization. For example, there may be an agent that focuses on thinking through a media plan. There may be an agent thinking about media optimization. There may be an agent that is a strategist and takes a holistic view of how to run the entire media campaign. Again, the idea is for all of these agents to work together towards a goal that has been set by a human, and at the same time to serve as a sanity check, where no agent will overwhelm the others with its specialized knowledge and specialized outputs. This is really important, because what we’re really seeing is the evolution of what we are calling an agentic workforce, or, if you will, a hybrid workforce, where humans work jointly with agents to produce results. In the past, when you thought about AI as a co-pilot, where you had one agent working with a person on one particular task, what would typically happen is that humans would try to fit the agent within their existing workflow. With the advent of multi-agent systems, the game has changed. At this point, humans are becoming more like managers of teams of agents, and the entire definition of work is starting to change, where humans hopefully start to do more elevated tasks, focused on idea generation, creativity and high-value work, rather than on highly manual, highly repetitive tasks that agents can do by themselves, obviously while being supervised by humans.
Stephanie:
I love that analogy. So if you look at it from a manager perspective, how much autonomy do the agents actually have? Do they make independent decisions, or can they build their own tools, and should they?
Mark:
My gosh, that’s a billion-dollar question. You literally see right now a whole wave of companies trying to build solutions that sit somewhere in between, or all the way at one extreme or the other. For example, you see a Chinese company called Manus that claims to have built a fully independent agentic workforce, where the agents do the entire workflow without any kind of input from humans. Then, obviously, you see my company, MINT, which is more focused on putting the human in the loop for all the critical decisions, trying to create a seamless playing field between humans and agents. Then you also see completely dependent agent systems, which rely on humans to make quite a few decisions before they get involved again. Some of it really depends on the sector. Obviously, you’re less likely to allow agents in healthcare to make any kind of critical decision for patients when human lives are involved. However, in Shanxi province in China, for example, a new law was recently passed where, if two doctors disagree on a patient’s diagnosis, they use DeepSeek, which is a Chinese LLM, as an arbiter: DeepSeek will provide a differential diagnosis, and that is the diagnosis the doctors have to go along with. So at the end of the day, it’s the philosophy and the culture of the particular society where LLMs are being built that is likely going to dictate which way and how these systems will be built. It’s a deeply philosophical question that is still being answered, and will probably continue being answered, in advertising and other sectors, for the next half a decade, if not longer.
Stephanie:
That’s actually pretty impressive, that they’re using DeepSeek as an arbiter between the two doctors. I hadn’t actually heard about that.
Mark:
They are. There’s another example which is very interesting: in the city of Lisbon, and also in a number of rural counties in California, hospitals are using OpenAI as the first line of triage for emergency calls. At the beginning, when I heard about this, I said to myself, my gosh, they’re using OpenAI to choose who will live and who will die. Until I learned, for example, that in Portugal, during busy times, it could take up to six minutes for someone to answer an emergency call. Obviously, in six minutes a person may bleed out. So from that standpoint, it’s a balance between the willingness to take risk and the opportunity to save lives. At the end of the day, some of these decisions will be made based on that kind of balancing equation.
Stephanie:
That is really fascinating, thank you so much. We’re going to pivot now and go to Section 4 of our manifesto, which is about implementing AI bias detection tools. We believe in implementing AI bias detection tools, ensuring that advertising is legal, decent, honest, truthful and created with fair, ethical and equitable objectives. So I’m going to ask you the first question, which is: which elements of the advertising process can actually be automated with AI?
Mark:
Well, first of all, I would say that I wholeheartedly support and believe what you said. At the end of the day, all tools need to guard, as much as possible, against biases both known and unknown. I think that is going to be critical for keeping ethical standards in advertising. Second of all, when it comes to automation in particular, we definitely see a situation where, for example, media optimization, media planning, financial reconciliation and other non-critical tasks can probably be automated in a big way. You also see some of the algorithms used for audience segmentation and personalized advertising that have the potential to be automated, and in fact already are to a degree. You need to implement automated systems for detecting bias, but you probably still, unfortunately, need a human in the loop to identify any new biases that may not be recognized from past patterns, maybe because they’re new, maybe because they’re so nuanced that automated bias detection systems cannot identify them. And as much as bias-detection technology has improved over the years, and as much as you can ask other models to do automatic bias detection, I will still sleep better knowing that a human is looking over it and checking to make sure nothing passes through. In some cases, you may even want a double-blind team of experts looking at the data, because, as we know, our society and many other societies have become polarized. Something which may seem like a bias to one group may look totally normal to another. So having, if you will, adversarial bias checkers is probably a good way to catch a fair amount of controversial content.
Stephanie:
That’s a great point. And my last question is: are there emerging technologies or methodologies that can help reduce AI bias in advertising, other than the double-blind checks you mentioned?
Mark:
Some of the best ones I’ve seen so far are where a large language model is put to the test by other large language models out there. Think of it as the output of OpenAI being checked live by Anthropic. So, if you will, you have systems of human knowledge that are being checked by other systems of human knowledge. That’s one way to do it. The other is the path I mentioned before, and the path that MINT follows, where you have multi-agent systems in which a number of agents specifically play the role of sanity checks, making sure that things are as accurate as possible and checking for bias in real time. So it’s a bias check, an ethics check and an accuracy check that is set up by default in any kind of multi-agent AI system.
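As a concrete sketch of one provider’s output being checked by another, here is what that cross-check could look like using the OpenAI and Anthropic Python SDKs. The prompts and the model names (current at the time of writing) are assumptions, and this is one possible shape of the pattern, not a reference implementation.

```python
from openai import OpenAI          # assumes OPENAI_API_KEY is set
from anthropic import Anthropic    # assumes ANTHROPIC_API_KEY is set

openai_client = OpenAI()
anthropic_client = Anthropic()

def generate_copy(brief: str) -> str:
    """One system of human knowledge produces the ad copy."""
    resp = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Write ad copy for: {brief}"}],
    )
    return resp.choices[0].message.content

def bias_check(copy: str) -> str:
    """Another, independent system checks it for bias in real time."""
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=500,
        messages=[{"role": "user", "content":
            "Review this ad copy for bias, stereotyping or misleading claims. "
            f"Reply PASS or list your concerns:\n\n{copy}"}],
    )
    return resp.content[0].text

copy = generate_copy("a savings app for first-time investors")
print(bias_check(copy))
```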
Stephanie:
I love that. Thank you so much. I think that was our last question, so I wanted to open it up: do you have any additional points you want to add to anything you’ve mentioned so far, or anything you want to close with?
Mark:
Yeah, I will say that when I think about AI, I often go back to the history of book printing. When printing came to the Ottoman Empire, the sultan issued a prohibition that lasted over 200 years. And the reason for it was twofold. First, calligraphers were afraid that typesetting would basically cause them to lose their jobs. Second, when the Ottoman sultan called his advisors to give him advice, they basically said: if you allow printing to occur, you may have unauthorized versions of the Quran which may not necessarily match one another, and you may have a lot of misinformation that will be difficult to prevent. As a result, there was an almost 200-year prohibition against using that process inside the Ottoman Empire. Now, why do I say this? This is not the first time that humanity has had to deal with a highly disruptive process of automation. And the choices we make can either allow us to be highly creative and have a positive impact on society as a whole, or can stifle innovation and have a deep, debilitating impact that takes centuries to fix. I’d rather we learn from the example of the Ottomans on how not to do things, and be thoughtful and judicious. I’d rather we ask the question: how do we properly introduce and take advantage of a new technology that is in our realm?
Stephanie:
That was really thoughtful and thank you so much for the perspective.