Transparency Alone Does Not Create Trust: How Consumers Really Perceive AI-Generated Advertising - Dr. Fabian Buder in Conversation (2025). NIM Insights Research Magazin Vol. 8 - AI.Meets.Consumer.
Transparency Alone Does Not Create Trust
Dr. Buder, AI is rapidly transforming marketing. How far has the technology actually made its way into practice?
FABIAN BUDER: The pace of development is breathtaking. In our study at the Nuremberg Institute for Market Decisions (NIM), 100 percent of the marketing professionals surveyed stated that they are already using generative AI—many of them intensively. The promises are clear: efficiency, speed, and cost advantages. However, the enthusiasm on the corporate side contrasts sharply with the much more cautious attitude of consumers.
What exactly causes this skepticism among consumers?
It’s a mix of unfamiliarity, mistrust, and a perceived loss of control. Only about one-fifth of people understand how AI personalizes content or makes decisions. Even fewer trust companies or regulators to use AI in consumers’ best interests. The result is a general background noise of skepticism that becomes particularly visible in advertising. Consumers demand transparency, but for advertisers, transparency alone helps only to a limited extent.
At NIM, you and your colleagues conducted several experiments to examine how people respond to AI-generated advertising. What is the key finding?
In short: When people know that an advertisement was created by AI, they like it less. In our experiments, for example, we showed identical ads—once labeled “created by a photographer” and once “AI-generated.” The AI version was rated as less credible, less emotional, and less memorable. Even the willingness to click decreased measurably.
This shows that there is a gap between technical perfection and emotional impact. Transparency—that is, labeling content as AI-generated—does not automatically make advertising more trustworthy; in fact, it can have the opposite effect.
That sounds like a real dilemma—the EU mandates transparency, but it reduces advertising effectiveness. How can this be solved?
I think you can indeed call this a “transparency dilemma.” Open labeling is both regulatorily necessary and ethically right. But it exposes a fundamental problem: the lack of trust in AI. Transparency without a basis of trust solves nothing; it merely reveals that trust is missing. Companies therefore need to work on two fronts: first, fulfill compliance requirements; second, actively build trust through education, quality communication, and an appropriate context of use.
What do you mean by “appropriate context”?
Our research shows, for example, that AI-generated advertising is better accepted when it fits innovative, tech-oriented products. If a smart home brand or an electric car advertises with AI support, people seem to find it more consistent. There also seems to be a difference depending on whether an ad foregrounds the product itself or the way it is used, which is often portrayed emotionally; in the latter case, the AI label tends to act like a warning signal. Therefore, a key recommendation is: Use AI where it makes sense in terms of content, not just where it happens to be available.
But won’t this simply change over time as we get used to AI-generated content?
That’s a valid and frequently asked question—and yes, habituation will certainly play a role. Our data suggest that people with more experience and greater trust in technology are also more open to AI-generated advertising. But mere habituation is not enough. Trust does not automatically arise through repetition but through positive experiences and transparent communication. When people see that AI-driven communication is relevant, honest, and high-quality, their perception shifts. In short: Trust through habituation is not a natural law, but it can be learned—if brands actively work on it.
How can trust be built in concrete terms?
In my view, two things are crucial for building trust. First, consumers should be able to understand why AI is being used and what benefits they gain from it. Second, I would recommend emphasizing in communication that AI does not operate autonomously but under human responsibility. Content is still heavily influenced by people—“prompt to publish” doesn’t really work yet.
I would therefore recommend embedding labeling within a trust narrative, connecting innovation with human authenticity. In other words, make it clear that content is “AI-supported and designed by our creative team.” Companies should not hide behind AI but demonstrate that they use it responsibly.
Your conclusion for marketing practice?
AI is here to stay—but it only works with trust. Anyone who believes that efficiency alone is enough underestimates the emotional dimension of brand communication. The winners will be those who combine technological innovation with human credibility.
This interview was conducted by NIM.
Authors
- Dr. Fabian Buder, Head of Future & Trends Research, NIM, fabian.buder@nim.org