Suggested Citation

Unfried, M. (2025). Overcoming Aversion to AI-Based Recommendation Systems in Innovation. NIM INSIGHTS Research Magazine, Vol. 8, AI.Meets.Consumer.

Year: 2025
Authors: Dr. Matthias Unfried
Publication Title: Overcoming Aversion to AI-Based Recommendation Systems in Innovation
Publication: NIM INSIGHTS Research Magazine

Overcoming Aversion to AI-Based Recommendation Systems in Innovation

Recommendation systems based on AI can accelerate innovation and improve decision-making. Yet reluctance to rely on them remains widespread. Find out how businesses can overcome these barriers, foster acceptance, and build efficient hybrid evaluation systems that combine human and machine intelligence.

Artificial intelligence (AI) is no longer a distant prospect in innovation management—it is already reshaping how firms generate, evaluate, and select ideas. While most organizations are still experimenting with AI for idea generation (e.g., drafting product concepts or marketing messages), its role in idea evaluation—deciding which ideas are worth investing in—remains less understood. Yet evaluation is where resources are committed, risks are taken, and the future of an innovation portfolio is decided. Traditionally, companies have relied on experts to perform this screening. Experts bring valuable domain knowledge but are expensive, limited in number, and prone to biases.

To overcome these bottlenecks, many firms have turned to crowdvoting, engaging online communities to evaluate ideas at scale. This approach taps into the “wisdom of the crowd” and has produced results comparable to expert judgment. However, crowdvoting has its own weaknesses: Participants vary in expertise, attention spans are limited, and herding or strategic voting can distort outcomes. One famous example is Lay's “Do Us a Flavor” contest, where the brand invited consumers to propose and vote on new potato chip flavors. One of the crowd’s top choices—cappuccino-flavored chips—failed in the market. Such outcomes highlight the risk of relying solely on crowds: Popularity does not equal viability. An AI-enabled evaluation system could have flagged the flavor as unlikely to succeed, helping the company avoid a costly misstep while still engaging the crowd creatively.

This is where AI-enabled evaluation systems come in. By applying machine learning to historical data, these tools can predict which ideas are most likely to succeed, often outperforming human evaluators. They promise efficiency, scalability, and consistency in screening ideas. Yet relying solely on algorithms can also backfire. For example, an AI recruiting tool that was trained on historical hiring data inadvertently learned to favor male candidates for technical roles and was abandoned after producing biased results. This illustrates the opposite pitfall: Algorithms can reproduce hidden human biases when left unchecked. The key lesson from both cases is that decisions might be most effective when humans and machines complement each other.
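
To make this concrete, the sketch below shows in Python (with scikit-learn) how such an evaluation model might work in principle: a classifier learns from historical idea data which submissions succeeded and then scores new ones. The features (novelty, feasibility, market size), the data, and the model choice are illustrative assumptions for exposition; the article does not disclose the system used in the underlying study.

    # Minimal sketch of an AI-enabled idea-evaluation model.
    # Features, data, and model are illustrative assumptions,
    # not the system examined in the study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Hypothetical history: each row is a past idea scored on
    # [novelty, feasibility, market_size], each in [0, 1].
    X = rng.uniform(0.0, 1.0, size=(500, 3))
    # Toy ground truth: success depends mostly on feasibility and
    # market size; novelty alone is not enough (cf. cappuccino chips).
    signal = 0.2 * X[:, 0] + 0.4 * X[:, 1] + 0.4 * X[:, 2]
    y = (signal + rng.normal(0.0, 0.1, 500) > 0.55).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

    # Screen a new idea: very novel, but hard to build and niche.
    new_idea = np.array([[0.95, 0.20, 0.15]])
    print(f"Predicted success probability: "
          f"{model.predict_proba(new_idea)[0, 1]:.2f}")

The linear model is chosen only for brevity; the point is the workflow: learn from past outcomes first, then screen new ideas before resources are committed.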

But there is a catch: People do not automatically trust AI. Research in psychology and management shows a pattern of algorithm aversion, where decision-makers prefer their own judgment—even when the algorithm performs better. In innovation contexts, this aversion could mean that internal innovation managers or external crowdvoters ignore helpful AI tools, leaving organizations unable to fully benefit from them. In this study, we explore when and how human evaluators in the innovation funnel actually adopt AI-enabled evaluation systems in practice. Understanding this adoption behavior is critical for designing effective hybrid evaluation systems that combine human and machine intelligence.
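
As a purely illustrative sketch of what combining human and machine intelligence could mean mechanically, the snippet below (a hypothetical weighting scheme, not the design evaluated in the study) blends the crowd's average vote with the model's predicted success probability:

    # Illustrative blend of crowd and AI judgments for one idea.
    # The weighting scheme is an assumption for exposition only.
    def hybrid_score(crowd_votes: list[float],
                     ai_probability: float,
                     ai_weight: float = 0.5) -> float:
        """Blend the mean crowd vote (on a 0-1 scale) with the
        AI's predicted success probability."""
        crowd_score = sum(crowd_votes) / len(crowd_votes)
        return (1 - ai_weight) * crowd_score + ai_weight * ai_probability

    # A popular but risky idea: the crowd loves it, the model does not.
    votes = [0.9, 0.8, 1.0, 0.85, 0.9]  # hypothetical crowd ratings
    print(hybrid_score(votes, ai_probability=0.15))  # ~0.52

In the Lay's example above, a blend like this would have pulled an enthusiastically voted but implausible flavor back toward the middle of the ranking instead of straight to the top.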

Managerial Implications

For firms, the study offers some lessons on how to successfully introduce AI-enabled decision support systems:

  • Expect initial resistance: Adoption will not be immediate. Crowdvoters (and employees) may distrust AI until they gain experience with it.
  • Leverage incentives: Monetary bonuses can motivate initial use, but social incentives—such as highlighting other users’ successes—are more powerful and sustainable.
  • Prioritize transparency: Even simple explanations of how the AI works reduce mistrust.
  • Avoid usage costs: Fees, however small, reinforce aversion.
  • Be patient: Adoption grows naturally as users gain familiarity.

AI can enhance the efficiency and quality of idea evaluation, but adoption hinges on how it is introduced. Companies should carefully design incentive schemes, communicate transparently, and give crowdvoters time to build trust. With the right approach, AI-augmented crowdvoting can become a powerful tool for firms seeking to harness collective intelligence while ensuring robust and reliable decision-making.

KEY INSIGHTS

  • Initial hesitancy: Individuals seem to be reluctant to use AI at first, reflecting mistrust or unfamiliarity.
  • Incentives shape behavior: Bonus payments, peer success stories, and basic information about the AI system all encourage adoption.
  • Control is not decisive: Unlike in other contexts, giving voters partial control over AI recommendations does not boost adoption.
  • Learning curve: Over time, adoption rises naturally as voters experience the system’s value. Peer influence remains the most effective long-term driver, while fees consistently suppress use.

The findings are particularly relevant as firms increasingly combine human crowds with AI augmentation in innovation. By understanding adoption dynamics, companies can create more effective hybrid systems, balancing algorithmic support with human judgment.


