Smarter Idea Selection: Turning Idea Overload into Innovation Advantage
Suggested Citation

Bell, J. J., Pescher, C., Tellis, G. J., & Füller, J. (2026). Smarter Idea Selection: Turning Idea Overload into Innovation Advantage. NIM Marketing Intelligence Review, 18(1), 42–47. https://doi.org/10.2478/nimmir-2026-0007


NIM Marketing Intelligence Review – AI in Market Research


Idea Screening, Creativity, Crowdsourcing, AI-Support

Authors

  • J. Jason Bell, Assistant Professor of Marketing, Penn State University, Smeal College of Business
  • Christian Pescher, Assistant Professor of Marketing, Universidad de los Andes, Chile
  • Gerard J. Tellis, Neely Chaired Professor of American Enterprise, Marshall School of Business, University of Southern California, Los Angeles
  • Johann Füller, CEO Hyve AG, Professor of Innovation and Entrepreneurship, University of Innsbruck, Austria, Johann.Fueller@uibk.ac.at

Crowdsourcing works – sometimes too well. With today’s AI tools and digital platforms, companies can gather thousands of ideas in a matter of days. But the real bottleneck isn’t generating ideas – it’s evaluating them. When hundreds or thousands of submissions flood the funnel, even seasoned experts struggle to keep up. Fatigue sets in, attention wanes, and decisions become inconsistent. The question isn’t how to get more ideas; it’s how to identify the right ones quickly and reliably.
Our research provides a practical answer: pair a simple AI screen, guided by the Idea Screening Efficiency (ISE) curve, with managerial expertise. This combination helps organizations handle idea overload systematically, cutting review time while keeping control over how many good ideas managers are willing to miss.

The typical tradeoff in idea screening

At its core, idea screening is a classification problem with asymmetrical risks. Rejecting weak ideas saves time and money; rejecting a strong idea risks losing the next big thing. The more ideas you reject, the higher the risk of also rejecting strong and useful ideas. In traditional idea contests, managers tend to accept screening out 25% of all ideas without sacrificing more than 15% of good ideas or screening out 50% of all ideas without sacrificing more than 30% of good ideas. These percentages can serve as benchmarks for any AI-based screening solution.
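To make the tradeoff concrete, here is a minimal Python sketch that computes the share of good ideas lost at a given screening cut, the quantity behind the 25%/15% and 50%/30% benchmarks. The scores and quality labels are hypothetical, not from the underlying study:

```python
def good_idea_loss(scores, is_good, cut_fraction):
    """Share of good ideas rejected when the cut_fraction of ideas
    with the highest screening scores is screened out (lower score = keep)."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])  # best-to-worst by score
    rejected = set(order[int(n * (1 - cut_fraction)):])
    good = [i for i in range(n) if is_good[i]]
    return sum(i in rejected for i in good) / len(good)

# Toy data: six ideas, three of them genuinely good
scores = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
is_good = [True, True, False, False, True, False]
```

Comparing `good_idea_loss(...)` at several cuts against a benchmark (e.g., no more than 15% of good ideas lost at a 25% cut) tells you whether a screening signal is good enough to deploy.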

The playbook: how to use the ISE curve – from theory to action

Earlier idea-screening models struggle to mimic expert choices in the real world. Managers need a transparent screening rule that scales, keeps the winners and can be governed. The ISE curve is theoretically grounded yet easy to apply in practice by following the steps below. Figure 2 highlights common pitfalls.

> Define your loss function

Decide up front how many good ideas you can afford to miss in exchange for speed and cost savings. Depending on the number of available experts and time restrictions, different businesses set different tolerances. Commoditized categories with known requirements can accept a slightly higher miss rate to speed up the process. Early, ambiguous categories should be conservative because you may not want to risk losing the next big thing.
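In practice, the loss function can be as simple as a per-category tolerance table. The categories and numbers below are purely illustrative, chosen to mirror the pattern described above (commoditized categories tolerate more misses, frontier ones less):

```python
# Hypothetical miss-rate tolerances: max share of good ideas
# the business is willing to lose, by category maturity
loss_tolerance = {
    "commoditized": 0.30,  # known requirements, speed matters most
    "core": 0.15,          # balanced
    "frontier": 0.05,      # conservative: the next big thing may hide here
}

def within_tolerance(category, expected_good_loss):
    """Check whether an expected loss of good ideas is acceptable."""
    return expected_good_loss <= loss_tolerance[category]
```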

> Use word atypicality as a pragmatic signal

Word atypicality compares the vocabulary of each idea to the contest’s overall word set. Ideas with low word atypicality, i.e., ideas that are well anchored, are rated higher by experts. Ideas that rely on idiosyncratic wording with little overlap are more likely to be misaligned or thin. Atypicality is easy to compute and explain.
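One way to compute such a measure is sketched below, under simple assumptions: whitespace tokenization, and a word's rarity taken as one minus its relative frequency in the contest corpus. The exact measure in the underlying research may differ:

```python
from collections import Counter

def word_atypicality(idea_text, contest_word_counts, total_words):
    """Average rarity of an idea's words relative to the contest-wide
    vocabulary; higher means more idiosyncratic wording."""
    words = idea_text.lower().split()
    if not words:
        return 1.0
    rarities = [1 - contest_word_counts.get(w, 0) / total_words for w in words]
    return sum(rarities) / len(rarities)

# Toy contest: the jargon-heavy second idea shares almost no vocabulary
# with the rest, so it scores as more atypical
ideas = [
    "app that recommends recipes from fridge contents",
    "quantum blockchain synergy paradigm",
    "app that suggests recipes from pantry items",
]
counts = Counter(w for idea in ideas for w in idea.lower().split())
total = sum(counts.values())
```

Because the signal is just word frequencies, it is cheap to compute over thousands of ideas and easy to explain to stakeholders.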

> Pick your operating point on the ISE curve

Use the ISE curve to set your initial cut. If time is tight and categories are mature, start around a moderate screen. If the domain is novel or reputational risk is high, start lighter.
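Once the ISE curve is estimated, picking the operating point can be automated: choose the most aggressive cut whose expected loss of good ideas still fits the tolerance. The curve values below are hypothetical placeholders:

```python
# Hypothetical ISE curve: (share of ideas screened out, share of good ideas lost)
ise_curve = [(0.10, 0.04), (0.25, 0.13), (0.50, 0.28), (0.75, 0.52)]

def pick_operating_point(curve, max_good_loss):
    """Most aggressive cut whose expected good-idea loss stays within tolerance;
    None if no point on the curve is feasible."""
    feasible = [pt for pt in curve if pt[1] <= max_good_loss]
    return max(feasible, key=lambda pt: pt[0]) if feasible else None
```

With a 15% tolerance this sketch picks the 25% cut; with a 30% tolerance, the 50% cut.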

> Track, learn and adjust

Whichever point you choose, log the decision and track the downstream effects over time: how many review hours were saved, and how many good ideas were missed. Avoid the “set-and-forget” trap: language evolves, teams change, and contest goals differ.
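A minimal decision log can be a few lines of code; the function and field names here are illustrative:

```python
import statistics

log = []  # one entry per contest

def record(contest, cut, hours_saved, good_ideas_missed):
    """Log the chosen cut and its observed downstream effects."""
    log.append({"contest": contest, "cut": cut,
                "hours_saved": hours_saved, "missed": good_ideas_missed})

def review():
    """Average outcomes across contests, to decide whether to adjust the cut."""
    return {"avg_hours_saved": statistics.mean(e["hours_saved"] for e in log),
            "avg_missed": statistics.mean(e["missed"] for e in log)}
```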

How to integrate the AI screen into the entire ideation workflow

The AI screen can easily be integrated into ideation contests to save substantial resources and time (Figure 3). A steering team starts by running a crowd ideation challenge, collecting ideas for a specific domain, which might yield 3,000 ideas. Traditionally, several experts would read them all over three weeks. Despite the high cost and long duration, the ratings are likely to be inconsistent, and patterns might be missed.

Instead, the team can use the suggested AI screen: they clarify their loss function and choose an initial ISE cut, say a moderate cut for core categories and a lighter cut for frontier tech. They then run the screen, and the system filters the initial idea pool. The resulting ideas can be assigned to focused shortlists for individual expert evaluation, with experts matched to ideas in their domain of expertise. Providing standardized evaluation categories increases transparency and helps track outcomes. With a reduced list, experts can spend time on deeper evaluation, debate individual ideas where necessary, and select winning ideas. This approach ensures that the loss of good ideas stays within tolerance, time-to-shortlist shrinks substantially, and expert satisfaction rises because reviewers spend their time on promising ideas rather than sifting through the entire pool.
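The screen-then-assign step of this workflow can be sketched end to end; the scores, cut and expert names below are placeholders, and the round-robin assignment stands in for matching by domain expertise:

```python
def screen_and_assign(ideas_scores, cut_fraction, experts):
    """Screen out the cut_fraction of ideas with the highest atypicality
    scores, then distribute the shortlist round-robin across experts."""
    ranked = sorted(ideas_scores, key=lambda x: x[1])  # low atypicality first
    keep_n = int(len(ranked) * (1 - cut_fraction))
    shortlist = ranked[:keep_n]
    assignments = {e: [] for e in experts}
    for i, (idea, _) in enumerate(shortlist):
        assignments[experts[i % len(experts)]].append(idea)
    return assignments

# Toy pool of four ideas with precomputed atypicality scores
ideas_scores = [("a", 0.2), ("b", 0.9), ("c", 0.1), ("d", 0.5)]
experts = ["expert_1", "expert_2"]
```

At a 50% cut this keeps the two best-anchored ideas and splits them between the two reviewers; the same function scales to thousands of ideas and dozens of experts.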


Intelligent selection wins

AI doesn’t replace human judgment in ideation but refines it. The ISE curve offers a transparent, data-driven way to manage idea overload without losing sight of strategic control. By pairing simple AI tools with managerial oversight, organizations can screen faster, decide smarter and focus human expertise where it matters most: recognizing the ideas that truly move the business forward.

FURTHER READINGS

Bell, J. J., Pescher, C., Tellis, G. J., & Füller, J. (2024). Can AI help in ideation? A theory-based model for idea screening in crowdsourcing contests. Marketing Science, 43(1), 54–72. https://doi.org/10.1287/mksc.2023.1434



