Michael Haenlein, David Ranftler and Paul Wesendonk (2026). Between Surveys and Depth: How AI Is Rewiring Qualitative Research. NIM Marketing Intelligence Review, 18(1), 54-59. https://doi.org/10.2478/nimmir-2026-0009
Between Surveys and Depth: How AI Is Rewiring Qualitative Research
AI is rapidly moving from a buzzword to an everyday tool in market research – especially in qualitative work, where “human understanding” was long seen as untouchable. Rather than replacing human researchers, startups like xelper are redefining their role, combining AI’s scalability and consistency with human interpretation and empathy. In this interview, guest editor Michael Haenlein speaks with xelper co-founders David Ranftler and Paul Wesendonk about AI-moderated interviews and AI-supported analysis. They explain why this method sits between classic in-depth interviews and surveys, where AI genuinely adds value, and where its limits remain. They share real-world cases from taste testing to churn research, address ethical concerns and look ahead to speech-to-speech AI models that might soon make qualitative interviews sound and feel entirely natural.
Michael: To start, could you briefly introduce your company and how the idea for its founding came about?
David: Paul and I founded xelper, a German-based AI company focused on qualitative market research. The idea started with the concept that an AI could moderate interviews, especially in UX research. Paul and I met years ago during a student exchange and stayed in touch. With our shared interest in startups and technology, we saw an opportunity to develop AI tools for qualitative interviews. Today, we run AI-moderated interviews, AI-based analysis of in-depth interviews and focus groups, and automated coding of open survey responses. Essentially, our solutions help researchers collect and analyze qualitative data at scale.
Michael: Many readers might wonder how AI-moderated interviews actually work. Can you compare them to traditional interviews or surveys?
David: We realized early on that AI-moderated interviews aren’t directly comparable to either classic in-depth interviews or quantitative surveys. They’re a new method sitting in between. Whereas you may only be able to do 20 to 30 in-depth interviews in a study, with AI, you can collect qualitative data from hundreds of participants. The method is ideal when a conventional survey would have more than five open-ended questions, as people are more engaged with an AI interviewer than filling in a survey. However, if a researcher needs to deeply probe with follow-ups, human-led interviews are still superior.
Michael: What about the analysis? How does AI perform compared to humans in interpreting qualitative data?
Paul: AI excels at aggregating what participants say and identifying patterns, but interpretation remains human work. For example, we can automatically locate all introductions across multiple interviews, which isn’t as easy as one might think. And on our platform, every AI output is traceable to the original quote, as transparency is key. AI helps researchers swim through their data more easily, but it can’t interpret meaning, define implications or tell a company what to do next. That still requires human judgment.
Michael: So, we’re really talking about a hybrid model – humans and AI together. Can you describe the ideal division of roles between human researchers and AI throughout a typical study?
David: I like to describe it as an “AI smiling curve.” Humans are most important at the beginning and end of a research process – defining objectives and interpreting results. In the middle, AI shines, as it can handle the heavy lifting: moderating interviews, aggregating data, coding responses. Ultimately, humans deliver the insights and recommendations. When it comes to persuasion, storytelling and decision-making, people trust human researchers or market research agencies more than AI.
Michael: Could you share some concrete success stories?
David: In analysis, the main success is efficiency. Some institutes tell us our platform cuts their reporting time by 50% to 80%. In AI-moderated interviews, we’ve seen great results in exit interviews after taste testing – for chocolate, coffee or butter, for instance. People describe not just what they taste, but the underlying jobs, needs, emotions, associations and texture expectations much more vividly in an interactive chat than they would in a survey. We’ve also integrated interviews on automotive websites to understand why visitors drop out of the buying process. And in B2B settings – for example, for telephony systems – clients have run hundreds of AI-moderated interviews instead of a few dozen traditional ones. Our measure of success is simple: Do clients come back for a second time? They do.
Paul: I can offer two other examples: We ran churn interviews to understand why users abandoned digital customer journeys. I initially doubted it would work, but we collected over 700 AI-moderated interviews, yielding anything from very short to very detailed responses that gave the client a granular picture of why users abandoned the process. Another case involved synthetic data perceptions. We interviewed 99 market researchers on their attitudes toward synthetic data, and over half attended the results workshop – a clear sign of engagement.
Michael: Speaking of synthetic data: “Silicon samples” are a hot topic. Could AI simply talk to AI and skip real people altogether?
David: Honestly, we don’t know, and we haven’t seen convincing proof that it works. For now, interviewing real humans remains essential. Even if you could generate synthetic respondents, you’d still need high-quality data to train them – and collecting that data could cost more than just asking people directly. In our experience so far, real conversations are still more efficient and reliable.
Michael: Cost and speed are often cited as benefits of AI. How do clients actually use the efficiency gains?
Paul: Often, our AI interviews are used as add-ons to existing studies. Agencies use them to extend reach or add depth without blowing the budget. Some clients reinvest the savings to increase sample size, such as doing 100 short AI interviews instead of 20 human ones. Others keep the gains as efficiency improvements. For agencies, it’s not just about cost; it’s also about differentiation. They can offer something innovative while still ensuring quality.
Michael: Are there areas where AI-moderated interviews don’t work well?
David: They generally work across industries, but there are a few exceptions. For example, we had to make adjustments to accommodate content filters around sensitive topics in the pharmaceutical sector. Very elderly participants – say, over 70 – may struggle with the technology. Then, in some highly specialized B2B contexts, such as interviews with doctors, where participants are very expensive to recruit, traditional interviews can still make more sense. Yet, AI-moderated interviews offer 24/7 flexibility and can happen any time the participant is available. And from an analysis perspective, there are also studies where how people speak matters more than what they say – for example, psychological or emotional research where pauses or tone carry meaning. AI can’t yet capture those subtleties. Some researchers feel they can “read between the lines” – notice a hesitation, a sigh, a pause – things AI won’t pick up. Those projects still need human moderation.
Michael: Let’s touch on ethics. AI interviews raise new concerns – especially as people form emotional bonds with AI bots. What ethical issues do you see?
Paul: Transparency is crucial. I once built an avatar of myself for a party that interviewed guests, and some genuinely thought they were chatting with me! That made me realize how important it is to make it crystal clear that participants are interacting with an AI. At the same time, this technology can reduce some traditional barriers. In sensitive areas such as politics, health or well-being, we see cases where participants are more comfortable sharing their views with an AI than with a human interviewer. But regardless, researchers must ensure participants understand who – or what – they’re speaking to.
Michael: When you explain your technology to clients, do they tend to overestimate or underestimate what it can do?
Paul: Both. Early on, many said, “Why do I need xelper? I can just use ChatGPT.” Then they tried it and discovered that generic tools struggle with tasks like processing large numbers of interviews systematically and securely and came back. Now clients are much more educated, but expectations evolve fast. What’s “amazing” today is standard in six weeks. Two years ago, people were amazed that an AI could chat. Now they expect it to speak flawlessly and transcribe perfectly. We’re developing at breakneck speed just to keep up, not only with technology, but with user expectations.
Michael: That sounds challenging. How do you manage those expectations?
David: It’s tricky from a UX perspective because users assume an AI can do anything they request in a chat box. Traditional software made it obvious that you needed to build a feature for each function. With AI, people expect magic. We’re working on designing interfaces and guidance that communicate boundaries clearly while keeping the experience fluid. It’s like expecting Excel to translate your entire table into Chinese just because you typed the request. We need to guide users gently back to reality.
Michael: Looking ahead one or two years, what technological development will have the biggest impact on your work?
Paul: Speech-to-speech models are very promising, especially as latency improves, but text-based interviews remain crucial for market research. Text chat tends to drive higher participation rates and more representative samples, since many respondents prefer typing over speaking – and in our studies, only a small minority choose voice. Rather than replacing text, speech will complement it. Its biggest impact will likely be in areas like website walkthroughs and usability testing, where speaking feels more natural and can yield richer, in-the-moment feedback. And of course, if a breakthrough occurs in synthetic respondents or general AI – AGI, the kind that truly understands – it would change everything. But that’s speculative. For now, our focus is on making real human-AI conversations smoother and more natural.
Michael: If you could summarize the main value AI brings to qualitative research, what would it be?
David: Two things: scale and consistency. You can reach hundreds of people with the same level of structure and tone – something impossible for human interviewers. At the same time, you get richer data than a survey, because participants write freely, engage more and stay in the flow. AI doesn’t replace human researchers; it frees them from repetitive work so they can focus on interpretation and strategy.
Michael: Finally, how do you see AI-driven research evolving as a field?
Paul: We think AI will become an integral part of research workflows, not a novelty. In two years, “AI-assisted interviews” will be as normal as online surveys are today. The key will be finding the right balance between automation and authenticity – keeping human curiosity at the center while letting AI handle the scale and speed.
Michael: Thank you both very much for this insightful conversation and for sharing your experiences, reflections and outlook for the future of market research. It has been a pleasure discussing how your work at xelper is helping to rethink what “qual at scale” can look like.