Wondering is an AI-led user research platform purpose-built to help companies build better products by scaling their user research. We’ve developed a new user research methodology, AI-led user interviewing, in which an AI system moderates text- and voice-based user interviews with participants and analyzes the transcripts for insights.
In our internal benchmarking study, we compare an expert-level user researcher with Wondering's AI-led user interviewing product, focusing on their abilities to extract insights from interviews with research participants recruited from a panel. We find that our AI-led user interviewing system can capture and identify nearly as many actionable insights (97%) as an expert-level user researcher when interviewing participants about a recent experience (grocery shopping). Moreover, we find that there is significant overlap in the insights identified by the researcher and the AI system (40% of all insights identified), and that the AI system also captures and identifies additional insights (29% of all insights identified) that the user researcher misses in their analysis. Finally, we find that the AI system can complete and analyze user interviews 12x faster than a user researcher.
In our study, participants recruited from a research panel were interviewed about their most recent grocery shopping experience. Each participant was interviewed twice, once by an expert-level user researcher and once through an AI-led user interview. Half of the participants were first interviewed by the user researcher, and the other half were first interviewed through the AI-led user interview.
After completing the user interviews, the user researcher analyzed the interviews they had conducted for insights into how each participant’s most recent grocery shopping experience could have been improved, creating a list of insights (Researcher Insights). The AI-led user interviews were automatically analyzed through Wondering’s AI analysis product, creating a separate list of insights (AI Insights). Neither the user researcher nor Wondering’s AI analysis product was exposed to the contents of the user interviews conducted by the other.
A second user researcher, who had not seen the contents of any of the interviews, then evaluated the two lists to count the insights identified in each. This second user researcher reviewed each insight in both the Researcher Insights and the AI Insights, and disqualified any they deemed irrelevant. In both lists, all insights were deemed relevant.
Across the Researcher Insights and AI Insights, 65 distinct insights were identified. The Researcher Insights contained 46 insights and the AI Insights contained 45, meaning the AI system captured nearly as many insights (97%) as the user researcher.
The AI system captured 97% as many relevant insights as the user researcher.
26 of the insights (40% of all insights) were found in both the Researcher Insights and the AI Insights, meaning the AI Insights contained 56% of the insights captured in the Researcher Insights. An additional 19 of the insights (29% of all insights) were found in the AI Insights but not in the Researcher Insights. 20 of the insights (31% of all insights) were found in the Researcher Insights but not the AI Insights.
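The overlap arithmetic above can be sanity-checked with a short snippet. The counts are taken directly from the study; the variable names are illustrative:

```python
# Insight counts reported in the study (variable names are illustrative)
both_found = 26        # identified by both the researcher and the AI
ai_only = 19           # identified only in the AI Insights
researcher_only = 20   # identified only in the Researcher Insights

total = both_found + ai_only + researcher_only   # 65 distinct insights
researcher_total = both_found + researcher_only  # 46 Researcher Insights
ai_total = both_found + ai_only                  # 45 AI Insights

print(f"Overlap:         {both_found / total:.0%} of all insights")       # 40%
print(f"AI only:         {ai_only / total:.0%} of all insights")          # 29%
print(f"Researcher only: {researcher_only / total:.0%} of all insights")  # 31%
```

The three shares sum to 100% of the 65 distinct insights, and the per-list totals (46 and 45) fall out of the same three counts.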
40% of all insights were captured by both the AI and the researcher, and 29% of all insights were captured only by the AI.
To better understand how long it took to capture the Researcher Insights and the AI Insights, we compared the time-to-insight for the researcher-led user interviews and analysis with that of the AI-led user interviews and analysis. In this study, we defined time-to-insight as the time between finalizing the discussion guide for the user interviews and finalizing the list of Researcher Insights or AI Insights. The time-to-insight for the Researcher Insights was 72 hours, compared to 6 hours for the AI Insights. This means the time-to-insight was 12x shorter for the AI Insights, highlighting the shortened research cycle that can be achieved by leveraging AI-led user interviewing and analysis.
The time-to-insight for the AI system was 12x shorter than for the user researcher.
So, how does this impact user research more broadly? By some estimates, software engineering teams spend roughly half of their time reworking avoidable product mistakes that fail user acceptance. Yet most companies don’t conduct any user research at all, and even fewer do it well. AI-led user research lowers the barrier for teams to start incorporating user insights and testing into the product development process. At the same time, it enables expert-level user research teams to accomplish more in less time, lowering the cost and risk of product development and increasing the chances you’ll build a product customers love.
As you adopt AI-led research methods in your own research, it's worth keeping current limitations in mind. In this study, we compared how AI-led user interviewing performs against a user researcher on two specific tasks: interviewing users on a predefined topic and analyzing each interview for insights. The user research process of course involves many other tasks, including gathering data through other methodologies, other forms of analysis, deciding what to research and how to structure research projects, storytelling, and communicating and negotiating with stakeholders. It’s important to remember that AI-led user interviewing is a methodology that can extend user research and product teams, helping them scale their research and have more impact, but it cannot replace them.
We hope these findings inspire researchers and product development teams to integrate AI-led research methods into their ongoing research efforts to drive even more impact, regain time, and scale their research and product discovery programs. Check out Wondering for free here to try AI-led user interviews in your own user research.