Wondering is an AI-driven experience research platform purpose-built to help organizations understand their users and markets to build better user experiences. AI Answers, our newest AI-powered analysis tool, automates the process of analyzing user interviews or survey responses to directly answer specific research questions, and backs up each conclusion with evidence from participants’ responses.
To better understand how researchers can use this tool in their research, we completed a benchmarking study to compare how quickly and effectively an expert-level human researcher and AI Answers are able to answer research questions by analyzing responses collected in an AI-moderated study on Wondering.
In this benchmarking study, we found that the AI Answers tool:
- Produces similarly accurate and evidence-backed answers to research questions as an expert-level human researcher.
- Reduces “time-to-insight” (the time it takes to extract insights from response data) by over 68x compared to an expert-level human researcher.

Study overview

In this study, participants from Wondering’s panel were interviewed through an AI-moderated study on Wondering, producing a set of participant responses. All participants answered the same set of questions regarding their product usage or experience.
The study focused on evaluating the desirability of a new product feature for the popular running app Strava.
After the data was collected, we then asked an expert-level human qualitative researcher to:
- Conduct a thorough analysis of the participant responses.
- Compile a written report that explicitly answers the two research questions.
- Provide supporting evidence for each answer from the participant responses.

We also used Wondering’s AI Answers tool to review the same response data and produce answers to the research questions, by asking it to:

- Automatically analyze the same set of participant responses.
- Generate a written report that answers the same two research questions.
- Provide references and quotes from the responses as evidence to back up each conclusion.

During this analysis, neither the human researcher nor the AI Answers tool was exposed to the other’s analysis.
Finally, a second human researcher (who had not been involved in either analysis) compared the two resulting reports, and looked at whether each report met the following criteria:
- Answered both research questions directly.
- Answered both research questions accurately.
- Provided a clear rationale connecting evidence to conclusions.
- Used only relevant supporting evidence to support their answers.
- Captured most (more than 90%) of the relevant supporting evidence that pertained to the questions.

Results: AI Answers matches human-level analysis, but is more than 68x faster

Both the expert human researcher and Wondering’s AI Answers arrived at similarly valid, detailed answers to each of the two research questions, delivering similarly accurate reports. Each final analysis report contained:
- Clear, direct answers to the research questions.
- Explanations and reasoning connecting each conclusion to the participant responses.
- Evidence or direct participant quotes to back up and illustrate the analysis.

Each analysis was thorough, addressing the key questions directly and supporting conclusions with specific participant quotes or data excerpts. Both the human researcher and AI Answers identified the same major themes in the dataset and provided similar takeaways.

In short, AI Answers matched the human researcher in providing useful, well-structured, evidence-backed answers to the research questions in this benchmarking study.
The time taken to complete the analysis, referred to as “time-to-insight”, was drastically shorter for AI Answers:
- Time-to-insight (human researcher): 9 hours and 45 minutes
- Time-to-insight (AI Answers): 8 minutes and 36 seconds

AI Answers completed the analysis over 68 times faster than the human expert.
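As a quick sanity check on the headline figure, the speedup can be recomputed from the two reported durations (a minimal sketch; the variable names are ours, not part of the study):

```python
# Recompute the reported speedup from the two measured durations.
human_minutes = 9 * 60 + 45   # 9 hours 45 minutes = 585 minutes
ai_minutes = 8 + 36 / 60      # 8 minutes 36 seconds = 8.6 minutes

speedup = human_minutes / ai_minutes
print(f"{speedup:.1f}x")      # prints "68.0x", i.e. over 68 times faster
```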
Implications for qualitative researchers

This drastic reduction in effort and turnaround time has the potential to help researchers scale their research process and deliver more impactful research that can help organizations make more informed decisions.
As with our previous study on AI-moderated user interviewing, these findings underscore how AI-driven analysis can speed up the process of conducting high-quality experience research. With AI Answers:

- Teams can scale experience research by rapidly analyzing feedback without dedicating days or weeks to manual data synthesis.
- Organizations can iterate on hypotheses and make data-informed decisions faster, reducing the risk of building experiences that miss user needs.
- Expert researchers can get time back to focus on empowering more teams with the insights they need and championing research across their organization.

Limitations & considerations

We share these results to share what we learn about how researchers can effectively use AI-powered research tools like Wondering to improve their research practices. This study was part of our internal benchmarking, and further large-scale, peer-reviewed studies should be conducted to verify these results.
It’s also important to note that the assessment of the quality and validity of the analysis produced by both the human researcher and the AI Answers tool in this benchmarking study was, to a large degree, a subjective judgment by the second human researcher.
The results of this study do not mean that the AI Answers tool automates the role of the human researcher, who has responsibilities far beyond just analyzing study data. However, it does mean that AI-powered tools like AI Answers can be used by researchers to speed up their research process without compromising on control or the quality of insights extracted.
AI Answers offers significant time savings, but human expertise remains invaluable for strategic decision-making, cross-functional alignment, and building research maturity within an organization.
To test out AI Answers, customers on our Scale plan can now view generated AI Answers on the “Overview” page for any study.