Before we dive in, I’d like to personally thank each of you for sharing your feedback with us. Everything we build and ship is co-created through conversations with you—all the researchers, designers, and builders who use Wondering’s AI in your own customer research.
Without further ado, here are the biggest releases you’ll have seen (and some you may have missed) in your Wondering account this year:
Turn responses into insight with AI-powered thematic analysis
Wondering's new AI Analysis capabilities help you automatically apply highlights to your response data from interviews, prototype tests, live website tests, and surveys. We then analyze each of these highlights to uncover thematic groups, before further synthesizing this data into actionable findings that help you interpret what's important in your response data.
Learn more about AI analysis here.
Create research studies faster with our AI Study Builder
This year we introduced the AI Study Builder, which helps you quickly get started by generating new studies based on your research goals. Simply define your research goal, and Wondering's AI will create a draft study that helps you gather the user insights that get you closer to your goal.
Learn more about the AI Study Builder here.
Interview your customers in their native language with AI
We launched a new way to interview your users in whichever language they are most comfortable with: real-time AI moderation in 50+ languages. We'll automatically detect which language your participants speak based on their settings.
Each question in your AI-led interview will be displayed in the language that comes most naturally to your participant, as we’ll use state-of-the-art large language models that translate each question on the fly. This ensures that participants can provide genuine, authentic and unfiltered responses no matter how fluent they are in the English language, which is crucial for the integrity of your research.
Learn more about multilingual studies here.
Let AI moderate your prototype tests
We introduced the prototype testing task, designed to help product designers test high- and low-fidelity prototypes built in Figma. With the prototype task block in the Wondering study builder, you can now seamlessly import any Figma prototype and deploy an AI-moderated usability testing study to your own customers or our global panel of 200K+ participants in minutes.
Whether you're looking to test specific user journeys, measure the overall usability of your prototype, or gather feedback on a new product concept, it's now easier than ever to gather actionable insights from real users, in almost any language, about how to improve your designs.
Learn more about AI-moderated prototype testing here.
Get feedback on any live website
Towards the end of this year we also launched Live Website Tests, which allow you to show your participants any website, such as your landing pages or a competitor's website. By combining Live Website Tests with other blocks in your study, you can then ask your participants questions to better understand how they experience those websites.
Learn more about Live Website Testing here.
Test static designs with AI-moderated image tests
Design Tests allow you to display static images, such as designs of your website or mobile app, and then capture feedback on them through our AI-led user interviews. Our AI-led moderator will formulate questions for your participants based on your research goals, helping you understand their reactions to and understanding of the design. As participants leave their responses, our purpose-built AI helps you dig deeper by dynamically asking tailored follow-up questions based on what they answer.
Learn more about image testing here.
That’s all for this year! We’re now gearing up for an even bigger year next year, with heaps of exciting updates coming to Wondering to help you understand your customers at scale with the help of AI.
Happy researching and see you in 2025!