As product designers, builders and researchers we all care about one thing: building useful, usable and impactful products and services that people love to use. Generative AI presents an unprecedented opportunity for us to reconsider how we do this, and a chance to say goodbye to some of the habits and workflows that slow down our research and don’t serve us well. From empowering you to scale your research programme with AI-powered studies that gather data while you sleep, to multilingual translations that help you learn across language barriers, to powerful AI analysis that helps you make sense of your study data, generative AI can help you understand your customers and empower your team to build products that people love.
At Wondering, we’re building the AI-first user insights platform that helps you capture this opportunity, by reimagining how we all can capture user insights and turn them into decisions that propel our products forward.
But, like any new technology, generative AI also brings with it new considerations that we need to take into account as we build the first AI-first user insights platform. Here we’ll outline our approach to developing a safe, private and compliant AI-first platform that helps you scale your research, and some of the specific things we’re doing to make sure you can use Wondering’s AI-first research methods safely in your own research.
How can I be sure that AI-generated questions on Wondering are of good quality?

We've put safeguards in place to prevent harmful AI generations in your studies. Firstly, we've designed our product to prevent users from directly manipulating the tasks that are executed by the AI models Wondering uses to conduct AI-led studies.
Wondering's AI is designed to follow best practices when generating studies, asking participants questions and completing analysis. This includes, but isn't limited to, best practices like:
- Asking questions in clear language.
- Only asking one question at a time.
- Avoiding leading questions.
- Avoiding biased questions.

To keep AI-generated questions relevant, we also instruct the AI to only ask questions that relate to the study you create, and not to deviate from the instructions you give in each study block. You also have full control over the depth to which you want Wondering to ask follow-up questions for each study block.
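To make this more concrete, here is a minimal, hypothetical sketch of how question-quality guidelines and a per-block follow-up depth limit could be encoded as fixed instructions to a model via the OpenAI API. It illustrates the general technique rather than Wondering's actual implementation; the model choice, prompt wording, function name and parameters are assumptions.

```python
# Illustrative sketch only -- not Wondering's actual implementation.
# Shows how question-quality guidelines and a researcher-controlled
# follow-up depth limit could be encoded as fixed system instructions,
# kept separate from participant input.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION_GUIDELINES = (
    "You are conducting a user research interview. "
    "Ask questions in clear language, ask only one question at a time, "
    "and avoid leading or biased questions. "
    "Only ask questions relevant to the study goal and the current study block. "
    "Do not deviate from the researcher's instructions."
)

def generate_follow_up(study_goal: str, block_instructions: str,
                       transcript: list[dict], max_follow_ups: int) -> str | None:
    """Generate one follow-up question, or stop once the depth limit is reached."""
    follow_ups_asked = sum(1 for turn in transcript if turn["role"] == "assistant")
    if follow_ups_asked >= max_follow_ups:
        return None  # researcher-controlled depth limit for this study block

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": QUESTION_GUIDELINES},
            {"role": "system", "content": f"Study goal: {study_goal}\n"
                                          f"Block instructions: {block_instructions}"},
            # Participant answers are passed only as user messages, never as
            # instructions, so they cannot override the system prompts above.
            *transcript,
        ],
    )
    return response.choices[0].message.content
```

Keeping the researcher's instructions in system messages, and participant answers only in user messages, is what makes it hard for a participant to alter the tasks the model is asked to carry out.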
To further ensure that Wondering's AI-generated follow-up questions are of good quality, we also instruct the AI to:
- Always remain polite, and always formulate questions in a friendly tone.
- Never answer questions from the participant, or repeat what they say verbatim.
- Block any responses that are in conflict with OpenAI’s usage policies.

It's worth noting that AI models like the ones powering Wondering are still a new technology, and it's not always possible to guarantee that the questions they return are formulated exactly the way you intend.
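As an illustration of how responses can be checked against OpenAI's usage policies, the sketch below uses OpenAI's Moderation API to screen a participant's answer before it is passed on. This is a hypothetical example assuming the standard OpenAI Python SDK, not a description of Wondering's internal pipeline.

```python
# Minimal, hypothetical sketch of screening participant input with OpenAI's
# Moderation API before it is passed to the interview model.
from openai import OpenAI

client = OpenAI()

def is_safe(text: str) -> bool:
    """Return False if the text is flagged against OpenAI's usage policies."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not result.results[0].flagged

participant_answer = "I found the checkout flow confusing on mobile."
if is_safe(participant_answer):
    print("Response accepted; continue with the next follow-up question.")
else:
    print("Response blocked; it will not be sent to the model.")
```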
To get a good understanding of how questions are formulated in your study, we recommend that you preview your study a few times using the Preview functionality in the study builder.
Your data is not used to train AI models

We use OpenAI, the market leader, as our AI provider. When you use AI in the Wondering platform, your data is not used to train OpenAI's models. For more details on how OpenAI handles your data, see their Enterprise Privacy page.
We do internal testing as part of our development practice

We're committed to continuously making sure that we build AI-first research methods that are safe and can be used in a responsible and compliant way. As part of our development process, we continually test our product to identify and prevent ways in which our AI systems could produce unsafe output.
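As a purely hypothetical illustration of what this kind of testing can look like, the sketch below uses pytest to check that generated follow-up questions stay within the guidelines above. The `generate_follow_up` and `is_safe` helpers, and the `interview` module they are imported from, are the illustrative sketches earlier in this article, not Wondering's internal test suite.

```python
# Hypothetical example of an automated safety check run during development.
# `interview` is an assumed module containing the generate_follow_up and
# is_safe sketches shown earlier.
import pytest

from interview import generate_follow_up, is_safe

SAMPLE_TRANSCRIPTS = [
    [{"role": "user", "content": "I mostly use the app to split bills with friends."}],
    [{"role": "user", "content": "Signing up took longer than I expected."}],
]

@pytest.mark.parametrize("transcript", SAMPLE_TRANSCRIPTS)
def test_follow_up_is_safe_and_single_question(transcript):
    question = generate_follow_up(
        study_goal="Understand how people use peer-to-peer payments",
        block_instructions="Explore the participant's most recent payment experience",
        transcript=transcript,
        max_follow_ups=3,
    )
    assert question is not None
    assert question.count("?") == 1  # one question at a time
    assert is_safe(question)         # nothing that conflicts with usage policies
```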
We design our AI features to be understandable and transparent

To make it easy for you to control AI generations (such as AI-generated follow-up questions, studies and study results), we aim to design our AI features in a way that is transparent and easy to understand. This includes allowing you to guide AI-generated follow-up questions in your studies, previewing and testing how AI-generated questions appear to your participants, and displaying which data points in your study data underpin the AI-generated themes we display to you in your study results.
What compliance standards does Wondering meet?

Wondering is SOC 2 Type II compliant. We're also committed to complying with the GDPR framework. For more details on our GDPR commitment, read this guide. You can also read more in our Trust Centre.