GroupSolver Evaluation
KJT Group is dedicated to superior quality and to developing impactful insights for our clients. Before we recommend new technologies, we often test them in real-world scenarios to better understand their strengths and limitations. We recently conducted a test of GroupSolver’s open-end technology. GroupSolver touts this artificial intelligence approach by highlighting three key benefits:
- Crowd Intelligence: Respondents answer open-ended questions in their own words, then collaborate by evaluating one another’s answers.
- Machine Learning: Answers are processed by a dynamic and self-calibrating algorithm.
- Advanced Statistics: The platform validates natural language answers and quantifies qualitative insights.
Natural language answers are recorded unaided. Respondents then evaluate answers provided by other respondents, and answers are scored based on their overall level of support. We sought to understand what benefits this approach might offer in a real-world application.
To test the technology, we developed a 5-minute survey instrument. Respondents answered two questions, one programmed as an AI OE (GroupSolver technology) and the other as a standard text box. The formats were randomized and the sample was split so that an equal number of respondents completed each question as an AI OE and as a text box.
- Question A was: “We’d like you to think about the healthcare facility where you work most often. What main challenges will your organization face in the next three years? Please be as specific as possible.”
- Question B was: “We will switch gears and ask you about your participation in market research studies. For what reasons do you participate in market research studies? Again, please be as specific as possible.”
Question A was always shown before Question B.
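To make the split-sample design concrete, below is a minimal sketch in Python of how such a balanced assignment could be generated. The function name, respondent IDs, and seed are illustrative assumptions; in practice, this allocation is handled within the survey platform rather than in standalone code.

```python
import random

def assign_formats(respondent_ids, seed=42):
    """Hypothetical balanced assignment for the crossover design.

    Group 1 sees Question A as an AI OE and Question B as a text box;
    Group 2 sees Question A as a text box and Question B as an AI OE.
    Question A is always shown before Question B in both groups.
    """
    ids = list(respondent_ids)
    random.Random(seed).shuffle(ids)  # randomize who lands in which group
    half = len(ids) // 2
    assignments = {}
    for rid in ids[:half]:
        assignments[rid] = {"Question A": "AI OE", "Question B": "Text box"}
    for rid in ids[half:]:
        assignments[rid] = {"Question A": "Text box", "Question B": "AI OE"}
    return assignments

# Example: 100 hypothetical respondent IDs, matching the study's sample size.
assignments = assign_formats(range(1, 101))
```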
KJT Group recruited 100 physicians (primary care physicians, pediatricians, and OB/GYNs) from its proprietary healthcare panel to complete the survey. We sought to address three main questions through the survey design:
- Does GroupSolver’s AI Open-End (AI OE) collect better quality answers from physicians compared to standard text boxes (Text box)?
- Do physicians prefer using GroupSolver’s AI OE compared to Text boxes?
- Does GroupSolver’s AI OE provide better quality insights from physicians compared to Text boxes?
We analyzed verbatim length and qualitative quality. Both methodologies yielded answers of similar quality; the AI OE did not increase the quality of the verbatims available for analysis. We hypothesize that answer quality may improve over time as physicians become more familiar with the question design; for a one-off question, however, it had no impact on answer quality.
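As an illustration of the verbatim-length comparison, the sketch below computes the average word count per answer for each format. The answers shown are hypothetical placeholders, not data from the study.

```python
from statistics import mean

def mean_word_count(verbatims):
    """Average number of words per open-ended answer."""
    return mean(len(answer.split()) for answer in verbatims)

# Hypothetical example answers; the study's actual verbatims are not reproduced here.
ai_oe_answers = ["Staffing shortages and clinician burnout", "Reimbursement pressure from payers"]
text_box_answers = ["Keeping up with EHR requirements", "Hiring and retaining nursing staff"]

print("AI OE mean words:", mean_word_count(ai_oe_answers))
print("Text box mean words:", mean_word_count(text_box_answers))
```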
About 60% of physicians preferred answering an AI OE compared to a Text box. Physicians who preferred the AI OE most often said this was because they could see what others are concerned about and because making selections is easier than typing out answers.
Evaluating insight quality is subjective; however, the AI OE provides a different measurement than a text box, and each has its own purpose. AI OEs provide the ability to gauge overall support (consensus) for ideas in an area where you do not have enough information to build a fully comprehensive aided list. Text boxes measure unaided (top-of-mind) response. GroupSolver’s analytics also capture the top-of-mind response, which allows you to get the best of both approaches.
GroupSolver’s technology extends the traditional open-ended text box, which nearly all quantitative surveys include.
It is best applied when you would like to quantify consensus around a topic for which you cannot reliably create a closed-ended list. In more exploratory quantitative research, this technology may help provide a broader market assessment by surfacing areas you may not have expected. It also reduces coder bias because top ideas are already defined, rather than requiring a researcher to determine which ideas fit in which categories to gauge overall support, as with traditional coding. GroupSolver’s output also supports additional analysis of subgroup trends, as sketched below.
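As a rough illustration of that kind of subgroup analysis, the sketch below averages support scores by specialty and statement. The record layout (specialty, statement, support) is an assumption for illustration; the actual GroupSolver export format may differ.

```python
from collections import defaultdict

def support_by_subgroup(records):
    """Average support score per statement within each specialty subgroup.

    Assumes each record is a dict with "specialty", "statement", and a
    0-1 "support" score; the real export structure may differ.
    """
    scores = defaultdict(list)
    for r in records:
        scores[(r["specialty"], r["statement"])].append(r["support"])
    return {key: sum(vals) / len(vals) for key, vals in scores.items()}

# Hypothetical records for illustration only.
records = [
    {"specialty": "PCP", "statement": "Staffing shortages", "support": 0.82},
    {"specialty": "PCP", "statement": "Reimbursement pressure", "support": 0.64},
    {"specialty": "OB/GYN", "statement": "Staffing shortages", "support": 0.71},
]
print(support_by_subgroup(records))
```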
No technology is perfect, and GroupSolver’s is no different. There is an increased investment required to use the platform, along with increased programming complexity (KJT Group programs its surveys using Decipher, so respondents would be redirected to GroupSolver’s platform). Additionally, GroupSolver’s front-end AI will split a verbatim containing multiple ideas into multiple verbatims for analysis; however, when the AI cannot pick up the multiple ideas, the back-end system cannot separate them into discrete ideas for consensus building. In that case, GroupSolver’s technology gauges agreement with both ideas in a single rating, muddying the waters. In our testing, this occurred only a handful of times, but it is important to keep in mind.
Overall, in situations where large sample sizes are important (e.g., where subgroup evaluation or representativeness is critical), GroupSolver allows you to quantify qualitative data faster and more reliably than traditional methods.
For more information about our technology offerings or integrating GroupSolver technology into your survey, contact us to learn more.