We’re back from CRC with some key nuggets for you. With nearly 1,000 attendees descending on the Marriott Marquis Chicago, this was an action-packed conference.
Key takeaways:
1. Data quality can’t be assumed anymore – verify, then trust.
- According to the Global Data Quality initiative, ~20% of completes are removed for fraud or poor data quality.
- Every survey should use secure links to eliminate “ghost” completes. Surprisingly, not everyone does.
- DIY users face a lot of pressure because they have to weed out all the bad responses themselves, often with inadequate tools. “Everyone should be using third-party tech to spot fraud you can’t readily see.” – Bob Graff, Marketvision Research
VF’S TAKE:
How many DIY users are using third-party tech? Probably very few. We have a tool that detects bots and fraud before someone enters a survey, another that detects them among respondents completing a survey, extra open ends to spot fraud, and human review of every response. But not every agency does that, much less DIY users.
2. Synthetic data can approximate human respondents fairly accurately on some very basic survey questions – but it’s not clear why this is a good use case for AI.
- Qualtrics presented experiments in which they trained AI models to take surveys as synthetic respondents, showing the models do well on basic agreement statements where the right answer is painfully obvious, but poorly on multiple-select questions. Left unsaid: Why are they even doing this?
- The expert panel on synthetic data at CRC argued this isn’t even the right goal: it doesn’t matter whether AI can simulate human responses at a point in time; it’s only useful if it’s consistent over time.
- Synthetic survey data may not save money because “every time a new model comes out, like GPT-4 to GPT-5, you have to re-do all your training and validation.” – Matthew Seitz, Director, AI Hub for Business, University of Wisconsin-Madison
- Synthetic data is better used where traditional surveys can’t be, e.g. where sample doesn’t exist, or to extend learnings in an area where you already have significant human data. “Find a low-stakes use case, an ‘edge case,’ or one that you need results in 2 days and this is your only option,” says Seitz.
VF’S TAKE:
Synthetic data for surveys is putting the cart before the horse—first we need to identify why this is important and for what use cases.
3. In-person qualitative research is evolving to become more immersive and participatory—perhaps as a reaction to the rise of AI moderation.
- Mattel presented a co-creation project they led with families of children with disabilities and pediatric disability experts to uncover actionable insights for inclusive toy design.
- The process was empathy-led, community-driven, and purpose-built. The resulting products, like Brave Barbie, Color Blind Uno, and Accessible RC, weren’t just inclusive; they were emotionally resonant and fresh innovation all in one.
- It was refreshing to see a compelling and immersive method that didn’t rely on AI—just real people solving real problems together. Result: A master class in how thoughtful design can unlock brand relevance and social impact.
VF’S TAKE:
We believe the future of in-person qual is immersive and participatory.

“With the rise of AI, human qual is becoming more immersive, experiential, and participatory. Rather than subjects to be studied, consumers are becoming active agents in co-creating insights and solutions.
For me, this means an even greater need for our arts-based and behavioral techniques that make qual more engaging and dynamic—powerful tools that can reveal unexpected, ‘invisible’ insights that traditional methods might miss.”
– Ivey Crespo, Vice President, Qualitative
Read more about our QualLab here.
4. AI Moderation/Qual-at-scale platforms were everywhere.
- 6 platforms exhibited, and 2 stealth-exhibited (giving private demos outside the exhibit hall).
- It was a sea of sameness, making it extremely difficult to differentiate between the offerings. We still believe Listen Labs and Outset.ai are the ones to beat in this market, but we were impressed with the new platform Conveo. Overall, our impression was that these platforms are still in the early stages of educating the market on what this category even is.
VF’S TAKE:
These platforms are mostly positioned as DIY, which works for smaller usability projects, but for true qual-at-scale, we believe corporate researchers would be served best by working with an agency. We unveiled our own full-service qual-at-scale offering, NarrativeFusion, and will present it with Microsoft at The Market Research Event in Las Vegas.
5. AI Personas show promise, but are still in the pilot phase.
- DoorDash is using AI personas to hone research instruments and whittle down stimulus before doing IDIs with real-life Dashers.
- According to Priya Kothari, Head of Dasher Research, Dashers are by definition difficult to reach: “I’ve never seen a no-show rate for any group that’s as high as for these guys.” She thinks AI personas could one day provide a solution for some types of research. Right now, they’re mostly used for pre-testing within the Dasher research team.
VF’S TAKE:
We think AI personas in their current form aren’t quite ready for widespread use, but of all the use cases for synthetic data, we believe they show the most promise. We’ve been investigating AI personas for the past year—if you’re interested in learning more about our findings, reach out to our Chief Research Officer Jason Kramer.
We hope to see you at The Market Research Event, where we’re presenting with Microsoft on how we’ve cracked the code on large, global qual-at-scale ad testing.


Not attending TMRE?
Reach out to beth.oshaughnessy@vitalfindings.com for a custom presentation to your team!