
The Experience Strategy Podcast: AI Twins and the Future of Research
Episode Overview
Two Wall Street Journal articles are making waves in the market research world — one asking whether AI can replace human research participants, and another profiling a teenage-founded startup called Aura that's already attracted McDonald's and EY. Dave, Joe, and Aransas bring their combined decades of consumer research experience to the question everyone in insights is quietly asking: is this the end of primary research, or the beginning of something more powerful?
What We Cover
The two WSJ articles at the center of this conversation
The first covers Simile, a startup building agentic AI twins modeled on real people for polling and market research. The second profiles Aura, a company founded by people younger than Aransas's high schooler, betting that AI bots can predict human behavior better than humans themselves.
Dave's evolving reaction: worry, skepticism, and then possibility
His first instinct was worry. Stone Mantel has built its practice on deep consumer research, and the promise of AI twins that can answer with 0.5% accuracy at first felt wrong. But the more he sat with it, the more he saw a useful analogy: flight simulators. Simulators serve a real purpose as long as everyone is clear they are not the same as flying the actual plane.
The critical flaw in current AI twin models
Both Dave and Joe land on the same problem independently: AI twins are built on static preferences and demographic profiles. They treat people as if behavior is fixed — "this is how soccer moms respond" — when the entire premise of situational research is that behavior shifts with context. What mode is the person in? What situation are they navigating? Those questions are not being asked. Joe puts it plainly: "they didn't ask anything about modes."
Where AI twins might actually work well
Trend prediction and aggregate market analysis are reasonable use cases. If you want to know whether fruit-flavored tea is about to have a moment, AI models scanning historical purchasing data and cultural signals can probably get you there. The harder problem — and the more valuable one — is understanding what a specific person cares about in a specific moment, and that requires something current AI twins are not equipped to provide.
What AI twins could become with better design
Dave raises an intriguing possibility: after completing primary research with a real consumer, could that data become the seed for ongoing simulation and modeling? Not as a replacement for the research, but as a way to extend its value across time and decisions. He also flags the bias risk — every feedback loop that improves AI accuracy may also drift it further from the original human signal.
Joe's Wall-E scenario
The Terminator isn't Joe's fear. Wall-E is. Personal language models hanging out in your Alexa, learning everything you say and do, eventually making purchasing decisions on your behalf — and research shifting to focus on the PLM rather than the person. The result: consumers with no agency, led entirely by AI intermediaries and the consumer goods companies they serve.
The consent problem
CBS claimed 400,000 people opted in to being replicated as AI twins. Aransas is skeptical — and direct: "That was some very fine print." Companies building AI twin programs need to be serious about how they are collecting this data, not just technically compliant.
Key Idea
"If AI can actually predict behavior change, it is no longer a tool — it is strategy." That quote, attributed to a Coca-Cola executive in the second article, captures what is at stake. Dave frames it through the lens of superpowers: AI gives companies the ability to do things they could not do otherwise. The question is whether the thing they are doing actually reflects how real humans behave.
Continue the Conversation
Join Dave, Joe, and Aransas on The Experience Strategist Substack to go deeper on this episode's themes.
