Scaling Laws

The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs

Feb 10, 2026
David Rand, Cornell professor studying misinformation and AI influence, discusses how accuracy nudges curb the sharing of misinformation and how conversational AI can durably reduce conspiracy beliefs. He describes experiments in which chatbots shifted voter preferences, explains why factual-sounding claims are persuasive, and weighs the risks of AI bot swarms and training-data biases, along with the need for transparency.
INSIGHT

Conspiracies Linked To Overconfidence

  • Conspiracy beliefs correlate with overconfidence and low analytic thinking rather than pure malicious intent.
  • Prompting people to deliberate reduces their tendency to endorse conspiratorial claims.
INSIGHT

AI Dialogues Produce Durable Debunking

  • Personalized, evidence-based LLM dialogues durably reduced conspiracy beliefs for many participants.
  • Roughly a quarter of believers abandoned their conspiracy theory after the AI conversations, and the effect persisted at a two-month follow-up.
ANECDOTE

Dialogues Debunk A 9/11 Conspiracy

  • A 9/11 believer described Building 7 and Bush's reaction as proof of a plot and cited videos as evidence.
  • The AI calmly explained the fires, debris damage, and steel weakening, moving the believer's stated confidence from 100% to 40%.