The Checkup with Doctor Mike

Why Stanford Dismantled Her Research Program | Renee DiResta

Mar 31, 2026
Renee DiResta, a researcher who maps online influence and studies misinformation, explains how networks and attention drive viral lies. She recounts learning to translate academic work into short-form communication, and covers bots, the tradeoffs of platform moderation, the importance of timeliness in rapid response, and why institutional pressure led Stanford to halt parts of its research.
ADVICE

Authenticate Accounts Using Multiple Signals

  • Don't rely solely on content cues to spot inauthentic accounts; combine image analysis with account behavior and platform-only signals for authentication.
  • Renee and researchers cross-check AI image detectors, posting history, sudden geographic shifts, and platform data in backchannels.
INSIGHT

Why Auto Moderation Hits Experts And Misses Bots

  • Automated detection struggles because people co-opt AI slips and memes, producing both false positives and false negatives and undermining simple content-based classifiers.
  • Platforms face a precision-vs-recall tradeoff: heavy moderation catches more bots but yields false positives that penalize legitimate experts.
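The precision-vs-recall tradeoff above can be made concrete with a toy sketch (not from the episode; the bot scores and labels below are invented for illustration). Lowering the flagging threshold catches more bots (higher recall) but sweeps up more legitimate accounts (lower precision):

```python
def precision_recall(scores, labels, threshold):
    """Flag every account whose bot score >= threshold; compare to true labels.

    labels: 1 = bot, 0 = legitimate account (e.g. a real expert).
    """
    flagged = [lbl for s, lbl in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                      # bots correctly flagged
    fp = len(flagged) - tp                 # legitimate accounts wrongly flagged
    fn = sum(labels) - tp                  # bots that slipped through
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Hypothetical classifier scores for eight accounts.
scores = [0.95, 0.90, 0.80, 0.75, 0.60, 0.55, 0.40, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

strict  = precision_recall(scores, labels, threshold=0.85)  # flag few accounts
lenient = precision_recall(scores, labels, threshold=0.50)  # flag many accounts
```

Here the strict threshold gives perfect precision but misses half the bots, while the lenient one catches every bot at the cost of flagging two legitimate accounts, which is exactly the moderation dilemma described in the snip.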
ADVICE

Verify Humanness Not Full Identity First

  • Consider verifying humanness rather than full identity across platforms; require stronger identity checks only for high-risk areas like finance or health.
  • Use privacy-protecting cryptographic proofs or limited third-party verification to preserve pseudonymity while proving "not a bot".