Doom Debates!

Talking AI Doom with Dr. Claire Berlinski & Friends

Mar 12, 2026
Liron Shapira, host and producer who runs high-stakes debates on existential AI risk, joins a sharp symposium. He argues that superintelligence could arrive fast, that control may fail, and that recursive self-improvement and geopolitical competition amplify the danger. They discuss timelines, energy and resource limits, policy ideas like a pause, and strategies for mobilizing public and political attention.
AI Snips
INSIGHT

AI Development Is A Scaled Training Loop

  • Modern AI is trained by scaling a simple training loop rather than hand-coding behavior, producing opaque black-box systems (a toy sketch of such a loop follows below).
  • That training paradigm explains why researchers struggle to give principled control or interpretability guarantees.
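
To make the first insight concrete, here is a minimal sketch of the kind of training loop being described, written in PyTorch. The model, data, and hyperparameters are illustrative toy placeholders (not anything from the episode); the point is only that behavior emerges from repeating a simple forward/loss/backward/update cycle at scale, with no human specifying the resulting weights.

```python
import torch
import torch.nn as nn

# Toy model standing in for a large network; its weights are the
# opaque "black box" the insight refers to.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Toy data standing in for the vast corpora real systems train on.
inputs = torch.randn(256, 16)
targets = torch.randn(256, 1)

for step in range(1000):              # real systems: vastly more steps
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                   # gradients adjust opaque weights...
    optimizer.step()                  # ...no one hand-codes the behavior
```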
INSIGHT

Obedient Genies Can Still Kill Us

  • Even if superintelligences remain obedient genies, their sheer capability creates catastrophic risk.
  • Liron warns that a superpowered agent executing human wishes can hijack data centers, manipulate millions, and self-replicate rapidly.
INSIGHT

FOOM Can Produce A Rapid Capability Leap

  • Recursive self-improvement (FOOM) can create fast discontinuities in which an AI applies vast amounts of compute, equivalent to enormous subjective thinking time, to become far more capable.
  • Liron frames this as a shift from chaotic multi-agent competition to a near-unassailable optimizer with nanotech-level capabilities.