
If Anyone Builds It, Everyone Dies
AI Pod by Wes Roth and Dylan Curious | Artificial Intelligence News and Interviews With Experts
Oct 2, 2025
Liron Shapira, founder of Doom Debates and a leading voice on AI risks, shares his chilling '50% by 2050' forecast for existential threats posed by superintelligence. He explores why skepticism about AI's dangers persists despite rapid advancements and discusses the impossibility of controlling emergent self-improvement. Liron warns against the illusion of safety measures, critiques proposals like short pauses, and highlights the potential for AIs to manipulate humans economically and socially, urging listeners to reconsider their optimism about AI's future.
AI Snips
Founder Who Uses AI But Fears Loss Of Control
- Liron runs a Y Combinator startup and uses AI tools in his business daily.
- He says he enjoys using AI but fears losing control once systems outclass humans.
Terminator Off‑Switch Metaphor
- Liron uses a Terminator robot metaphor to show off‑switch problems.
- He asks why builders trust they can regain control once a powerful system misbehaves.
Good Intentions Don’t Solve Systemic Risk
- Even benevolent builders can't guarantee safety, because many other failure modes exist.
- Liron describes the risk as a tangle of vines where removing one vine doesn't solve the whole problem.

