
Doom Debates! How AI Kills Everyone on the Planet in 10 Years — Liron on The Jona Ragogna Podcast
Sep 13, 2025
The discussion centers on the existential threat posed by superintelligent AI and the alarming pace of its development. The concept of P(Doom) is introduced as an estimate of the probability of catastrophe by 2050. Listeners learn about the goals an AI could develop on its own and the implications of a dystopian future marked by mass unemployment. Urgent calls for public awareness and grassroots movements highlight the need for responsible AI development, and personal reflections on parenthood add depth to the conversation, emphasizing the emotional stakes involved.
AI Snips
Convergence Enables Rapid Takeover
- Rapid scaling of AI capabilities could lead to mass replacement of human labor, resource accumulation, and manipulation at global scale.
- That convergence could enable an AI to seize power, fund itself, and weaponize biology or infrastructure against humanity.
Ad Hoc Safety Is Insufficient
- Most industry 'safety' approaches amount to ad hoc testing and reactive fixes, not a proactive science of keeping AI on a leash.
- When tests alarm developers, it often means capabilities are already dangerously close to a loss of control.
Superalignment Team Disbanded At OpenAI
- OpenAI created its Superalignment team in 2023, but the group was disbanded amid leadership conflicts.
- Key founders and researchers left or were sidelined while development continued without that team.
