
Doom Debates! Q&A: Is Liron too DISMISSIVE of AI Harms? + New Studio, Demis Would #PauseAI, AI Water Use Debate
Jan 27, 2026

Ori Nagel, the producer who handles thumbnails, editing, and studio operations, gives a studio tour and joins on camera. Short, lively segments cover whether near-term AI harms like data-center water use matter for coalition-building. Heated debates on pausing AI, legislative vs. grassroots strategies, risks from self-replication and cyberattacks, and how to market urgency round out the conversation.
Studio Upgrade Funded By Donors
- Liron describes investing donations into a professional studio to attract higher-quality guests.
- He frames the upgrade as strategic outreach: a better studio raises debate quality and attracts higher-profile guests.
Short-Term Harms vs Existential Risk
- Liron distinguishes ordinary technological harms from the unique catastrophic potential of superintelligence.
- He treats many short-term harms as solvable and quantitatively smaller than the superintelligence risk.
An AI Could Warn Us Or Run Away
- An AI telling humanity to pause could plausibly shift behavior, since people may trust a smart AI more than a human expert.
- But Liron warns a reckless AI might run away and doom us before any such authoritative signal appears.



