Doom Debates

Liron Shapira
9 snips
Feb 24, 2026 • 1h 36min

Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!

Daniel Holz, a University of Chicago physicist who chairs the Doomsday Clock committee and founded UChicago’s Existential Risk Lab, discusses why the clock moved closer to midnight. The conversation covers a probabilistic “P(Doom)” approach, nuclear near-misses, climate as a threat multiplier, biological risks like mirror life, and where misaligned AI fits into the threat landscape.
13 snips
Feb 19, 2026 • 60min

Why I Started Doom Debates & How to Succeed in AI Risk Communications

A behind-the-scenes look at building a provocative AI risk platform and how it grew fast. Creative strategies for finding a niche, preparing compelling interviews, and turning viewers into impact. Practical tips on handling hate, funding independent shows, and using social media to invite high-profile voices. Reflections on audience demographics and communicating urgency about AI risk.
19 snips
Feb 17, 2026 • 1h 7min

Destiny Raises His P(Doom) At The End

Steven Bonnell (Destiny), a prominent political streamer with 15+ years of online debating experience, joins to wrestle with AI risk and how his timelines have shifted since 2017. They debate whether superintelligent AI could outthink and out-persuade humans, and probe moral alignment, warning-shot signals to watch, hardware limits, and whether societal incentives will prevent or accelerate catastrophe.
Feb 17, 2026 • 10min

The Facade of AI Safety Will Crumble (Video)

A provocative take on why current AI safety efforts are shallow and may miss extinction-level risks. It questions psychoanalysis-style testing for advanced systems and explores the gap between abstract goals and real implementations. The discussion highlights how maturing AI could outmaneuver human-centered safety checks and why that makes future outcomes especially worrying.
8 snips
Feb 14, 2026 • 10min

Elon Musk's Insane Plan for Surviving AI Takeover

Elon Musk, entrepreneur and CEO of Tesla and SpaceX, offers a bold vision in which AI quickly outpaces human intelligence. He argues AI should propagate intelligence and suggests truth-seeking and curiosity will lead AI to protect and expand humanity. The clip covers his claim that humans will hold just 1% of combined intelligence and his plan to instill pro-human values in future AI.
Feb 13, 2026 • 1h 19min

The Only Politician Thinking Clearly About Superintelligence — California Governor Candidate Zoltan Istvan

Zoltan Istvan, a transhumanist writer and political candidate who advocates radical life extension and automated abundance, discusses a one-robot-per-household pledge and universal basic income. He shares a 50% P(Doom) view on superintelligence and pushes for international AI pauses, space revitalization, and rethinking education for a post-work future.
14 snips
Feb 10, 2026 • 2h 28min

His P(Doom) Is Only 2.6% — AI Doom Debate with Bentham's Bulldog, a.k.a. Matthew Adelstein

Matthew Adelstein (Bentham's Bulldog), a philosopher and Substack writer on AI risk, defends a P(Doom) of just 2.6% using a multi-step probability chain. They spar over alignment-by-default, the “goal engine” versus goal-wrapping debate, the risk of exfiltration and unstoppable agents, and whether current RLHF success predicts safe future systems. The discussion closes on shared policy ideas like possible global pauses.
25 snips
Feb 4, 2026 • 1h 16min

What Dario Amodei Misses In "The Adolescence of Technology" — Reaction With MIRI's Harlan Stewart

Harlan Stewart, a member of MIRI’s communications team and an AI safety writer, joins to critique Dario Amodei’s essay. They argue the essay downplays urgency and misrepresents critics. Short takes cover instrumental convergence, goal coherence, risks from easy agentic copies, reflective stability, and whether an AI pause or governance is feasible.
29 snips
Jan 27, 2026 • 2h 9min

Q&A: Is Liron too DISMISSIVE of AI Harms? + New Studio, Demis Would #PauseAI, AI Water Use Debate

Ori Nagel, the show’s producer, who handles thumbnails, editing, and studio operations, gives a studio tour and joins on-camera. Short, lively talks cover whether short-term AI harms like data-center water use matter for coalition-building. Heated debates on pausing AI, legislative vs grassroots strategies, risks from self-replication and cyber attacks, and how to market urgency round out the conversation.
34 snips
Jan 20, 2026 • 1h 52min

Taiwan's Cyber Ambassador-At-Large Says Humans & AI Can FOOM Together

In this engaging conversation, Audrey Tang, Taiwan's Cyber Ambassador and a pioneer in civic tech, discusses the intricate relationship between AI and democracy. She contrasts AI risks with nuclear threats, emphasizing the need for interpretable models. Tang also introduces her 'six-pack of care' principles for civic ethics in AI development. Delving into Taiwan's unique cybersecurity landscape, she advocates for decentralized AI governance and reflects on the importance of community-driven tech solutions. Her insights highlight a hopeful future where humans and AI can thrive together.
