Doom Debates!

Liron Shapira
Nov 11, 2025 • 16min

These Effective Altruists Betrayed Me — Holly Elmore, PauseAI US Executive Director

Holly Elmore, Executive Director of PauseAI US and a passionate activist, discusses the tensions within the AI safety community and her decision to lead protests against frontier AI labs. She shares her experiences of feeling betrayed by former allies and highlights the insular nature of effective altruism, where reputation often takes precedence over genuine safety concerns. Holly emphasizes the importance of public advocacy, explaining how shifting focus can bridge gaps between communities and reduce harmful tribalism in AI discourse.
Nov 7, 2025 • 53min

DEBATE: Is AGI Really Decades Away? | Ex-MIRI Researcher Tsvi Benson-Tilsen vs. Liron Shapira

In a thought-provoking debate, Tsvi Benson-Tilsen, an ex-MIRI researcher and founder of the Berkeley Genomics Project, argues that AGI is much further away than commonly believed. He emphasizes the limitations of current AI, pointing out tasks it struggles with, like generating novel scientific ideas. The conversation also explores the need for clear benchmarks in predicting AI progress and debates whether advances could trigger an AI winter. Tsvi proposes germline engineering as a solution for enhancing human intelligence to tackle future challenges.
Nov 5, 2025 • 1h 7min

Liron Debunks The Most Common “AI Won't Kill Us” Arguments

Liron Shapira, an investor and entrepreneur with deep roots in rationalism, discusses his alarming 50% probability of AI doom. He tackles major sources of AI risk, emphasizing rogue AI and alignment problems. Liron expertly debunks common counterarguments against AI catastrophe, asserting that current models could escalate into uncontrollable superintelligences. He highlights the political implications of AI in the next decade, calling for international regulations as a safeguard against potential disaster.
Oct 31, 2025 • 41min

Why AI Alignment Is 0% Solved — Ex-MIRI Researcher Tsvi Benson-Tilsen

Tsvi Benson-Tilsen, a former MIRI researcher, spent seven years grappling with AI alignment challenges. He reveals a stark truth: humanity has made virtually no progress on this complex issue. Tsvi delves into critical concepts like reflective decision theory and corrigibility, illuminating why controlling superintelligence is so daunting. He discusses the implications of self-modifying AIs and the risks of ontological crises, prompting important debates about the limitations of current AI models and the urgent need for effective alignment strategies.
Oct 29, 2025 • 47min

Eben Pagan (aka David DeAngelo) Interviews Liron — Why 50% Chance AI Kills Everyone by 2050

In this engaging discussion, Eben Pagan, an influential entrepreneur and business trainer known as David DeAngelo, dives into the chilling topic of AI risk. Liron presents a compelling case for a staggering 50% chance of existential doom by 2050. They explore the concept that AI doesn’t need to harbor malice to pose a threat, and discuss why superintelligence might lack an 'off switch.' With the urgency of international coordination emphasized, listeners are left questioning how the future of humanity hinges on our relationship with AI.
Oct 25, 2025 • 49min

Former MIRI Researcher Solving AI Alignment by Engineering Smarter Human Babies

Tsvi Benson-Tilsen, a former MIRI researcher and co-founder of the Berkeley Genomics Project, advocates for engineering smarter humans as a solution to AI alignment challenges. He discusses the alarming P(doom) estimates and the urgent need to slow AGI development. Delving into human germline engineering, Tsvi shares insights on chromosome selection and its potential to enhance intelligence significantly. He also debates societal stigma around AGI research and outlines an ambitious timeline for creating genetically enhanced humans to tackle the impending AI risks.
Oct 23, 2025 • 48min

Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position

Liron Shapira, an AI risk activist and host of Doom Debates, engages with Robert Wright to delve into Eliezer Yudkowsky's unsettling AI doom arguments. They dissect why AI misalignment is a critical concern, drawing on 'intellidynamics', Liron's framing of goal-directed cognition as a subject of study in its own right. Liron warns of the 'First Try' problem in developing superintelligent AI and the potential loss of control. They also explore the grassroots PauseAI movement, contrasting it with the lobbying power of tech companies.
Oct 17, 2025 • 1h 18min

Climate Change Is Stupidly EASY To Stop — Andrew Song, Cofounder of Make Sunsets

Andrew Song, Cofounder of Make Sunsets, is on a mission to combat climate change using stratospheric aerosol injection. He explains how launching weather balloons filled with sulfur dioxide can mimic volcanic cooling, offering an economically feasible solution that costs as little as $2 billion a year. The conversation tackles societal resistance to geoengineering and argues that offsetting emissions can be both cheap and effective. Andrew challenges the narrative that climate solutions are complex, suggesting we're overlooking practical answers in the face of environmental despair.
Oct 10, 2025 • 2h 38min

David Deutschian vs. Eliezer Yudkowskian Debate: Will AGI Cooperate With Humanity? — With Brett Hall

Brett Hall, an educator and podcaster deeply rooted in David Deutsch's optimistic philosophy, engages in a lively debate on AI and humanity's future. They tackle key topics like the orthogonality thesis, distinguishing human creativity from AI outputs, and the dangers of slowing scientific progress. Brett argues that predictions about AI may be misguided, emphasizing problem-solving and the unique human capacity for explanatory knowledge. Their contrasting views on AI risks and the nature of intelligence lead to a captivating clash of worldviews.
Oct 4, 2025 • 23min

Debating People On The Street About AI Doom

The podcast takes to the streets to gauge public opinion on the existential risks posed by AI. People share fears of imminent AI disasters, while others dismiss the threat entirely. Debates arise over whether AI development should be paused and the implications of superintelligent machines. As interviewees discuss everyday uses of AI like chatbots, skepticism grows regarding expert predictions. The urgency to spread awareness about AI's potential dangers is evident, with reactions ranging from curiosity to outright hostility.
