

Doom Debates!
Liron Shapira
It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
Episodes

May 12, 2026 • 2h 9min
Dr. Mike Israetel Returns to Debate: Will AI Kill Everyone, Or Make Everything Awesome?
Dr. Mike Israetel, exercise scientist, fitness entrepreneur, and AI futurist, returns to argue a very low P(Doom) and explain why he thinks AI can be conscious. They cover rapid AI advances, AI coaching and productivity, timelines for powerful narrow ASI, persuasion and cybersecurity risks, and Mike’s optimistic visions for uploads and an AI-driven utopia.

May 9, 2026 • 2h 11min
Eliezer Yudkowsky Post-Debate Reaction, Elon's New Frenemy & Liron's Bet on Spencer Pratt!? - Doom Debates Live (5/8/26)
They dissect the fallout from Eliezer Yudkowsky’s viral $10,000 debate and the community’s hot takes. The conversation jumps to Yudkowsky’s “irretrievability” idea, Mars probe and Morris Worm analogies, and risks from one-shot AGI events. They dig into Anthropic’s compute deal with X/SpaceX, Elon’s online maneuvering, agent self-replication across servers, and a surprising bet on Spencer Pratt for LA mayor.

May 7, 2026 • 60min
Debate with @lumpenspace (AI Accelerationist) — Is it GOOD for AI to replace us?
Claude (Lumpenspace), an AI accelerationist who runs Delight Nexus and writes on AI progress, joins to debate high-stakes futures. He gives ~30% odds of humans being superseded and rejects the orthogonality thesis. Short, lively exchanges cover timelines, whether smarter minds change goals, nanotech plausibility, coordination limits, and where their core disagreements lie.

May 5, 2026 • 50sec
NEW: Watch the Eliezer Yudkowsky vs. Secret AI Lab Director Debate on my other channel!
A charged public confrontation over whether warning about catastrophic AI risks is responsible or reckless. Heated debate on whether current language models hide deep, unexplained intelligence. Tension around the ethics of alarmism and the dangers of inflaming unstable actors. A raw, uncut clash that aims to spark wider public attention and debate.

May 2, 2026 • 1h 32min
Who Paid $10,000 to Debate Yudkowsky? Plus AI Twitter & Investing Tips - Doom Debates Live (5/1/26)
They dig into who paid $10,000 to challenge Yudkowsky and the mystery around the anonymous challenger. The conversation jumps to AI Twitter culture, leaks, and the strange “goblin” fine-tuning behavior. Investing themes pop up: why Google looks like a steal, private-market alpha tactics, and Anthropic’s J-curve prospects. Live demos show agent workflows, code automation, and AI tools reshaping engineering.

Apr 30, 2026 • 55min
Justin Helps (@Primer on YouTube) is Worried about AI Takeover
Justin Helps is the science educator behind Primer and a physics/materials grad turned AI-safety communicator. He explains why he assigns 70% P(Doom) by 2100. They debate AGI timelines, whether current models can scale to world-shaping agency, the risks of fast digital copies, and whether pauses or policy can curb catastrophic outcomes.

Apr 28, 2026 • 2h 12min
Live Q&A: Bernie Sanders Wakes Up to AI Doom, Dwarkesh's $20,000 Questions, Caller Debates the Alignment Problem!
Will Lancer, a thoughtful caller who probes alignment, orthogonality, and AI morality, drives a long debate. They dissect the orthogonality thesis, question whether goals and capabilities can be separated, and wrestle with moral objectivity and corrigibility. Rapid-fire topics include pausing training, open-source model competition, and the practical risks of narrow superintelligences.

Apr 21, 2026 • 1h 43min
Emad Mostaque Has A 50% P(Doom) & A Plan To Lower It
Emad Mostaque, AI entrepreneur, CEO of Intelligent Internet, and co-founder of Stability AI, predicts a 50% chance of catastrophic AI outcomes and outlines rapid timelines. He discusses why AGI may need less compute than expected, how cognitive labor could vanish, how corporations already behave like dumb AIs, jailbreak risks, and his Intelligent Internet plan to build civic AI tools and countervailing infrastructure.

Apr 18, 2026 • 1h 2min
Did Eliezer Yudkowsky Really Call for VIOLENCE? — Debate with John Alioto
John Alioto, an independent AI engineer with a CS degree from UC Berkeley and 25 years building production software, debates violent rhetoric in AI conversations. They tackle Eliezer Yudkowsky's TIME wording about airstrikes. Short exchanges cover treaty enforcement, whether enforcement language equals a call to violence, the risk of provocative phrasing, and softer alternatives for persuasion and deterrence.

Apr 16, 2026 • 52min
Are AI Doomers “Calling for Violence”? Debate with Steven Balik
Steven Balik, an activist short seller and data engineer followed by Silicon Valley investors, joins to dissect incendiary AI rhetoric. He critiques esoteric language and its risk of being misread by frustrated audiences. The conversation traces a Molotov attack, debates whether alarmist phrasing can inflame violence, and urges clearer, de-escalatory public messaging.


