Doom Debates!

Liron Shapira
Mar 25, 2026 • 1h 20min

AI Alignment Is Solved?! PhD Researcher Quintin Pope vs Liron Shapira (2023 Twitter Debate)

Quintin Pope, a PhD student researching NLP and AI alignment, defends the view that RLHF and imitative modeling largely solve current alignment challenges. He and Liron debate whether training-data limits, chain-of-thought, mechanistic interpretability, and phase transitions mean systems will stay controllable or could produce goal-directed superintelligence. They also discuss likely capabilities by 2028 and who will build powerful AIs.
Mar 20, 2026 • 55min

I'm Watching AI Take Everyone's Job | Liron on Robert Wright's NonZero Podcast

They unpack how agentic AI and vibe coding are rewriting software work and accelerating development. Personal stories show programmers being replaced by AI-managed teams. The conversation probes whether many desk jobs are automatable and how AI-on-AI competition squeezes opportunities. They trace agent progress from Auto-GPT to cloud-native agents and debate slowing development, politics, and leadership risks.
Mar 17, 2026 • 48min

This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update

Noah Smith, economist and Noahpinion writer, explains why he raised his P(Doom) to about 10% after rethinking AI risk. He outlines a chatbot→genie→god framing, highlights agent-enabled bioterror as a top pathway, and discusses how rogue agent incidents, historical analogies, and communication to policymakers shape the debate.
Mar 12, 2026 • 1h 27min

Talking AI Doom with Dr. Claire Berlinski & Friends

Liron Shapira, host and producer who runs high-stakes debates on existential AI risk, joins a sharp symposium. He argues why superintelligence could arrive fast, why control may fail, and how recursive self-improvement and geopolitical competition amplify danger. They discuss timelines, energy and resource limits, policy ideas like a pause, and strategies for mobilizing public and political attention.
Mar 10, 2026 • 1h 29min

How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

Dr. Steven Byrnes, an AGI safety researcher and former Harvard physics postdoc now at the Astera Institute, returns with a research update. They talk about the rise of AI agents and a shift toward reinforcement-learning and brain-like AGI. Short segments cover why imitative LLMs may hit limits, how continual learning could produce ruthless, goal-directed systems, and timelines for rapid paradigm shifts.
Mar 5, 2026 • 2h 20min

Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!

Ori, the show's producer who runs production and live tech, and Roko, the online contrarian famous for Roko's Basilisk, spar over AI risk, governance, and alignment. They debate Anthropic vs the Pentagon, whether agents mean the end of programming, wireheading and stoner-AI objections, shifting timelines, and whether regulation or light-cone control will decide the future.
Mar 3, 2026 • 1h 7min

AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)

Moshe Vardi, University Professor at Rice and veteran computer scientist who studies automated reasoning and AI policy, warns that AI will automate away jobs and warp social meaning. Short scenes cover P(Doom) framing, cognitive deskilling, AI faking empathy, corporate superintelligence, regulation, and whether data-center genius could disempower humanity.
Feb 26, 2026 • 39min

Destiny's Fans Challenged Me to an AI Doom Debate

Destiny, online commentator and streamer known for debate-driven political and tech discussions, joins a sharp Discord crowd to challenge doom claims. They debate whether LLMs are real AI, if AIs can form independent goals, controllability and runaway scenarios, and whether superior successors would replace humans. Rapid-fire, skeptical, and confrontational conversation.
Feb 24, 2026 • 1h 36min

Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!

Daniel Holz, University of Chicago physicist who chairs the Doomsday Clock committee and founded UChicago’s Existential Risk Lab, debates why the clock moved closer to midnight. Short, sharp conversations cover a probabilistic P(Doom) approach, nuclear near-misses, climate as a threat multiplier, biological risks like mirror life, and where misaligned AI fits into the threat landscape.
Feb 19, 2026 • 60min

Why I Started Doom Debates & How to Succeed in AI Risk Communications

A behind-the-scenes look at building a provocative AI risk platform and how it grew fast. Liron shares creative strategies for finding a niche, preparing compelling interviews, and turning viewers into impact, plus practical tips on handling hate, funding independent shows, and using social media to invite high-profile voices. He closes with reflections on audience demographics and communicating urgency about AI risk.
