80,000 Hours Podcast

The 80,000 Hours team
52 snips
Mar 24, 2026 • 1h 12min

A Ukraine ceasefire could accidentally set Europe up for a bigger war | RAND's top Russia expert Samuel Charap

Samuel Charap, RAND’s Distinguished Chair in Russia and Eurasia Policy, brings decades of expertise on Russia-Ukraine relations. He argues a ceasefire could reduce killing but create a fragile, miscalculation-prone peace. He outlines accidental escalation scenarios, snapback guarantees, phased settlements, defensive-only aid, and how to embed wider Russia-NATO talks to stabilize Europe.
107 snips
Mar 17, 2026 • 2h 14min

Why automating human labour will break our political system | Rose Hadshar, Forethought

Rose Hadshar, a Forethought researcher studying how advanced AI reshapes political power, discusses how AI could let tiny elites wield outsized economic and strategic influence. She outlines three dynamics that could concentrate power, paints vivid nonviolent takeover scenarios, and highlights risks from lost public leverage and epistemic control. She also sketches interventions to strengthen institutions and civic sense-making.
117 snips
Mar 10, 2026 • 1h 11min

#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)

Nikita Lalwani, a former White House director for technology and national security, and Sam Winter-Levy, a Carnegie fellow focused on AI and nuclear deterrence, debate whether AI could find hidden submarines, track road-mobile missiles, improve missile defenses, or infiltrate nuclear command systems. They warn about arms races, short response times, and the urgent need for AI and nuclear experts to coordinate.
75 snips
Mar 6, 2026 • 31min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

Zershaaneh Qureshi, author and narrator of the article on AI-enhanced decision making, outlines why AI could both raise the stakes of our decisions and improve how we make them. She covers AI tools for truth-finding, forecasting, and coordination. She weighs objections about market forces, safety risks, and misuse, and suggests who might be well suited to help build these tools.
50 snips
Mar 3, 2026 • 3h 26min

#237 – Robert Long on how we're not ready for AI consciousness

Robert Long, a philosopher and founder of Eleos AI, researches AI consciousness and welfare. He explores whether current models might suffer, where consciousness could reside (models, sessions, or forward passes), and how replication, editing, and control affect moral status. The conversation covers measuring AI welfare via behavior, interpretability, and development, plus the policy and research priorities needed now.
150 snips
Feb 24, 2026 • 2h 41min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Max Harms, an alignment researcher at MIRI and sci-fi author, argues we should train AIs to have no values of their own and to defer completely to humans. He explores why slight misalignment and proxy goals can lead to catastrophic outcomes. He outlines CAST, which makes corrigibility the singular training objective, and discusses practical benchmarks, governance questions, and why fiction helps communicate these risks.
250 snips
Feb 17, 2026 • 2h 55min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Ajeya Cotra, AI researcher and strategist who forecasts timelines and models AI risk, explores whether using AI to make AI safe is sensible. She contrasts gradual versus explosive takeoffs. She outlines feedback loops that could speed progress. She discusses what a crunch-time redirection of AI labor would require and why companies might not cooperate.
289 snips
Feb 10, 2026 • 26min

What the hell happened with AGI timelines in 2025?

A deep dive into why predictions about transformative AGI swung wildly in 2025. The conversation covers how new reasoning models briefly shortened expected timelines and why that optimism faded. It explores technical limits like diminishing inference-time gains, RL inefficiencies, and scaling costs, along with non-technical factors and measured forecast updates around a 2032 pivot point.
70 snips
Feb 3, 2026 • 2h 51min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Randy Nesse, an evolutionary psychiatrist and author who helped found the Center for Evolution and Medicine, explains why evolution shaped emotions like anxiety and low mood. He explores emotions as adaptive signals, why anxiety produces many false alarms, how low mood can regulate effort, and how modern life and feedback loops can turn useful responses into debilitating problems.
153 snips
Jan 27, 2026 • 2h 32min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

David Duvenaud, a University of Toronto computer science professor and former lead of Anthropic's alignment evaluations team, discusses the 'gradual disempowerment' thesis. He explores how AI could make people economically and politically irrelevant. The conversation covers cultural shifts as machines shape norms, who controls powerful AIs, and whether liberal democracy can survive when humans are no longer 'needed'.
