

80,000 Hours Podcast
The 80,000 Hours team
The most important conversations about artificial intelligence you won’t hear anywhere else.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.
Episodes

Mar 31, 2026 • 3h 8min
Could a biologist armed with AI kill a billion people? | Dr Richard Moulange
Dr Richard Moulange, an AI biosecurity specialist with a PhD in biostatistical machine learning, explains how AI is breaking down barriers in biology. He discusses AI-designed viral genomes that outperform their natural counterparts, AI beating expert virologists at lab troubleshooting, which actors gain the most from AI uplift, and three broad defensive strategies for reducing catastrophic biological risks.

Mar 24, 2026 • 1h 12min
#240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war
Samuel Charap, RAND’s Distinguished Chair in Russia and Eurasia Policy, brings decades of expertise on Russia-Ukraine relations. He argues a ceasefire could reduce the killing but create a fragile, miscalculation-prone peace. He outlines accidental escalation scenarios, snapback guarantees, phased settlements, defensive-only aid, and how to embed wider Russia-NATO talks to stabilize Europe.

Mar 17, 2026 • 2h 14min
#239 – Rose Hadshar on why automating human labour will break our political system
Rose Hadshar, a Forethought researcher studying how advanced AI reshapes political power, discusses how AI could let tiny elites wield outsized economic and strategic influence. She outlines three dynamics that could concentrate power, paints vivid nonviolent takeover scenarios, and highlights risks from lost public leverage and epistemic control. She also sketches interventions to strengthen institutions and civic sense-making.

Mar 10, 2026 • 1h 11min
#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)
Nikita Lalwani, a former White House technology and national security director, and Sam Winter-Levy, a Carnegie fellow working on AI and nuclear deterrence, debate whether AI could find hidden submarines, track road-mobile missiles, improve missile defenses, or infiltrate nuclear command systems. They warn about arms races, shortened response times, and the urgent need for AI and nuclear experts to coordinate.

Mar 6, 2026 • 31min
Using AI to enhance societal decision making (article by Zershaaneh Qureshi)
Zershaaneh Qureshi, author and narrator of the article on AI-enhanced decision making, outlines why AI could both raise the stakes of our decisions and improve how we make them. She covers AI tools for truth-finding, forecasting, and coordination; weighs objections about market forces, safety risks, and misuse; and suggests who might be well suited to help build these tools.

Mar 3, 2026 • 3h 26min
#237 – Robert Long on how we're not ready for AI consciousness
Robert Long is a philosopher and founder of Eleos AI, which researches AI consciousness and welfare. He explores whether current models might suffer; where consciousness could reside (in models, sessions, or forward passes); and how replication, editing, and control affect moral status. The conversation covers measuring AI welfare via behavior, interpretability, and development, plus the policy and research priorities needed now.

Feb 24, 2026 • 2h 41min
#236 – Max Harms on why teaching AI right from wrong could get everyone killed
Max Harms, an alignment researcher at MIRI and sci‑fi author, argues we should train AIs to have no values and to defer completely to humans. He explores why slight misalignment and proxy goals can lead to catastrophic outcomes. He outlines CAST: making corrigibility the singular objective, and discusses practical benchmarks, governance questions, and why fiction helps communicate these risks.

Feb 17, 2026 • 2h 55min
#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’
Ajeya Cotra, an AI researcher and strategist who forecasts timelines and models AI risk, explores whether using AI to make AI safe is sensible. She contrasts gradual and explosive takeoffs, outlines feedback loops that could speed progress, and discusses what a crunch-time redirection of AI labor would require and why companies might not cooperate.

Feb 10, 2026 • 26min
What the hell happened with AGI timelines in 2025?
A deep dive into why predictions about transformative AGI swung wildly in 2025. The conversation covers how new reasoning models briefly shortened timelines and why that optimism faded. It explores technical factors, including limits to inference-time gains, RL inefficiencies, and scaling costs, as well as non-technical factors and measured forecast updates around a 2032 pivot point.

Feb 3, 2026 • 2h 51min
#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety
Randy Nesse, an evolutionary psychiatrist and author who helped found the Center for Evolution and Medicine, explains why evolution shaped emotions like anxiety and low mood. He explores emotions as adaptive signals, why anxiety produces many false alarms, how low mood can regulate effort, and how modern life and feedback loops can turn useful responses into debilitating problems.


