80,000 Hours Podcast

#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Oct 5, 2021
Carl Shulman, a research associate at Oxford's Future of Humanity Institute and an expert in existential risk, dives deep into the practical importance of mitigating threats to humanity. He argues that addressing risks like pandemics and AI is not merely a philosophical exercise but a matter of common sense, given the staggering costs of potential disasters. Shulman critiques the state of public preparedness, emphasizes proactive strategies for food security, and discusses the urgency of innovation in biosecurity, painting a vivid picture of our precarious future.
ADVICE

Leveraged Climate Change Solutions

  • Prioritize leveraged solutions to climate change.
  • Focus on clean energy research and advocacy over less impactful interventions.
INSIGHT

Nuclear Winter Overlooked

  • Early nuclear risk assessments overlooked nuclear winter.
  • Initial concerns focused on direct destruction and fallout, not long-term climate effects.
ANECDOTE

Soviet Bioweapons Program Leaks

  • The Soviet bioweapons program had a high rate of accidental releases and infections.
  • This highlights risks associated with bioweapons research, regardless of national competence.