80,000 Hours Podcast

#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

Mar 11, 2025
Will MacAskill, a philosopher and AI strategy researcher, discusses potentially explosive advancements in AI that could compress a century's worth of change into just ten years. He explores the implications of AI surpassing human capabilities, including rapidly accelerating scientific progress and the urgent need to adapt societal institutions. MacAskill also highlights critical risks associated with AI, such as power concentration and ethical dilemmas, calling for proactive governance and alignment with human values to navigate this unprecedented transformation.
INSIGHT

Industrial Explosion's Significance

  • People focus on software intelligence explosion, but underestimate the industrial explosion.
  • Autonomous factories producing AIs and chips will create a powerful feedback loop, accelerating growth.
INSIGHT

Risks of Rapid Change

  • Rapid change poses a risk to any being because it disrupts an environment already proven conducive to that being's survival.
  • While technological change has benefits, sudden environmental shifts can lead to disempowerment or destruction.
INSIGHT

Lock-in and AGI

  • Humanity hasn't experienced lock-in because humans die, ideas drift, and no single power controls the future indefinitely.
  • AGI changes this dynamic due to its potential immortality, replicability, and consistent goals.