80,000 Hours Podcast

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Jul 31, 2023
Holden Karnofsky, co-founder of GiveWell and Open Philanthropy, focuses on AI safety and risk management. He discusses the potential pitfalls of AI systems that may not exceed human intelligence but could outnumber us dramatically. Karnofsky emphasizes the urgent need for safety standards and the complexities of aligning AI with human values. He also presents a four-part intervention playbook for mitigating AI risks, balancing innovation with ethical concerns. The conversation sheds light on the critical importance of responsible AI governance in shaping a safer future.
INSIGHT

AI-Driven Super-Exponential Growth

  • AI could trigger super-exponential economic growth by creating a feedback loop: AI produces output, and that output is reinvested into building more AI.
  • This differs from human-driven growth, where extra resources don't translate directly into more humans, so the growth rate stays roughly fixed (a toy model of the contrast follows below).
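A minimal Python sketch of the contrast, assuming illustrative rates and constants that are not figures from the episode. The fixed-rate economy compounds at a constant rate; the feedback economy's growth rate rises with output itself (roughly dY/dt = k·Y², which blows up in finite time):

```python
# Toy contrast between two growth regimes. All parameters are
# illustrative assumptions for this sketch, not figures from the episode.

def human_economy(output: float, steps: int, rate: float = 0.01) -> list[float]:
    """Exponential growth: the growth rate is fixed, because extra
    output does not translate directly into more human workers."""
    history = []
    for _ in range(steps):
        output *= 1 + rate
        history.append(output)
    return history

def ai_economy(output: float, steps: int, k: float = 1e-4) -> list[float]:
    """Super-exponential growth: output is reinvested into more AI
    workers, so the growth rate itself (k * output) rises over time."""
    history = []
    for _ in range(steps):
        output *= 1 + k * output  # the growth rate grows with output
        history.append(output)
    return history

if __name__ == "__main__":
    human = human_economy(100.0, steps=90)
    ai = ai_economy(100.0, steps=90)
    for t in (29, 59, 89):
        print(f"step {t + 1:3d}: fixed-rate={human[t]:8.0f}  feedback={ai[t]:8.0f}")
```

Both series start at the same 1% growth rate, but the feedback series accelerates while the fixed-rate series compounds at the same pace throughout.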
INSIGHT

Supernumerousness, Not Just Superintelligence

  • AI's potential threat isn't solely about superintelligence, but also 'supernumerousness': sheer numbers.
  • Copying a trained AI is cheap relative to training it, so putting copies to work on research creates a population explosion regardless of any individual copy's intelligence (see the back-of-the-envelope sketch below).
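A back-of-the-envelope sketch of the copying argument. Every constant below is a hypothetical placeholder chosen for illustration, not an estimate from the episode; the point is only that the copy count scales linearly with the compute budget:

```python
# Sketch of "supernumerousness": once one AI is trained, the same
# compute that trained it can run many copies in parallel. All numbers
# are hypothetical placeholders, not estimates from the episode.

TRAINING_FLOP = 1e25           # assumed total compute used for training
FLOP_PER_COPY_PER_SEC = 1e14   # assumed cost of running one copy in real time
YEAR_SECONDS = 365 * 24 * 3600

# If the training-scale compute is repurposed for inference, how many
# copies can it keep running continuously for a full year?
copies = TRAINING_FLOP / (FLOP_PER_COPY_PER_SEC * YEAR_SECONDS)
print(f"copies runnable for a full year on training-scale compute: {copies:,.0f}")
```

Under these placeholder numbers the answer is in the thousands; more compute, or cheaper inference per copy, raises the count proportionally, which is the population-explosion dynamic the snip describes.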
INSIGHT

Alignment Isn't Enough

  • Holden disagrees with the view that aligning AI with human intent, by itself, guarantees a good outcome.
  • Even aligned AIs raise governance challenges, such as digital rights and misuse by bad actors.