80,000 Hours Podcast

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Jan 9, 2023
Join Ben Garfinkel, a Research Fellow at Oxford's Future of Humanity Institute, as he dives into the complex world of artificial intelligence risk. Garfinkel argues that classic AI risk narratives may be overstated, calling for more rigorous scrutiny. He challenges perceptions around the governance of AI, emphasizing the importance of ethical frameworks and the potential consequences of misaligned AI objectives. With insights on historical parallels and funding disparities in AI safety, this conversation is a crucial exploration of our AI-driven future.
ANECDOTE

Classic AI Risk Argument

  • The classic AI risk argument involves a rapid jump from human-level AI to superintelligence.
  • Superintelligent AI, with potentially misaligned goals (like maximizing paperclips), could lead to disastrous outcomes.
ADVICE

Challenge the Brain-in-a-Box Scenario

  • Challenge the "brain-in-a-box" scenario, which assumes a sudden jump to human-level AI without significant precedents.
  • Consider alternative scenarios, such as a smooth expansion in which AI capabilities and generality increase gradually.
INSIGHT

Alternative AI Development Scenarios

  • In a “smooth expansion” scenario, AI capabilities and generality increase gradually rather than through a sudden jump.
  • Eric Drexler argues for another alternative: collections of specialized, narrow AI systems could come to dominate, reducing the importance of any single general AI.