80,000 Hours Podcast

#81 - Ben Garfinkel on scrutinising classic AI risk arguments

Jul 9, 2020
Ben Garfinkel, a Research Fellow at Oxford’s Future of Humanity Institute, makes the case for subjecting classic AI risk arguments to more rigorous scrutiny. He argues that while AI safety is crucial for positively shaping the long-term future, many of the field's founding concerns have received surprisingly little thorough examination. The conversation covers the complexities of AI risk, historical parallels, and the challenge of aligning AI systems with human values. Garfinkel advocates critically reassessing existing narratives and increasing investment in AI governance to help ensure beneficial outcomes.
AI Snips
INSIGHT

Unintended Consequences from AI

  • Unintended consequences are a core concern in AI development: as systems become more capable and autonomous, the scale of potential harm grows.
  • Present-day failures, such as self-driving car accidents, hint at the more significant problems that could arise with advanced AI.
INSIGHT

Scrutinizing Classic AI Risk Arguments

  • Bostrom and Yudkowsky's arguments warrant further scrutiny given how influential they have been in shaping AI safety research.
  • Their reliance on abstract concepts and thought experiments, combined with their limited engagement with current AI research, raises questions about how well the arguments hold up.
INSIGHT

Brain in a Box vs. Gradual Progress

  • Many classic AI risk arguments assume a "brain in a box" scenario, in which a roughly human-level AI emerges suddenly as a single system.
  • This contrasts with a trajectory of gradual progress, in which AI systems become more general and more capable over time.