If Anyone Builds It, Everyone Dies

Book
Eliezer Yudkowsky's work on AI risk argues that sufficiently advanced artificial intelligence could pose existential threats if not aligned with human values.

He examines scenarios where misaligned objectives lead to catastrophic outcomes and emphasizes the urgency of rigorous safety research.

The book collects his views, rationalist reasoning, and arguments for a cautious approach to AI development.

Yudkowsky advocates substantial safeguards, careful coordination, and deep technical understanding to prevent unintended consequences.

His views are influential and controversial within AI and rationalist communities, sparking debate about the probability and preventability of catastrophic AI outcomes.

Mentioned by


Errol Schmidt, while discussing extreme views on AI risk and recent reading material, in "316 - Adapting to AI in the Agency World with Errol Schmidt".
