#273
Mentioned in 98 episodes
If Anyone Builds It, Everyone Dies
Book • 2025
This book examines the risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity.
The authors present a compelling case for careful oversight and regulation of AI development.
They explore various scenarios and potential outcomes, emphasizing the urgency of addressing the challenges posed by rapidly advancing AI capabilities.
The book is written in an accessible style, making complex ideas understandable to a broad audience.
It serves as a call to action, urging policymakers and researchers to prioritize AI safety and avert potential existential threats.
Mentioned in 98 episodes
Mentioned by Cal Newport to introduce Eliezer Yudkowsky as a co-author and AI apocalypse warner.
1,397 snips
Ep. 377: The Case Against Superintelligence

Mentioned by Sam Parr, who says he has not read it and does not plan to.
705 snips
Story Of The Most Important Founder You've Never Heard Of

Mentioned by Nathaniel Whittemore while citing Nate Soares condemning violence and identifying him as co-author of a book on superhuman AI risk.
474 snips
AI Populism Turns Violent

Mentioned by Andy Mills as a book co-authored by Nate Soares and Eliezer Yudkowsky, about the dangers of superintelligence.
400 snips
EP 6: The AI Doomers

Mentioned by Sam Harris as the upcoming book by Eliezer Yudkowsky and Nate Soares on the dangers of superhuman AI.
382 snips
#434 — Can We Survive AI?

Recommended as a fascinating lesson and an extreme case of fear around AI.
243 snips
500. Japan, China, and the Fight for Taiwan (Question Time)

Mentioned by Nathan Labenz as the work of Eliezer Yudkowsky, known as the prophet of AI doom.
183 snips
What AI Means for Students & Teachers: My Keynote from the Michigan Virtual AI Summit

Mentioned by Blaise Agüera y Arcas as a contrast to his own concerns, referring to Eliezer Yudkowsky's views on AI.
143 snips
Google Researcher Shows Life "Emerges From Code" - Blaise Agüera y Arcas

Mentioned by Tristan Harris as a provocative title by Eliezer Yudkowsky regarding the dangers of AI.
124 snips
Feed Drop: "Into the Machine" with Tobias Rose-Stockwell

Mentioned by Alex O'Connor and Nate Soares when introducing the guest and discussing the book's warning about superintelligent AI risks.
96 snips
#153 If Anyone Builds It, EVERYONE Dies - AI Expert on Superintelligence