

Mentioned in 174 episodes
Superintelligence
Paths, Dangers, Strategies
Book • 2014
In this book, Nick Bostrom delves into the implications of creating superintelligence, which could surpass human intelligence in all domains.
He discusses the potential dangers, such as the loss of human control over such powerful entities, and presents various strategies to ensure that superintelligences align with human values.
The book examines the 'AI control problem' and the need to endow future machine intelligence with positive values to prevent existential risks.
Mentioned by
Mentioned by Naval Ravikant when discussing the singularity and the potential risks of advanced AI.

11,718 snips
#18 Naval Ravikant: The Angel Philosopher
Mentioned by Marc Andreessen in the discussion of AI risks.

5,129 snips
#386 – Marc Andreessen: Future of the Internet, Technology, and AI
Mentioned by Ben Mann as a book that made the challenges of AI alignment feel real to him.

4,546 snips
Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann
Mentioned by Chris Williamson as an earlier source of information on AI, before the LLM boom.

2,741 snips
AI Safety, The China Problem, LLMs & Job Displacement - Dwarkesh Patel - #979
Mentioned by Peter Thiel as the author of "Superintelligence", a book exploring the potential dangers of advanced AI.

2,119 snips
#2190 - Peter Thiel
Mentioned by Chris Williamson when describing his early interest in AI safety and later referencing Nick Bostrom's framing of recursive self-improvement and public discussion of AI risk.

1,695 snips
AI Expert Warns: “This Is The Last Mistake We’ll Ever Make” - Tristan Harris - #1079
Mentioned by Martin Casado as the originator of the anthropomorphic fallacy when thinking about AI.

1,686 snips
Balaji Srinivasan: How AI Will Change Politics, War, and Money
Mentioned by Cal Newport as the book that popularized the term "superintelligence" and underpins most arguments about it.

1,397 snips
Ep. 377: The Case Against Superintelligence
Mentioned by Chris Williamson, referencing a book that explored potential AI development paths.

1,154 snips
The New World Order Is Here - Peter Zeihan - #1028
Mentioned by Cal Newport as a text he used in his doctoral seminar on superintelligence with AI PhD students.

929 snips
Ep. 393: Can Movies Save Us From Our Phones?