Making Sense with Sam Harris

#151 — Will We Destroy the Future?

Mar 18, 2019
In this engaging discussion, Nick Bostrom, a renowned philosopher from Oxford and head of the Future of Humanity Institute, tackles the pressing issue of existential risk. He shares insights on the vulnerable world hypothesis, questioning whether technological advancements could spell doom for humanity. Bostrom highlights the ethical quandaries we face with AI, biotechnology, and nuclear threats. He also explores the influence of moral frameworks on our decisions about the future, pondering if we’re navigating a simulated reality as we confront these challenges.
INSIGHT

Moral Illusions

  • People are subject to "moral illusions": they respond more strongly to an identifiable individual's suffering than to suffering at vast scale.
  • This bias makes it harder to care about existential risk.
INSIGHT

Utilitarian Perspective

  • From a utilitarian perspective, reducing existential risk has extremely high expected value, given the astronomical number of potential future lives at stake.
  • Even a tiny reduction in that risk outweighs other seemingly important goals.
INSIGHT

Asymmetry of Suffering

  • People see mitigating suffering as more important than ensuring future well-being.
  • Suffering feels worse than happiness feels good.