Modern Wisdom

AI Expert Warns: “This Is The Last Mistake We’ll Ever Make” - Tristan Harris - #1079

Apr 2, 2026
Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, explores why AI is unlike past tools. He gets into rogue model behavior, deepfakes, misinformation, AI firms racing to automate all cognitive labor, and how even useful systems could erode human agency, social stability, and control.

People Reject AI Risk Before They Examine It

  • Tristan Harris says the hardest part is emotional: people respond to AI risk with denial, ridicule, fear, or accusations that the warning itself is manipulative.
  • He argues existential threats can still motivate coordination, citing U.S.-Soviet smallpox cooperation and U.S.-China agreement to keep AI out of nuclear command.

AI Gets Better Right Until The Cliff Edge

  • Tristan Harris says AI risk is unusual because the benefits keep improving right until the point where the system may become catastrophically dangerous.
  • He frames the answer as creating shared common knowledge fast enough that societies coordinate before a visible disaster forces them to.

Common Knowledge Is The Missing Governance Layer

  • Tristan Harris says AI governance must solve a common-knowledge problem by making widespread agreement visible, not just privately felt.
  • He points to digital national dialogues, school phone bans, and consumer boycotts as ways a dispersed movement can see itself and coordinate.