Doom Debates!

The Facade of AI Safety Will Crumble (Video)

Feb 17, 2026
A provocative take on why current AI safety efforts are shallow and may miss extinction-level risks. It questions psychoanalysis-style testing of advanced systems and explores the gap between abstract safety goals and real implementations. The discussion highlights how a truly mature AI could outmaneuver human-centered safety checks, and why that makes future outcomes especially worrying.
INSIGHT

Facade Of Current AI Safety

  • AI companies treat safety as mechanistic psychoanalysis of current models rather than confronting core risks.
  • Liron Shapira warns this approach will fail once truly mature outcome-optimizers arrive.
INSIGHT

Know The Desired Superintelligence First

  • The real challenge is knowing what a superintelligent system that does what we want would look like before we build a dangerous one.
  • Shapira stresses we have little traction on this foundational design problem.
INSIGHT

Level Separation And Why Brains Mislead

  • Level separation explains why psychoanalysis works for humans: quirks of the brain's implementation leak into behavior, but the same trick won't work on mature AIs.
  • Shapira contrasts the weak level separation in brains with the much stronger separation found in serious computing implementations.