LessWrong (30+ Karma)

[Linkpost] “What if superintelligence is just weak?” by Simon Lermen

Mar 28, 2026
A critique of the idea that advanced AI must be omnipotent to pose risk. A tiger-cub metaphor shows how modest systems can scale into danger. Discussion of how automation and access, not dramatic breakthroughs, could create critical risks. Challenges the notion that distributing capabilities or monitoring multiple systems prevents catastrophe.
INSIGHT

Better Than Us Is Enough

  • Superintelligence doesn't need to be omnipotent to be catastrophic; it only needs to outperform humans in key domains.
  • Simon Lermen compares a tiger cub growing into a deadly tiger to AI embedded across infrastructure, illustrating how modest capability gains can become lethal at scale.
INSIGHT

Think Forward To Systemic Dependence

  • You can 'think forward' to a future where widespread AI control of jobs, media, the military, and research labs creates systemic dependency and rapid risk escalation.
  • Lermen asks listeners to imagine a billion robots and AI systems managing critical infrastructure, showing how disabling humanity becomes feasible.
INSIGHT

Danger Without Miracle Breakthroughs

  • Catastrophic harm doesn't require miraculous scientific leaps; practical capabilities like automating labs or engineering bioweapons suffice.
  • Lermen notes that Yudkowsky's dramatic claims aren't necessary for risk: AI can enable dangerous outcomes through routine engineering and access.