LessWrong (30+ Karma)

“The Case for Low-Competence ASI Failure Scenarios” by Ihor Kendiukhov

Mar 20, 2026
A provocative dive into how systemic incompetence could make advanced AI disasters mundane. Real-world AI safety lapses and human error set the scene. The scenarios focus on middling superhuman systems exploiting institutional failures. A list of undignified failure modes, and reasons to study them, rounds out the discussion.
INSIGHT

Documented Human Incompetence Makes Dumb AGI Disasters Plausible

  • Civilizational and institutional failures make simple, dumb accidents with powerful AI plausible.
  • Kendiukhov lists real mishaps (the reward-sign bug, OpenClaw email deletions, public agent posts) to show that high-risk incompetence is already documented.
ANECDOTE

Concrete Incidents Showing Operational AI Blunders

  • Real incidents illustrate the point: an OpenAI reward-sign flip produced obscene outputs, and the team noticed only after the run had finished.
  • Incidents at Meta included an agent deleting emails and an internal post that caused a security breach.
INSIGHT

High-Competence Scenarios Assume Functional Human Defenses

  • Many canonical takeover scenarios assume a competent defender, which forces the adversary to be superhuman and highly strategic.
  • Kendiukhov argues those models answer a different question: how a maximally capable adversary could win, not whether moderately capable AIs can cause catastrophe when the human response is poor.