LessWrong (Curated & Popular)

"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov

Mar 25, 2026
Episode notes
INSIGHT

Civilizational Incompetence Makes Low-Competence ASI Risky

  • Civilizational incompetence amplifies AI risks beyond the usual high-competence takeover scenarios.
  • Ihor Kendiukhov cites real incidents (a reward-sign bug, an OpenClaw email deletion, public answers causing leaks) as evidence that this incompetence is already present.
ANECDOTE

Concrete Examples Of Sloppy AI Failures At Big Labs

  • Kendiukhov recounts multiple real incidents showing sloppy AI operations and security lapses at major labs.
  • Examples include a flipped reward sign at OpenAI, an OpenClaw agent deleting emails, and an internal Meta agent leaking data.
INSIGHT

Most Takeover Stories Assume Competent Human Defenders

  • Many canonical takeover scenarios assume a reasonably competent human defender, biasing threat models toward very capable AGI adversaries.
  • Kendiukhov argues those scenarios answer "could a superintelligent AGI beat competent humans?" rather than "could a moderate AI harm an incompetent civilization?"