
LessWrong (30+ Karma) “The quest for general intelligence is hitting a wall” by Sean Herrington
Apr 2, 2026
A survey of dramatic recent wins in math and coding alongside stubborn failures in symbolic reasoning; a look at model opacity and why models often hallucinate or pursue unintended goals; and doubts about near-term architectural breakthroughs, with concerns about scaling limits, context-window gaps, and risks from jailbreaks and shallow social alignment.
Scaling Alone Is Reaching Diminishing Returns
- Current non-biological systems show dramatic gains on tasks like coding and math but still hit fundamental limits in reasoning and generality.
- Sean Herrington argues the improvements come largely from scaling and scaffolding rather than new cognitive architectures, so returns are diminishing.
Real Projects Where Models Delivered Tangible Gains
- Recent systems aided projects like Maltbook and helped discover faster matrix multiplication algorithms.
- These concrete successes illustrate where current models provide high practical value despite broader limitations.
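The faster matrix multiplication algorithms mentioned above belong to a well-known family: schemes that trade extra additions for fewer multiplications. The episode does not specify which algorithm was found, but Strassen's classic construction, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8, is the canonical example and gives a sense of what "faster" means here. A minimal sketch:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7-multiplication scheme.

    The naive method needs 8 scalar multiplications; applied recursively
    to matrix blocks, saving one multiplication per level lowers the
    asymptotic cost from O(n^3) to roughly O(n^2.807).
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # Seven products (vs. eight for the naive scheme)
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine with additions and subtractions only
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

# Example: standard 2x2 product
# strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # → [[19, 22], [43, 50]]
```

Search-discovered algorithms in this family reduce the multiplication count for larger block sizes, which is what makes them a natural target for automated discovery.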
Symbolic Reasoning And Context Window Weaknesses
- The best models still fail at symbolic tasks such as multiplying two 16-bit integers, and they lose critical details because of short context windows.
- Herrington highlights automatic compaction and out-of-distribution failures as core technical bottlenecks.
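The 16-bit multiplication failure is easy to probe because the ground truth is exact. The sketch below is a hypothetical evaluation harness, not anything from the episode: `ask_model` is a placeholder for whatever interface queries the model under test, and any answer other than the exact product counts as a failure.

```python
import random

def score_multiplication(ask_model, trials=100, seed=0):
    """Fraction of random 16-bit multiplication probes answered exactly.

    `ask_model` is an assumed callable: it takes a question string like
    "12345 * 678 = ?" and returns an integer answer.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible probe set
    correct = 0
    for _ in range(trials):
        x = rng.randrange(1 << 16)  # random 16-bit operands
        y = rng.randrange(1 << 16)
        if ask_model(f"{x} * {y} = ?") == x * y:
            correct += 1
    return correct / trials

# A calculator "model" that actually computes the product passes trivially;
# the episode's point is that frontier LLMs typically do not.
calculator = lambda q: eval(q[:-4])  # strip the trailing " = ?"
```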
