Linear Digressions

The Bitter Lesson

Mar 15, 2026
They trace how scale and data have repeatedly outpaced hand-crafted engineering, from chess engines to image classifiers to web-scale language systems. They highlight Sutton's argument that raw compute and general learning methods often beat built-in human sophistication, and they suggest when to complement large models with retrieval, system access, or human judgment rather than trying to out-engineer them.
ANECDOTE

Deep Blue Won By Searching More Positions

  • Deep Blue beat Garry Kasparov by relying on brute-force search rather than deep hand-coded chess heuristics.
  • Its edge came from evaluating vastly more positions per second than any human could, winning through scale of computation instead of embedded strategic rules.
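The search idea behind this snip can be sketched as a depth-limited negamax loop: score positions with a simple evaluation function and let raw search volume do the work. This is a toy illustration, not Deep Blue's actual code; `moves`, `apply_move`, and `score` are assumed stand-ins for a real game's move generator, move application, and static evaluation.

```python
def negamax(state, depth, moves, apply_move, score):
    """Return the best achievable score for the side to move.

    A minimal sketch of brute-force game-tree search: strength comes
    from how many positions the loop below can cover per second, not
    from clever hand-coded rules.
    """
    options = moves(state)
    if depth == 0 or not options:
        return score(state)  # static evaluation at the search horizon
    # Try every legal move; the opponent's best reply is negated
    # because the game is zero-sum.
    return max(-negamax(apply_move(state, m), depth - 1,
                        moves, apply_move, score)
               for m in options)
```

Real engines layer alpha-beta pruning, transposition tables, and specialized hardware on top of this loop, but the core recipe is the same: search more, encode less.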
ANECDOTE

Google's Data First Language Experiment

  • Google researchers (led by Peter Norvig) showed simple models trained on web-scale data beat elaborate rule-based language systems.
  • Their paper "The Unreasonable Effectiveness of Data" argued that larger datasets outperform handcrafted linguistic rules.
ANECDOTE

AlexNet's Scale Shock To Computer Vision

  • AlexNet trained raw-pixel deep nets on ImageNet and cut top-5 error from 26.2% to 15.3%.
  • The paper emphasized making scale tractable via GPUs and regularization rather than hand-engineered visual features.