
The Information Bottleneck: Reinventing AI From Scratch with Yaroslav Bulatov
Mar 30, 2026

Yaroslav Bulatov, an AI researcher and early member of OpenAI and Google Brain, now pushes to rebuild learning algorithms for far greater energy efficiency. He discusses why current deep learning is wasteful, the idea of replaying AI history with hindsight, hierarchical message-passing alternatives, the Muon optimizer breakthrough, and why small, open teams and non-experts can drive rapid innovation.
Efficiency Over Chasing New Capabilities
- Yaroslav prioritizes energy efficiency and refactoring the AI stack over chasing marginal capability gains.
- He argues that current methods were designed for the wrong hardware, and that because memory access (not arithmetic) dominates energy cost, efficiency could be 10–100x better.
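The claim that memory access dominates energy cost can be checked with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not figures from the episode; they are in the ballpark of widely cited per-operation energy estimates for a 45nm process, and actual values vary by hardware generation:

```python
# Rough per-operation energy costs in picojoules (illustrative assumptions,
# roughly in line with commonly cited 45nm estimates; not from the episode).
PJ_FP32_MULT = 3.7    # one 32-bit floating-point multiply
PJ_DRAM_READ = 640.0  # one 32-bit word read from off-chip DRAM

# A single DRAM access costs on the order of 100x more energy than the
# arithmetic it feeds -- which is why data movement, not FLOPs, sets the bill.
ratio = PJ_DRAM_READ / PJ_FP32_MULT
print(f"One DRAM read ~= {ratio:.0f}x the energy of one FP32 multiply")
```

Under these assumptions, an algorithm that halves memory traffic saves far more energy than one that halves arithmetic, which is the shape of the 10–100x headroom argument.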
Replay AI History With Agents To Iterate Faster
- Replay the historical research process with hindsight and use AI agents to accelerate it instead of starting from today's methods.
- Yaroslav runs an open Sutra group that uses agents weekly to recreate past discoveries, with a focus on energy efficiency.
Legacy Algorithms Create GPU Workarounds
- Legacy design choices (e.g., sequential backprop) force hacks to adapt to modern GPUs, creating inefficiencies.
- He compares this to the recurrent laryngeal nerve and Python's GIL to illustrate path-dependent, suboptimal designs.
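To make the "sequential backprop" point concrete, here is a minimal sketch (my own illustration, not anything proposed in the episode) of why reverse-mode differentiation is memory-hungry: every layer's activation must be kept alive until the backward pass reaches it, so storage grows linearly with depth, and that stored state is exactly the memory traffic GPUs end up working around:

```python
# Toy reverse-mode autodiff. Each layer is (forward_fn, backward_fn), where
# backward_fn(activation_in, grad_out) -> grad_in.

def forward(x, layers):
    acts = [x]
    for f, _ in layers:
        acts.append(f(acts[-1]))
    return acts  # O(depth) activations retained for the backward pass

def backward(grad_out, acts, layers):
    g = grad_out
    # Walk the layers in reverse; each step consumes its stored activation.
    for (_, df), a in zip(reversed(layers), reversed(acts[:-1])):
        g = df(a, g)
    return g

# Two "square" layers: y = (x^2)^2 = x^4, so dy/dx = 4x^3.
layers = [(lambda x: x * x, lambda a, g: 2 * a * g)] * 2
acts = forward(3.0, layers)
print(backward(1.0, acts, layers))  # 4 * 3**3 = 108.0
```

The sequential chain (layer k's gradient needs layer k's saved input) is the legacy design choice in question; hierarchical or message-passing alternatives aim to break that strict ordering.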
