
Generally Intelligent: There will be a scientific theory of deep learning
Apr 24, 2026
Josh Albrecht, Imbue co-founder and applied ML engineer; Daniel Kunin, Berkeley postdoc studying mathematical principles of intelligence; Jamie Simon, Imbue research fellow and physics-trained deep learning theorist. They explore a proposed “learning mechanics”: a physics-like theory of deep learning. Short takes cover why theory is needed, scaling and limit behaviors, progressive sharpening and the edge of stability, universality of representations, and how theory and mechanistic interpretability can work together.
Edge Of Stability And Macroscopic Laws
- Simple macroscopic laws like neural scaling and the edge of stability emerge reproducibly and invite mechanistic explanations.
- Jamie Simon highlights that the sharpness (the top eigenvalue of the loss Hessian) stabilizes near 2/learning-rate and ties this to the classical stability threshold for gradient descent; see the sketch below.
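A minimal sketch of that observation, not code from the episode: the network size, data, learning rate, and step counts below are illustrative assumptions. It trains a small tanh MLP with full-batch gradient descent and periodically estimates the sharpness via power iteration on Hessian-vector products, printing it next to the 2/learning-rate threshold; in many such runs the sharpness rises toward that value and then hovers near it.

```python
import torch

torch.manual_seed(0)

# Tiny full-batch regression problem (illustrative data)
X = torch.randn(64, 4)
y = torch.randn(64, 1)

model = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
params = list(model.parameters())
lr = 0.05  # threshold of interest: 2 / lr

def loss_fn():
    return torch.nn.functional.mse_loss(model(X), y)

def top_hessian_eigenvalue(n_iters=30):
    """Estimate the largest Hessian eigenvalue (sharpness) by power iteration
    using Hessian-vector products from double backprop."""
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(n_iters):
        loss = loss_fn()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        dot = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(dot, params)          # Hessian-vector product H v
        eig = sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    return eig

for step in range(2001):
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr * g                                # full-batch gradient descent
    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss.item():.4f}  "
              f"sharpness {top_hessian_eigenvalue():7.2f}  2/lr = {2 / lr:.1f}")
```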
Limits Turn Complexity Into Tractable Laws
- Taking limits (infinite width, depth, data; step size → 0) simplifies analysis and often reveals tractable continuous descriptions of training; see the formula after this list.
- Jamie Simon compares this to statistical physics, where large-N limits make emergent laws like PV = nRT derivable.
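To make the step-size limit concrete, here is the standard gradient-flow correspondence (a textbook identity, not a formula quoted from the episode): the gradient descent update is a forward-Euler discretization of an ODE, and shrinking the step size with training time held fixed recovers the continuous flow.

```latex
% Gradient descent as forward-Euler; the step-size -> 0 limit gives gradient flow.
\[
  \theta_{k+1} \;=\; \theta_k - \eta\,\nabla L(\theta_k)
  \qquad\longrightarrow\qquad
  \frac{d\theta}{dt} \;=\; -\nabla L\bigl(\theta(t)\bigr)
  \quad \text{as } \eta \to 0 \text{ with } t = k\eta \text{ held fixed.}
\]
```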
Practical Deep Learning As A Discretized Continuous System
- The Discretization Hypothesis: practical deep learning is a discretization of an ideal continuous system, and scaling refines that discretization.
- Jamie Simon argues that finite width, depth, and step count are mesh choices; more parameters approximate a smoother underlying flow (illustrated numerically below).
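A small numerical illustration of the mesh-refinement picture, assuming a hypothetical 2D quadratic loss and arbitrary step sizes chosen only for demonstration: with total training time held fixed, gradient descent endpoints converge toward the continuous gradient-flow endpoint as the step size shrinks.

```python
import numpy as np

# Toy 2D loss L(x, y) = 0.5*x^2 + 5*y^2 (hypothetical, for illustration only)
def grad(theta):
    x, y = theta
    return np.array([x, 10.0 * y])

theta0 = np.array([1.0, 1.0])
T = 1.0  # fixed "training time": number of steps * step size

def gd_endpoint(step_size):
    """Full-batch gradient descent = forward-Euler discretization of gradient flow."""
    theta = theta0.copy()
    for _ in range(int(round(T / step_size))):
        theta = theta - step_size * grad(theta)
    return theta

# Reference "continuous" trajectory: the same flow on a very fine mesh
reference = gd_endpoint(1e-5)

for eta in [0.1, 0.05, 0.01, 0.001]:
    endpoint = gd_endpoint(eta)
    print(f"step size {eta:6.3f}  endpoint {endpoint}  "
          f"gap to continuous flow {np.linalg.norm(endpoint - reference):.5f}")
```

The printed gap shrinks roughly in proportion to the step size, which is the sense in which finer "meshes" (more steps, here standing in for more parameters or finer discretization generally) better approximate the underlying smooth flow.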

