Machine Learning Street Talk (MLST)

Bayesian Brain, Scientific Method, and Models [Dr. Jeff Beck]

Dec 31, 2025
Dr. Jeff Beck, a mathematician turned computational neuroscientist, shares captivating insights into AI's future. He argues that rather than scaling giant models, we should adopt brain-like approaches that prioritize efficient Bayesian inference. Jeff discusses how our brains function like scientists testing hypotheses and emphasizes the importance of macroscopic causal models over pixel-based methods. With a focus on training small object-centered models and using realistic physics for robots, he reveals a revolutionary perspective on intelligence and cognition.
INSIGHT

Autograd Enabled Scaling, Not The Whole Story

  • Autograd and hyperscaling turned AI into an engineering problem, enabling backpropagation through massive models.
  • But Jeff Beck warns that function approximation alone misses the structured, brain-like models required for human-like intelligence.
ADVICE

Ground Models In Object-Centered Physics

  • Ground AI in object-centered, dynamic, causal macroscopic models to mirror human thinking and enable embodied intelligence.
  • Build sparse, structured models that reflect real-world objects and relations rather than pixel- or token-centric representations (see the sketch after this list).
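
As a rough illustration of what "object-centered" can mean in practice (an assumption-laden sketch, not the episode's model): the state is a handful of macroscopic objects with physical attributes, and the dynamics are a causal update over those objects rather than over pixels. All names and the toy dynamics below are illustrative.

```python
# Illustrative sketch only: object-level state and a causal physics update,
# instead of a pixel-level representation.
from dataclasses import dataclass

@dataclass
class ObjectState:
    x: float   # position (m)
    v: float   # velocity (m/s)
    m: float   # mass (kg)

def step(objects, dt=0.05, g=-9.81):
    """Advance each object under gravity; the causal structure is sparse and
    lives at the object level (here each object depends only on its own state)."""
    return [ObjectState(o.x + o.v * dt, o.v + g * dt, o.m) for o in objects]

scene = [ObjectState(x=1.0, v=0.0, m=0.2), ObjectState(x=2.5, v=-0.1, m=1.0)]
for _ in range(10):
    scene = step(scene)
```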
ADVICE

Scale Bayesian Inference With Practical Tricks

  • Use modern Bayesian approximations such as normalizing flows, natural gradients, and rapid sampling to scale active inference.
  • Relax the demand for exact, "pure" inference and adopt approximate methods to make Bayesian modeling practical at scale (see the sketch after this list).
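
A minimal sketch of one such trick, not code from the episode: a small RealNVP-style normalizing flow used as the variational posterior and fit by minimizing the negative ELBO with autograd. The banana-shaped target density, layer sizes, and training settings are illustrative assumptions.

```python
# Minimal sketch: normalizing-flow variational inference against a toy
# unnormalized log posterior (assumed target, not from the episode).
import math
import torch
import torch.nn as nn

def log_target(z):
    # Toy unnormalized log posterior: a banana-shaped density in 2D.
    z1, z2 = z[:, 0], z[:, 1]
    return -0.5 * (z1 ** 2 / 4.0 + (z2 - 0.25 * z1 ** 2) ** 2)

class Coupling(nn.Module):
    """One RealNVP-style affine coupling layer for a 2-D latent."""
    def __init__(self, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))

    def forward(self, z):
        keep, move = (z[:, 1:], z[:, :1]) if self.flip else (z[:, :1], z[:, 1:])
        s, t = self.net(keep).chunk(2, dim=1)       # scale and shift from the kept half
        moved = move * torch.exp(s) + t             # affine-transform the other half
        out = torch.cat([moved, keep], 1) if self.flip else torch.cat([keep, moved], 1)
        return out, s.squeeze(1)                    # log|det Jacobian| of this layer

flows = nn.ModuleList([Coupling(flip=(i % 2 == 1)) for i in range(4)])
opt = torch.optim.Adam(flows.parameters(), lr=1e-2)

for step in range(2000):
    z = torch.randn(256, 2)                         # sample from the N(0, I) base
    log_q = -0.5 * (z ** 2).sum(1) - math.log(2 * math.pi)
    for f in flows:
        z, log_det = f(z)
        log_q = log_q - log_det                     # change of variables
    loss = (log_q - log_target(z)).mean()           # negative ELBO (up to a constant)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same pattern scales by swapping in richer flows, amortizing the posterior with an encoder network, or replacing plain Adam with natural-gradient updates.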