Learning Bayesian Statistics

#150 Fast Bayesian Deep Learning, with David Rügamer, Emanuel Sommer & Jakob Robnik

Jan 28, 2026
David Rügamer, LMU professor working on uncertainty in deep models; Emanuel Sommer, PhD researcher building practical JAX sampling tools; Jakob Robnik, Berkeley physicist developing Microcanonical Langevin Monte Carlo (MCLMC). They discuss scaling Bayesian neural networks, fast sampling tricks and software, microcanonical dynamics, bottlenecks in high dimensions, hybrid warm-start strategies, and tooling for practical uncertainty quantification.
INSIGHT

Software Is As Important As Samplers

  • JAX plus modular samplers enabled practical scaling of Bayesian neural network sampling on GPUs.
  • Software choices and engineering (memory management, callbacks) matter as much as the algorithms themselves.
ADVICE

Warm-Start Before Sampling

  • Warm-start sampling from an optimized network to reduce initialization error.
  • Use parallel short chains and hybrid optimization-sampling workflows to allocate compute efficiently.
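The hybrid workflow above can be illustrated with a minimal numpy sketch: gradient ascent on the log-posterior stands in for training the network, and a few short random-walk Metropolis chains launched from perturbed copies of the optimum stand in for the sampling phase. This is an assumption-laden toy (a Gaussian target, made-up function names), not the episode's actual software.

```python
import numpy as np

# Toy log-posterior: isotropic Gaussian (a stand-in for a BNN posterior).
# All names here are illustrative, not from the speakers' libraries.
def log_post(theta):
    return -0.5 * np.sum(theta ** 2)

def grad_log_post(theta):
    return -theta

def warm_start(theta0, lr=0.1, steps=200):
    """Phase 1: gradient ascent on the log-posterior, standing in for
    training the network with an optimizer before any sampling."""
    theta = theta0.copy()
    for _ in range(steps):
        theta += lr * grad_log_post(theta)
    return theta

def short_chain(theta_init, n=100, step=0.3, rng=None):
    """Phase 2: one short random-walk Metropolis chain from the warm start."""
    rng = rng or np.random.default_rng(0)
    theta, logp, samples = theta_init.copy(), log_post(theta_init), []
    for _ in range(n):
        prop = theta + step * rng.standard_normal(theta.shape)
        logp_prop = log_post(prop)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
            theta, logp = prop, logp_prop
        samples.append(theta.copy())
    return np.array(samples)

rng = np.random.default_rng(42)
mode = warm_start(rng.standard_normal(10) * 5.0)  # start far from the mode
# Parallel short chains, each jittered slightly around the optimum:
chains = [short_chain(mode + 0.1 * rng.standard_normal(10),
                      rng=np.random.default_rng(s)) for s in range(4)]
```

In real use the chains would run in parallel on a GPU (e.g. via `jax.vmap`), which is what makes the many-short-chains allocation of compute attractive.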
INSIGHT

Fixed-Velocity Dynamics Scale Better

  • Microcanonical Langevin dynamics keep the chain's speed (the norm of the velocity) fixed, improving stability in sharp likelihood regions.
  • Dropping the Metropolis accept/reject step and instead controlling discretization bias directly lets step sizes stay roughly constant as dimension grows.
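The fixed-speed idea can be sketched in a few lines of numpy. This is an illustration of the constraint, not the actual MCLMC integrator: the position drifts along the velocity, the gradient bends the velocity, a small random kick partially refreshes it, and a renormalization pins the speed to 1 in place of a Metropolis correction.

```python
import numpy as np

def grad_log_post(theta):
    return -theta  # standard Gaussian target for the demo

def fixed_speed_step(theta, v, eps, rng):
    """One illustrative fixed-speed update (hypothetical, simplified)."""
    theta = theta + eps * v                      # drift along current direction
    v = v + eps * grad_log_post(theta)           # gradient "kick" bends v
    v = v + 0.2 * rng.standard_normal(v.shape)   # partial stochastic refresh
    return theta, v / np.linalg.norm(v)          # renormalize: speed stays 1

rng = np.random.default_rng(0)
d = 50
theta = rng.standard_normal(d)
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
traj = []
for _ in range(500):
    theta, v = fixed_speed_step(theta, v, eps=0.5, rng=rng)
    traj.append(theta.copy())
traj = np.array(traj)
```

Because each move is bounded by `eps` regardless of how steep the local gradient is, the chain cannot be flung off by a sharp likelihood region the way an unconstrained Langevin step can.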