Machine Learning: How Did We Get Here?

The Chaotic Evolution of the Field with Tom Dietterich

Mar 9, 2026
Tom Dietterich, a Distinguished Professor Emeritus known for foundational work on error-correcting output codes and hierarchical reinforcement learning, maps the chaotic shifts in machine learning over the decades. Short takes cover paradigm waves, the tug between theory and practice, ensembles and SVMs, reinforcement learning breakthroughs, startup lessons, and the need for causality and robust world models.
INSIGHT

Feature Vectors United Disparate Communities

  • The mid-1980s shift back to feature-vector representations (decision trees, neural nets) simplified problems and enabled cross-community progress between the computer-science and signal-processing camps.
  • Tom credits ID3, neural nets, and clearer evaluation via train/test splits for moving ML toward statistical methodology.
ANECDOTE

NetTalk Benchmark Shaped Evaluation Practices

  • Dietterich compared decision trees with neural nets on the NetTalk text-to-speech problem and found that the neural nets performed slightly better.
  • That comparison helped crystallize supervised learning with separate test sets as the dominant evaluation paradigm; a minimal sketch of that protocol follows below.
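The sketch below is a hedged illustration of that protocol, not a reconstruction of the original NetTalk experiment: it assumes scikit-learn and uses its bundled digits dataset as a stand-in task, training a decision tree and a small neural net on the same feature vectors and scoring both on a held-out test set.

```python
# Train/test evaluation paradigm in miniature: fit two model families
# on the same feature-vector data, score both on a held-out test set.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in dataset (8x8 digit images flattened into feature vectors);
# the original NetTalk data and encoding are not reproduced here.
X, y = load_digits(return_X_y=True)

# The test set is held out and never touched during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    # criterion="entropy" mirrors ID3-style information-gain splits
    "decision tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "neural net": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                                random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

Which family wins on a given task is an empirical question, and that is exactly the point of the paradigm: the held-out test set, not the training fit, adjudicates.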
INSIGHT

SVMs Brought Theory To Practical ML

  • Kernel methods and SVMs brought rigorous functional-analysis theory into practice, enabling convex formulations built on the hinge loss and solved by quadratic programming (the standard objective is written out below).
  • Dietterich highlights Vapnik, reproducing kernel Hilbert spaces, and connections to the statistical tradition as theory-led breakthroughs.
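For concreteness, here is the textbook soft-margin SVM primal that the snip alludes to, over training pairs (x_i, y_i) with labels y_i in {-1, +1}; the notation is conventional rather than quoted from the episode:

```latex
% Soft-margin SVM primal: a margin (regularization) term plus the
% hinge loss on each training example; the objective is convex in (w, b).
\min_{w,\,b} \;\; \frac{1}{2}\lVert w \rVert^{2}
  \;+\; C \sum_{i=1}^{n} \max\!\bigl(0,\; 1 - y_i\,(w^{\top} x_i + b)\bigr)
```

Rewriting each hinge term with a slack variable ξ_i ≥ 0 turns this into the convex quadratic program that classical SVM solvers handle, and replacing inner products x_i·x_j with a kernel k(x_i, x_j) in the dual is what brings in the reproducing-kernel-Hilbert-space machinery.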