

Machine Learning: How Did We Get Here?
Tom Mitchell | Stanford Digital Economy Lab | Carnegie Mellon University
Tom Mitchell literally wrote the book on machine learning. In this series of candid conversations with his fellow pioneers, Tom traces the history of the field through the people who built it. Behind the tech are stories of passion, curiosity, and humanity.
Tom Mitchell is the University Founders Professor at Carnegie Mellon University, a Digital Fellow at the Stanford Digital Economy Lab, and the author of Machine Learning, a foundational textbook on the subject. This podcast is produced by the Stanford Digital Economy Lab.
Episodes

Mar 30, 2026 • 21min
Machine Learning Theory with Leslie Valiant
Leslie Valiant, Turing Award–winning computer scientist and Harvard professor who founded PAC learning theory, reflects on how theory met practice in machine learning. He recounts the origins of the PAC model, why efficient learnability can coexist with computational hardness, the split between statistical and computational approaches, and his broader aims to formalize cognition and educability.

Mar 23, 2026 • 24min
Decision Tree Learning with Ross Quinlan
Tom speaks with Ross Quinlan, whose algorithms ID3 and C4.5 helped establish decision trees as one of the most popular approaches in machine learning, and who founded RuleQuest Research, which accelerated the commercial adoption of machine learning. Ross (published as "JR Quinlan") describes a sabbatical visit to Stanford University where a course he took drove him to invent the first successful decision tree learning algorithm, the follow-on research that made decision trees a mainstay of the field, and his experience moving from academia into the commercial world.

Mar 16, 2026 • 34min
Reinforcement Learning with Rich Sutton
Rich Sutton, research scientist, professor, and 2024 Turing Award co-winner celebrated for foundational work in reinforcement learning. He defines learning from trial and error, traces RL’s historical roots, explains temporal-difference learning, and contrasts RL with supervised approaches. He discusses early successes like TD-Gammon and AlphaGo, the limits of deep learning representations, and open problems in continual representation learning.

Mar 9, 2026 • 1h 5min
The Chaotic Evolution of the Field with Tom Dietterich
Tom Dietterich, Distinguished Professor Emeritus known for foundational work in error-correcting output codes and hierarchical reinforcement learning. He maps the chaotic shifts in machine learning over decades. Short takes cover paradigm waves, the tug of theory versus practice, ensembles and SVMs, reinforcement learning breakthroughs, startup lessons, and the need for causality and robust world models.

Mar 2, 2026 • 1h 21min
A University and Corporate Perspective with Yann LeCun
Yann LeCun, NYU professor and Turing Award winner known for convolutional nets and self-supervised learning. He traces neural-net history from early perceptrons and the inspiration of vision neuroscience to commercial wins and the ImageNet revolution. He discusses PyTorch and autodiff, the rise of self-supervision and Transformers, and his world-model and JEPA ideas for learning predictive representations.

Feb 23, 2026 • 46min
Five Decades of Neural Networks with Geoffrey Hinton
Geoffrey Hinton, University Professor Emeritus and Nobel laureate who helped revive deep learning. He tells how neural nets rose with backprop in the 1980s and exploded again in 2012. He discusses GPUs, the 2012 ImageNet win, Transformers and large language models, industry shifts, and concerns about future superintelligent systems.

Feb 23, 2026 • 1h 8min
The History of Machine Learning with Tom Mitchell
A brisk tour of machine learning’s origins from philosophical questions about induction to Turing’s learning metaphor. Highlights include early programs like checkers, the perceptron saga and its rebirth with backprop, the rise of probabilistic graphical models and SVMs, and the deep learning revolution from ImageNet to transformers and self-supervised pretraining.


