
Machine Learning: How Did We Get Here? The History of Machine Learning with Tom Mitchell
Feb 23, 2026
A brisk tour of machine learning’s origins from philosophical questions about induction to Turing’s learning metaphor. Highlights include early programs like checkers, the perceptron saga and its rebirth with backprop, the rise of probabilistic graphical models and SVMs, and the deep learning revolution from ImageNet to transformers and self-supervised pretraining.
Backprop Resurrected Neural Networks
- The mid-1980s rebirth of neural nets came with multi-layer training via backpropagation, led by Rumelhart, McClelland, and Hinton.
- Geoff Hinton applied backprop to learn internal representations, e.g., predicting the third element of family-tree triples, an early precursor to language modeling.
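The multi-layer training the snip describes can be illustrated with a minimal sketch (an assumption for illustration, not Hinton's original code): a two-layer network trained by backpropagation on XOR, the classic task a single-layer perceptron cannot solve.

```python
import numpy as np

# Minimal backprop sketch: two-layer sigmoid network on XOR.
# All sizes and learning rates here are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass through both layers
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: chain rule pushes the output error
    # back through the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= 1.0 * h.T @ d_out
    W1 -= 1.0 * X.T @ d_h

print(np.round(out.ravel(), 2))  # outputs after training
```

The key step backprop contributed is the `d_h` line: without it there is no principled way to assign error to hidden units, which is what kept multi-layer networks untrainable before the mid-1980s.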
Neural Nets Learned Real Driving From One Drive
- Dean Pomerleau trained a neural net to steer a vehicle from camera images; after observing a single human-driven stretch, the net could take over within minutes on similar roads.
- He progressed from overnight batch training to real-time learning and completed a ~100-mile drive at speeds up to 55 mph.
Reinforcement Learning Changes The Training Signal
- Reinforcement learning reframes supervision: agents learn from delayed rewards rather than immediate labeled outputs, addressing tasks like games where only the final outcome provides a signal.
- Rich Sutton emphasized RL learns from the natural data stream animals receive and doesn't need prepared supervised labels.
