
Machine Learning: How Did We Get Here? Machine Learning Theory with Leslie Valiant
Mar 30, 2026
Leslie Valiant, Turing Award–winning computer scientist and Harvard professor who founded PAC learning theory, reflects on how theory met practice in machine learning. He recounts the origins of the PAC model, why learnability can defy hardness, the split between statistical and computational approaches, and his broader aims to formalize cognition and educability.
AI Snips
PAC Learning Reframes What It Means To Learn
- Leslie Valiant framed learning as a probabilistic, computational problem, leading to the PAC (probably approximately correct) model he introduced in 1983–84.
- PAC demands good performance only on the distribution the training data came from, so rare, unseen distinctions can be safely ignored, and complex concepts become learnable.
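The "probably approximately correct" guarantee can be made concrete with the standard sample-complexity bound for a consistent learner over a finite hypothesis class, m ≥ (1/ε)(ln|H| + ln(1/δ)). This is a textbook PAC result, not a formula quoted in the episode; the sketch below, including the rough clause count for 3-CNF, is a hypothetical illustration:

```python
import math

def pac_sample_size(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Classic PAC bound for a finite hypothesis class H: a learner that
    outputs any hypothesis consistent with the sample is, with probability
    at least 1 - delta, within error epsilon of the target, once it has seen
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) i.i.d. examples drawn from the
    same fixed distribution used at test time."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Rough example: 3-CNF over n = 10 variables. Over-count the possible
# clauses of at most 3 literals (ignoring tautological pairs), so
# |H| <= 2^(number of clauses) -- a loose but valid upper bound.
n = 10
num_clauses = sum(math.comb(2 * n, k) for k in (1, 2, 3))
m = pac_sample_size(2 ** num_clauses, epsilon=0.1, delta=0.05)
```

Because the bound depends only logarithmically on |H|, even a doubly-exponential-looking hypothesis count yields a polynomial number of samples in n.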
Learning Can Beat Worst-Case Complexity
- Valiant showed that concept classes tied to worst-case-hard problems can still be PAC-learnable, because the learner only has to succeed on the input distribution, not on every possible input.
- Example: 3-CNF formulas (conjunctions of clauses with at most three literals) are PAC-learnable, even though related worst-case questions about such formulas, like deciding whether two are equivalent, are classically intractable.
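The elimination algorithm behind that result is simple enough to sketch. This is a hypothetical toy implementation (not code from the episode): start from the conjunction of every possible clause of at most three literals, and delete any clause that a positive example falsifies. What survives is the most specific 3-CNF consistent with the positives:

```python
from itertools import combinations

def learn_3cnf(n_vars, positive_examples):
    """Valiant-style elimination learner for 3-CNF.
    Literals are ints: +i means x_i, -i means NOT x_i (variables 1-indexed).
    An example is a dict {variable: bool}."""
    literals = [lit for v in range(1, n_vars + 1) for lit in (v, -v)]
    clauses = set()
    for k in (1, 2, 3):
        for combo in combinations(literals, k):
            # Skip clauses mentioning a variable twice (e.g. x1 OR NOT x1).
            if len({abs(l) for l in combo}) == k:
                clauses.add(frozenset(combo))

    def satisfies(example, clause):
        # A clause (disjunction) is satisfied if any literal holds.
        return any(example[abs(l)] == (l > 0) for l in clause)

    # Every positive example must satisfy every kept clause.
    for ex in positive_examples:
        clauses = {c for c in clauses if satisfies(ex, c)}
    return clauses

def predict(clauses, example):
    """Label an example True iff it satisfies every surviving clause."""
    return all(any(example[abs(l)] == (l > 0) for l in c) for c in clauses)
```

Each step is polynomial in the (polynomially many) candidate clauses, and the PAC guarantee only requires the output to be accurate on the same distribution the positives were drawn from, which is where the hardness escape comes from.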
Same-Distribution Assumption Is The Liberating Trick
- The key trick in PAC is assuming training and test examples come from the same distribution, so learners need not distinguish hypotheses on rare inputs they will almost never see.
- That assumption is what makes many otherwise intractable learning tasks feasible in practice.

