Machine Learning: How Did We Get Here?

Five Decades of Neural Networks with Geoffrey Hinton

Feb 23, 2026
Geoffrey Hinton, University Professor Emeritus and Nobel laureate, helped revive deep learning. He recounts how neural nets rose with backprop in the 1980s and exploded again in 2012, and discusses GPUs, the 2012 ImageNet win, Transformers and large language models, industry shifts, and concerns about future superintelligent systems.
ANECDOTE

Why A Better Tech Proposal Was Rejected For Optics

  • Hinton described a British Telecom project in which a 20-foot mechanical wall displayed network loads; his Sun-workstation alternative was rejected for political reasons.
  • The wall persisted because it impressed visiting politicians, illustrating how non-technical incentives can block better engineering.
INSIGHT

Backprop Proved Representation Learning Works

  • Backpropagation demonstrated neural nets can learn representations and probabilistic rules instead of hand-coded logic.
  • Hinton's 'tiny language model' trained on family trees learned vector features (e.g., generation, nationality), a result that Nature's reviewers found convincing.
INSIGHT

Compute And Data Were The Missing Ingredients

  • Progress stalled not because the theory was wrong but because compute and data were insufficient for large problems.
  • Hinton says the main missing ingredients were massive datasets and faster hardware, which finally arrived with GPUs and web-scale data.