The Moonshot Podcast Deep Dive: Andrew Ng on Deep Learning and Google Brain

Aug 8, 2025
Andrew Ng, the AI researcher and entrepreneur who founded Google Brain and co-founded Coursera, talks about the early days of neural networks and the scale-driven ideas that sparked Google Brain. He recalls the famous cat video discovery, hardware choices like GPUs and TPUs, the roots of the transformer architecture, and how foundation models enable many new applications.
INSIGHT

Why Scaling Beat Algorithmic Tinkering

  • Scaling up neural networks on commodity compute (GPUs) was controversial at the time, but it produced consistent performance gains as model size increased.
  • Experiments by Andrew's students showed that model performance trended upward with size, motivating the disruptive bet on scaling.
ANECDOTE

Jeff Dean As Systems Co‑Founder

  • Jeff Dean joined Google Brain as a systems partner, pairing Andrew's ML expertise with Jeff's deep knowledge of scalable infrastructure.
  • Their partnership let them carry ideas from parallel data processing (MapReduce) into practical ML training systems at Google scale.
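The MapReduce-style idea above can be illustrated with a toy sketch: each "worker" computes a gradient on its own data shard (the map step), and the gradients are averaged before a single model update (the reduce step). This is a minimal illustration only; the shard layout, model, and hyperparameters here are hypothetical, and Google's actual systems of that era (e.g. DistBelief) were far more elaborate, with parameter servers and asynchronous updates.

```python
import numpy as np

# Toy data: linear regression with known weights (illustrative, not from the episode).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.01, size=1000)

def grad_on_shard(w, X_shard, y_shard):
    """Map step: one worker computes the MSE gradient on its data shard."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

w = np.zeros(3)
shards = np.array_split(np.arange(1000), 4)  # simulate 4 workers

for step in range(200):
    # Reduce step: average the per-shard gradients, then take one SGD step.
    g = np.mean([grad_on_shard(w, X[idx], y[idx]) for idx in shards], axis=0)
    w -= 0.1 * g

print(np.round(w, 2))  # converges toward true_w
```

With equal-sized shards, the average of per-shard gradients equals the full-batch gradient, which is what makes this decomposition parallelizable.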
INSIGHT

GPU Hesitancy Slowed Early Speedups

  • Google Brain delayed full GPU adoption out of concern over the complexity of a heterogeneous datacenter, which slowed early ML speedups.
  • Andrew ran GPU demos with his Stanford group while Google still trained on CPUs; the team later moved to GPUs, and then TPUs, as the infrastructure matured.