Machine Learning Street Talk (MLST)

"Vibe Coding is a Slot Machine" - Jeremy Howard

Mar 3, 2026
Jeremy Howard, deep learning researcher and fast.ai co-founder known for ULMFiT and practical transfer learning. He traces the origins of fine-tuning, explains why AI-assisted coding creates a tempting 'vibe coding' slot-machine feeling, and argues that LLMs interpolate code without true understanding. Short takes on notebooks, maintenance risks, and who actually benefits from AI coding.
INSIGHT

Next Word Prediction Builds Hierarchies Of Abstraction

  • Next-word prediction forces models to form hierarchical abstractions about objects, people, and institutions.
  • Howard argues these implicit hierarchies let language models capture structural knowledge from huge corpora.
INSIGHT

Compositional Creativity Hits A Distribution Ceiling

  • LLMs can perform vast combinatorial creativity but cannot reliably extrapolate outside training distributions.
  • Howard observes that models go from clever to absurdly wrong when operating beyond seen examples.
ANECDOTE

Claude's C Compiler Was Mostly Interpolation

  • Howard discusses Anthropic's Claude writing a C compiler and argues it interpolated existing LLVM/Rust patterns from its training data.
  • He found the generated repo copied unusual LLVM idioms, evidence of interpolation rather than original design.