
The logos, ethos, and pathos of your LLMs

The Stack Overflow Podcast

Inductive biases explain data efficiency gaps

Tom contrasts human priors with transformer inductive biases to explain why humans need far less data than LLMs.

Highlight begins at 22:17.
