
Reiner Pope – The math behind how LLMs are trained and served

Dwarkesh Podcast


Inference Economics Favor Overtraining

Reiner Pope estimates that optimal deployment may equalize pretraining, RL, and inference costs, implying frontier models can be heavily overtrained.
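The tradeoff behind this estimate can be sketched with toy arithmetic (my own illustrative numbers and cost model, not figures from the episode): when a model will serve far more tokens over its lifetime than it saw in training, shifting compute into overtraining a smaller model can lower total cost and pull the training and inference budgets closer together.

```python
# Toy FLOPs bookkeeping for the train-vs-serve tradeoff.
# Assumptions (illustrative, not from the episode): pretraining costs
# ~6*N*D FLOPs for N params on D tokens, inference costs ~2*N FLOPs per
# generated token, and a half-size model trained on 4x the tokens is
# assumed to reach comparable quality.

def pretrain_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

def inference_flops(n_params, tokens_served):
    return 2 * n_params * tokens_served

tokens_served = 1e13  # hypothetical lifetime tokens served

# Compute-optimal-style baseline: 70B params trained on 1.4T tokens.
base = pretrain_flops(70e9, 1.4e12) + inference_flops(70e9, tokens_served)

# Overtrained alternative: 35B params trained on 5.6T tokens.
over = pretrain_flops(35e9, 5.6e12) + inference_flops(35e9, tokens_served)

print(f"baseline total:    {base:.2e} FLOPs")
print(f"overtrained total: {over:.2e} FLOPs")
```

Under these assumptions the overtrained deployment is cheaper in total, because the one-time extra pretraining spend is repaid by halving the per-token inference cost across every token served.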

Timestamp: 01:19:03