
Reiner Pope – The math behind how LLMs are trained and served

Dwarkesh Podcast


Inference Scale Needs Steady Traffic

Reiner Pope explains train-departure-style batching, queueing delay, and why serving efficient batches requires substantial token throughput.
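A minimal sketch of the train-departure batching idea the snippet describes: a batch "departs" either when it is full or when its oldest request has waited out a deadline, whichever comes first. The function and parameter names here are illustrative, not from the episode; it shows why low traffic forces a choice between under-full batches (poor hardware efficiency) and long queueing delays.

```python
def batch_departure(arrival_times_ms, batch_size, max_wait_ms):
    """Group request arrivals (in ms) into batches, train-departure style.

    A batch departs when it reaches batch_size, or when its first request
    has waited max_wait_ms. Returns (batch sizes, per-request delays in ms).
    Illustrative sketch only -- parameters are assumptions, not from the episode.
    """
    batches, delays, cur = [], [], []
    for t in arrival_times_ms:
        if cur and t - cur[0] > max_wait_ms:
            depart = cur[0] + max_wait_ms        # deadline hit: depart under-full
            delays += [depart - a for a in cur]
            batches.append(len(cur))
            cur = []
        cur.append(t)
        if len(cur) == batch_size:               # batch full: depart immediately
            delays += [t - a for a in cur]
            batches.append(len(cur))
            cur = []
    if cur:                                      # flush the trailing batch at its deadline
        depart = cur[0] + max_wait_ms
        delays += [depart - a for a in cur]
        batches.append(len(cur))
    return batches, delays

# High traffic: one request per ms fills batches of 8 quickly -> full batches, low delay.
sizes, delays = batch_departure(list(range(0, 40)), batch_size=8, max_wait_ms=50)
# Low traffic: one request per 20 ms -> the deadline fires first, batches run under-full.
sizes_low, delays_low = batch_departure(list(range(0, 200, 20)), batch_size=8, max_wait_ms=50)
```

With steady high traffic the batch fills before the deadline, so batches are full and average delay is small; at low traffic the same deadline produces mostly 3-request batches with far higher average delay, which is the throughput requirement the snippet points at.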

Snippet begins at 21:59.
