
Can Pre-GPT AI Accelerators Handle Long Context Workloads?

Semi Doped


Tokens, attention, and the KV cache (starts at 05:44)

Vikram explains tokens, transformer attention, and why the KV cache grows linearly with context length.
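A minimal sketch of the arithmetic behind that linear growth. The model dimensions below are illustrative (roughly a Llama-2-7B-style configuration) and are not taken from the episode; the point is only that each new token appends one key and one value vector per layer, so cache size scales directly with context length.

```python
def kv_cache_bytes(seq_len: int,
                   n_layers: int = 32,
                   n_kv_heads: int = 32,
                   head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:
    """Bytes needed to cache keys and values for one sequence.

    Each layer stores a K tensor and a V tensor of shape
    (seq_len, n_kv_heads, head_dim), so the total is
    2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_elem.
    """
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_elem

# Illustrative configuration: memory grows linearly with context length.
for ctx in (2_048, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:.1f} GiB")
# ->   2048 tokens -> 1.0 GiB
# ->  32768 tokens -> 16.0 GiB
# -> 131072 tokens -> 64.0 GiB
```

At long context lengths the cache alone can dwarf the accelerator's on-chip or on-package memory, which is the pressure point for pre-GPT hardware the episode title refers to.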

