Reiner Pope of MatX on accelerating AI with transformer-optimized chips

Cheeky Pint

Long context, memory bandwidth, and compaction

Reiner discusses context-size bottlenecks, compaction strategies, and application-level interventions like OpenClaw.

Segment begins at 52:19.
