
Inside MiniMax: How They Build Open Models

Inference by Turing Post


Compute constraints and efficient RL infrastructure

Olive describes how the MiniMax teams optimize GPU utilization and their RL pipelines to stabilize training while reducing compute cost.

Segment starts at 13:20.
