We Cut LLM Latency by 70% in Production

MLOps.community

Deploy, roll out, then continuously optimize

The speaker emphasizes iterative deployment, evaluation, and ongoing optimization as new models and techniques appear.

Snippet begins at 36:24.
