Serving LLMs in Production: Performance, Cost & Scale // CAST AI Roundtable

MLOps.community

Inference engines and orchestration tools

Igor surveys inference engines such as vLLM, TensorRT, and GGML, and orchestration projects like Bricks, Dynamo, KServe, and LMCache.
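A key idea separating modern inference engines like vLLM from naive serving is continuous (in-flight) batching: finished sequences free their batch slot immediately, so queued requests join mid-batch rather than waiting for the whole batch to drain. The following is a toy scheduling sketch of that idea, not any engine's actual code; the `Request` class, slot count, and token counts are illustrative assumptions.

```python
from dataclasses import dataclass
from collections import deque


@dataclass
class Request:
    rid: int          # request id (illustrative)
    tokens_left: int  # tokens still to generate


def continuous_batching(requests, max_batch=2):
    """Simulate continuous batching: refill free slots from the wait
    queue on every decode step, instead of waiting for the whole
    batch to finish as static batching would."""
    waiting = deque(requests)
    running = []
    steps = 0
    completion_order = []
    while waiting or running:
        # Fill any free slots before the next decode step.
        while waiting and len(running) < max_batch:
            running.append(waiting.popleft())
        # One decode step emits one token for every running sequence.
        steps += 1
        for r in running:
            r.tokens_left -= 1
        completion_order.extend(r.rid for r in running if r.tokens_left == 0)
        running = [r for r in running if r.tokens_left > 0]
    return steps, completion_order


if __name__ == "__main__":
    reqs = [Request(0, 3), Request(1, 1), Request(2, 2)]
    print(continuous_batching(reqs, max_batch=2))  # (3, [1, 0, 2])
```

With two slots, request 1 finishes after one step and request 2 slots in immediately, so all three requests complete in 3 decode steps; a static batch of the same requests would need 5 (the first batch drains in 3 steps, then request 2 runs alone for 2 more).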

Chapter begins at 36:32.
