
Aaron Stannard: Software 2.0 using AI - Episode 396

AI DevOps Podcast


Running Models Locally: Hardware and Latency Tradeoffs

Aaron explains running models locally on AMD GPUs with llama.cpp and Ollama, using quantized Qwen models, and the tradeoff between context window size and inference speed.
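A local setup like the one described can be sketched with a few CLI commands. This is a hedged illustration, not the exact commands from the episode: the model tag and context size are assumptions.

```shell
# Pull and run a 4-bit quantized Qwen model with Ollama
# (model tag is an assumption for illustration).
ollama pull qwen2.5:7b-instruct-q4_K_M
ollama run qwen2.5:7b-instruct-q4_K_M

# Roughly equivalent llama.cpp invocation: -c sets the context window
# (larger windows cost more KV-cache memory and slow generation),
# and -ngl offloads layers to the GPU.
llama-cli -m qwen2.5-7b-instruct-q4_k_m.gguf -c 8192 -ngl 99 -p "Hello"
```

On AMD hardware, llama.cpp can be built with ROCm or Vulkan backends to enable the GPU offload that `-ngl` requests.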

Play episode from 26:41
