
Powering the AI Inference Wave with EPRI's Ben Sooter - Ep. 292
NVIDIA AI Podcast
Training vs. inference energy demand (04:03)
Ben distinguishes model training from inference, noting that inference accounts for around 80% of a model's lifetime compute.