
OpenAI’s Shopping U-Turn Complications, Nvidia’s Groq Chip, Synthesia’s AI Video for Enterprise
The Information's TITV
Memory crunch: SRAM vs HBM in inference
David Levy outlines SRAM-on-chip designs and why inference is increasingly bottlenecked by memory bandwidth rather than compute.
Segment begins at 11:08.
Transcript