TBPN

FULL INTERVIEW: Why I Think Nvidia Is Perfectly Positioned In The AI Race

Mar 30, 2026
Tae Kim, a technology analyst and author covering semiconductors and AI infrastructure, breaks down why Nvidia still looks strong in the AI race. The conversation hits inference demand, supply lockups, GPU and CPU shortages, open models, vertical AI agents, and why fears around depreciation, pullbacks, and a future compute glut may be overdone.
INSIGHT

Why Tae Kim Thinks Nvidia Selloff Is Overblown

  • Tae Kim argues Nvidia’s selloff looks more like macro fear than business deterioration, echoing prior tariff and DeepSeek panics.
  • He says AI capex fears and the Iran oil shock are masking strong fundamentals, noting that a prior 30% drawdown occurred while the business kept performing.
INSIGHT

Inference Demand Is Driving A Fresh Compute Shortage

  • Tae Kim says inference demand is exploding from AI agents and coding assistants, creating real AI compute shortages across major labs.
  • He cites talks with Ian Buck and engineers at Meta, Google, and Nvidia, plus users running sneaker bots for scarce B200 GPUs.
INSIGHT

Why Groq Fits Nvidia's Inference Strategy

  • Tae Kim sees Nvidia’s Groq move as a pragmatic extension of its stack, not a rejection of GPUs or a full pivot into competing with customers.
  • He says Groq could handle roughly 25% of low-latency inference while Vera Rubin covers 75%, matching the new coding-agent workload mix.