The Compound and Friends

Nvidia GTC Highlights, Uber Is Too Cheap, Cliffwater and the Private Credit Panic

Mar 17, 2026
Nvidia's GTC vision and the shift from training to constant inference in AI. The rise of AI in physical products and robotics, plus CUDA's competitive moat. Concerns around private credit, rapid redemptions, and controversy at an asset manager. A case for why Uber might be undervalued given its partnerships and autonomous-driving plans.
INSIGHT

Nvidia's Inference Inflection Is The Real Product

  • Nvidia's next phase is the "inference inflection" where AI moves from model training to continuous real-world usage, turning inference into a metered utility.
  • Jensen Huang highlighted Vera Rubin chips for prefill and Grok-derived silicon for decode, aiming to embed Nvidia silicon across inference workloads.
INSIGHT

CUDA Gives Nvidia A Defensive Moat

  • Nvidia's biggest moat remains CUDA, the 20-year software platform that runs across public and private clouds.
  • Competitors may offer cheaper or more abundant chips, but CUDA's ecosystem makes Nvidia hard to displace for many workloads.
ANECDOTE

Carpet Cleaner Chatbot Booked Appointments Overnight

  • Josh shared a Twitter story in which a carpet-cleaning company's chatbot handled a customer's booking request after hours, illustrating practical AI automation gains.
  • The bot took the request, corrected an email address, and booked an appointment while staff were away, delivering real cost savings.