Venture with Grace

Jeff Tatarchuk, TensorWave Co-Founder on AI Compute & GPU Clouds

Feb 7, 2026
Jeff Tatarchuk, co-founder and CGO of TensorWave and a serial entrepreneur, builds hyperscale AI compute on AMD GPUs. He discusses securing GPU and FPGA capacity, building capital-intensive AI/cloud infrastructure, AMD vs. NVIDIA strategy, designing massive liquid-cooled training clusters, and practical GPU procurement, deployment, and financing for training and inference.
INSIGHT

Optionality Matters In GPU Supply

  • Market risk arises when a single supplier dominates; customers seek optionality beyond NVIDIA.
  • AMD's open-standards approach and broad portfolio (CPUs, GPUs, FPGAs) position it as a sustainable alternative.
ANECDOTE

Building A Large AMD Training Cluster

  • TensorWave built an 8,192-GPU liquid-cooled AMD MI325X training cluster to prove AMD can train at scale.
  • The cluster served as a stake in the ground, building credibility and customer demand for AMD-based training.
ADVICE

Value Memory (HBM) For Production Models

  • Prefer GPUs with more HBM for better inference performance and larger context handling.
  • Larger, closer memory reduces latency and enables serving bigger models in production.