
Startup Project: Build the Future. Inside the Battle for AI Cloud Dominance — Why Cloud Builders like TensorWave Are Rethinking NVIDIA’s Monopoly | Jeff Tatarchuk, Co-Founder of TensorWave
Rethinking AI Compute Infrastructure: The TensorWave Approach
In this episode, Jeff Tatarchuk, co-founder of TensorWave, shares how his deep industry experience and innovative mindset are transforming AI compute infrastructure. We explore how building specialized data centers, focusing on AMD GPUs, and creating flexible ecosystems are shaping the future of scalable AI.
In this episode:
- The evolution of cloud companies and the rise of neoclouds focused on AI compute
- TensorWave’s unique strategy of deploying AMD GPUs in custom data centers
- Lessons learned from an earlier FPGA cloud business and the transition into GPU infrastructure
- The technical challenges and solutions in scaling data centers quickly amidst power and supply chain constraints
- The importance of software ecosystems, interoperability, and supporting AMD’s software stack
- How TensorWave differentiates itself from purely financial arbitrage models and Nvidia-only clouds
- AMD’s advantages in memory capacity, chiplet architecture, and software support
- The technical intricacies of CUDA versus ROCm, and efforts to build an open ecosystem
- Future vision: democratized, reliable, and flexible AI compute options for enterprise and labs
Timestamps:
00:00 – Introduction to TensorWave and the AI compute landscape
02:30 – The rise of neoclouds and innovation waves in cloud infrastructure
06:00 – How TensorWave’s FPGA cloud background shaped its GPU strategy
10:00 – Challenges in deploying large data centers: power, supply chain, and permitting
14:00 – Building and scaling AMD GPU data centers quickly and efficiently
19:00 – Software ecosystems: the CUDA moat and TensorWave’s ‘Beyond CUDA’ summit
23:00 – Market differentiation: technical and operational challenges in the neocloud space
27:00 – Supporting enterprise fine-tuning and large-scale training demands
32:00 – AMD’s technical advantages: VRAM, chiplet architecture, and software support
36:00 – Building an open, heterogeneous AI ecosystem beyond CUDA
40:00 – What success looks like: a resilient, accessible AI compute future
Resources & Links:
- Beyond CUDA Summit
- ScalarLM by Greg Diamos
- AMD MI300X Data Center Chip
- Nvidia H100
- ROCm Software Stack
This conversation offers a strategic look at why focused infrastructure development, software ecosystem support, and hardware differentiation are critical to the future of accessible, scalable AI compute. Whether you're building data centers, developing AI hardware, or simply following industry shifts, this episode provides valuable insight into how companies like TensorWave are reshaping the landscape.
