David Bombal

#536: Inside the Cisco 8000: 100Tbps Capacity

Feb 18, 2026
Will Eatherton is a Cisco infrastructure and networking leader with roots in ASIC and system design. He walks through Cisco's G300 silicon and 100Tbps chassis, explains why AI data centers are switching to liquid cooling, and outlines scale-out networking changes for massive GPU clusters along with the telemetry and management needed to run them.
ANECDOTE

From ASIC Designer To Cisco Infrastructure Lead

  • Will started his career in ASIC design and returned to Cisco, bringing deep hardware and systems experience.
  • That background shaped his focus on silicon, system architecture, and collaboration with hyperscalers.
INSIGHT

G300 Doubles Bandwidth, Optimizes AI Traffic

  • Cisco's G300 doubles chip bandwidth to 100Tbps while adding smarter packet handling features.
  • These silicon advances reduce AI job completion time by improving load balancing and throughput.
INSIGHT

Programmable Chips Handle Diverse Traffic

  • The G300 adds programmable algorithms to handle both massive single flows and many small flows concurrently.
  • Architectural features yield up to ~25–30% improvement in optimized load-balancing tests versus prior generations.
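The episode does not detail the G300's actual algorithms, but the idea behind balancing a few massive "elephant" flows alongside many small "mice" flows can be sketched with two hash-based strategies. Everything here is an illustrative assumption (the function names, the 4-link topology, the flowlet mechanism), not Cisco's implementation:

```python
import hashlib

LINKS = 4  # hypothetical ECMP group of four uplinks


def ecmp_link(five_tuple: tuple) -> int:
    """Classic per-flow ECMP: hash the 5-tuple once, so every packet of a
    flow is pinned to the same uplink. A single elephant flow can then
    saturate one link while others sit idle."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return digest[0] % LINKS


def flowlet_link(five_tuple: tuple, flowlet_id: int) -> int:
    """Flowlet-style balancing: an idle gap inside a flow starts a new
    'flowlet', which is re-hashed and may land on a different uplink.
    One elephant flow is spread across links without reordering packets
    that are already in flight."""
    digest = hashlib.sha256(repr((five_tuple, flowlet_id)).encode()).digest()
    return digest[0] % LINKS


if __name__ == "__main__":
    elephant = ("10.0.0.1", "10.0.0.2", 49152, 4791, "UDP")  # e.g. an RDMA flow
    # Per-flow ECMP pins the whole flow to a single uplink.
    print("ECMP uplink:", ecmp_link(elephant))
    # Successive flowlets of the same flow can use several uplinks.
    print("Flowlet uplinks:", sorted({flowlet_link(elephant, i) for i in range(16)}))
```

The point of the sketch is the trade-off the insight describes: per-flow hashing preserves packet order but load-balances poorly when a few flows dominate, while finer-grained (flowlet or packet-spray) schemes spread load better, which is one plausible way improved load balancing shortens AI job completion time.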