Will Eatherton, Cisco infrastructure and networking leader with ASIC and system design roots. He walks through Cisco's G300 silicon and 100Tbps chassis. He explains why AI data centers are switching to liquid cooling. He outlines scale-out networking changes for massive GPU clusters and the telemetry and management needed to run them.
Duration: 31:58
ANECDOTE
From ASIC Designer To Cisco Infrastructure Lead
Will started his career in ASIC design and returned to Cisco, bringing deep hardware and systems experience.
That background shaped his focus on silicon, system architecture, and collaboration with hyperscalers.
INSIGHT
G300 Doubles Bandwidth, Optimizes AI Traffic
Cisco's G300 doubles chip bandwidth to 100Tbps while adding smarter packet handling features.
These silicon advances reduce AI job completion time by improving load balancing and throughput.
INSIGHT
Programmable Chips Handle Diverse Traffic
The G300 adds programmable load-balancing algorithms that handle both massive single flows and many small flows concurrently.
These architectural features yield up to ~25–30% improvement in optimized load-balancing tests versus prior generations.
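To see why load balancing matters for mixed flow sizes, here is a toy sketch (not Cisco's actual algorithm) contrasting classic per-flow hashing, where one "elephant" flow pins to a single link, with per-packet spraying, which spreads every flow's traffic across all links. The flow names and sizes are invented for illustration.

```python
def flow_hash_balance(flows, n_links):
    """Per-flow hashing (ECMP-style): all traffic of a flow sticks to
    one link, so a single elephant flow can overload that link."""
    load = [0] * n_links
    for flow_id, size in flows:
        load[hash(flow_id) % n_links] += size
    return load

def packet_spray_balance(flows, n_links):
    """Per-packet spraying: each unit of traffic is dealt round-robin
    across all links, evening out elephants and mice."""
    load = [0] * n_links
    i = 0
    for _, size in flows:
        for _ in range(size):
            load[i % n_links] += 1
            i += 1
    return load

# One elephant flow plus many mice -- a rough stand-in for an
# AI-training traffic mix (numbers are made up).
flows = [("elephant", 800)] + [(f"mouse{i}", 2) for i in range(100)]
hashed = flow_hash_balance(flows, 4)      # elephant's link carries >= 800
sprayed = packet_spray_balance(flows, 4)  # 1000 units / 4 links = 250 each
print("per-flow hashing, max link load:", max(hashed))
print("packet spraying,  max link load:", max(sprayed))
```

With hashing, the busiest link carries at least the whole elephant flow; with spraying, all four links end up evenly loaded, which is the kind of imbalance reduction adaptive load balancing targets.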
Is this the most powerful network switch ever built? In this interview from Cisco Live, we look at the new generation of Cisco 8000 and Nexus switches, capable of routing 100 Terabits per second. We break down why AI data centers are being forced to move from air cooling to liquid cooling, and how a single switch chassis can now handle the equivalent of the mobile traffic of 100 million people.
AI models are growing faster than the infrastructure can keep up. To solve this, the network switch had to evolve. In this video, I talk to Will from Cisco about the engineering challenges of building the "G300" generation of switches, hardware so dense that air cooling is no longer enough.
We discuss the massive architectural shift occurring in data centers, where liquid-cooled switches are becoming the new standard to support 1.6T Ethernet ports and massive GPU clusters.
Key Hardware Topics:
• The 100 Terabit Chassis: How Cisco's architecture handles massive throughput.
• Liquid Cooling: Why switches are adopting "direct-to-chip" cooling just like gaming rigs.
• Scale-Out Networking: How these switches manage congestion for AI training jobs (Job Completion Time).
• Career Insights: Will manages 5,000 engineers and explains why understanding the physical layer and hardware constraints is a superpower for modern developers.
Big thanks to Cisco for sponsoring my trip to Cisco Live EMEA and for changing my life and the lives of many other people.
// Will Eatherton SOCIAL //
LinkedIn: / willeatherton
Newsroom: https://newsroom.cisco.com/c/r/newsro...
// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal
YouTube: / @davidbombal
Spotify: open.spotify.com/show/3f6k6gE...
SoundCloud: / davidbombal
Apple Podcast: podcasts.apple.com/us/podcast...
// MY STUFF //
https://www.amazon.com/shop/davidbombal
// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com
// MENU //
0:00 - Coming up
0:49 - Will Eatherton introduction and projects // Hyperscale, neocloud & enterprises
8:03 - New Cisco hardware // Silicon One G300
13:18 - Data centers + AI + GPUs
16:27 - G300 use case & Cisco Nexus
21:56 - Liquid-cooled switches
24:12 - Networking as a career path
28:41 - Development and opportunities in networking
30:14 - Conclusion
Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!
Disclaimer: This video is for educational purposes only.
#cisco #ciscolive #ciscoemea