
#567: Why Power Is Becoming a Major Problem for AI in 2026
Mar 28, 2026
Nathan Jokel, Head of Corporate Strategy at Cisco, who leads long-term AI and security partnerships, discusses the future of AI data centers. He covers Cisco and NVIDIA's work to remove networking bottlenecks with 1.6T ports and GB300 silicon. Topics include secure AI infrastructure, observability and security (Splunk, eBPF, Hypershield), power constraints for data centers, and preparations for post-quantum cryptography and quantum networking.
Cisco and NVIDIA Built An Integrated Secure AI Factory
- Cisco and NVIDIA created the Secure AI Factory to deliver integrated AI infrastructure, combining Cisco servers, networking, and security with NVIDIA GPUs.
- The stack includes networking tuned for GPU traffic, Splunk for observability, and Cisco management layers to reduce deployment friction.
Advanced Congestion Control Prevents Costly Training Slowdowns
- Cisco integrated NVIDIA Spectrum technology and advanced congestion control into its switches to prevent packet loss from impacting training jobs.
- Packet loss or latency can stall expensive training runs; Spectrum's congestion-control handshakes keep GPU utilization high.
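The idea behind the congestion control described above can be sketched with a toy AIMD (additive-increase, multiplicative-decrease) rate controller, loosely in the spirit of the ECN-feedback schemes used on RDMA training fabrics. This is an illustrative assumption, not Cisco's or NVIDIA's actual implementation; the class name, constants, and feedback interface are all invented for the example.

```python
class AimdRateController:
    """Toy sender-side rate controller reacting to congestion marks."""

    def __init__(self, line_rate_gbps: float, min_rate_gbps: float = 1.0):
        self.line_rate = line_rate_gbps
        self.min_rate = min_rate_gbps
        self.rate = line_rate_gbps  # start optimistically at full line rate

    def on_feedback(self, congestion_marked: bool) -> float:
        """Update the send rate from one round of switch feedback.

        A congestion mark (e.g. an ECN-marked packet) triggers a
        multiplicative decrease so queues drain before packets drop;
        a clean round triggers a small additive increase back toward
        line rate.
        """
        if congestion_marked:
            self.rate = max(self.min_rate, self.rate / 2)  # back off fast
        else:
            self.rate = min(self.line_rate, self.rate + 0.5)  # recover slowly
        return self.rate


ctl = AimdRateController(line_rate_gbps=400.0)
ctl.on_feedback(congestion_marked=True)   # 400 -> 200 Gbps
ctl.on_feedback(congestion_marked=True)   # 200 -> 100 Gbps
ctl.on_feedback(congestion_marked=False)  # 100 -> 100.5 Gbps
```

The asymmetry (halve on congestion, creep back up otherwise) is the point: slowing senders before switch buffers overflow avoids the retransmissions and stalls that are far more expensive for a synchronized GPU training job than a briefly reduced rate.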
Provide Familiar Management To Speed AI Adoption
- Give enterprise IT familiar management interfaces when deploying AI infrastructure to reduce adoption friction.
- Cisco offers Nexus or Hyperfabric so IT teams don't need to learn network OSes like SONiC, used on NVIDIA switches, to manage AI systems.
