
#561: Why 1 small network FAIL breaks your massive 2026 AI job
Mar 18, 2026
Hendrik Blokhuis, Cisco CTO for EMEA partners, and Gary Middleton, NTT Data's networking lead in Europe, discuss the infrastructure pressures created by AI. They unpack NeoClouds versus hyperscalers, and highlight data sovereignty, extreme power and cooling demands, single-point network failure risks, edge inferencing for robotics, and the skills that 2026 networks will urgently need.
Resource Scarcity Shapes Neocloud Strategy
- The major constraints for NeoClouds are scarce resources: electricity, GPUs, and memory, each with only a few suppliers worldwide.
- Providers stay viable by locating where those resources are cheaper or by reusing waste energy.
Power And Cooling Decide AI Data Center Design
- Power and cooling drive data center placement and design; the choice between edge and hyperscale depends on workload density and latency.
- Liquid cooling and rack-level kilowatt planning (80–120 kW per rack) are now practical requirements for AI compute.
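The rack-level figures above translate into simple capacity arithmetic. A minimal sketch, assuming the 80–120 kW/rack range quoted in the episode; the function name and the PUE figure are illustrative, not from the episode:

```python
def cluster_power_mw(racks: int, kw_per_rack: float, pue: float = 1.2) -> float:
    """Estimate total facility draw in MW for a GPU cluster.

    pue: power usage effectiveness -- liquid-cooled AI halls are often
    quoted in the 1.1-1.3 range (an illustrative assumption here).
    """
    return racks * kw_per_rack * pue / 1000.0

# A 100-rack pod at the high end of the quoted range:
print(cluster_power_mw(100, 120))  # ~14.4 MW of facility power
```

Even a modest pod at these densities lands in the multi-megawatt range, which is why power availability now drives site selection.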
Redesign Data Centers For Scale Up Out And Across
- AI demands redesigning data centers across three dimensions: scale-up, scale-out and scale-across to manage distributed GPU clusters.
- Cisco cites Nexus 9000 switches top-of-rack, with 800G optics and Cisco 8000-series routing, to let multiple sites operate as one brain.
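The three scaling dimensions compose as simple multipliers. A minimal sketch of that decomposition; the class name and all numbers are illustrative, not from Cisco or the episode:

```python
from dataclasses import dataclass

@dataclass
class GpuFabric:
    gpus_per_node: int   # scale-up: GPUs sharing one node's high-speed interconnect
    nodes_per_site: int  # scale-out: nodes joined by the data-center fabric
    sites: int           # scale-across: sites linked by long-haul optics

    def total_gpus(self) -> int:
        # Each dimension multiplies the addressable GPU pool.
        return self.gpus_per_node * self.nodes_per_site * self.sites

# e.g. 8-GPU nodes, 512 nodes per site, 3 sites operating as one cluster
print(GpuFabric(8, 512, 3).total_gpus())  # 12288
```

The point of "scale-across" is that the third multiplier only works if the inter-site network is fast and reliable enough for the sites to behave as one cluster.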
