
The Everything Feed - All Packet Pushers Pods HN793: A Deep Dive Into High-Performance Switch Memory
Aug 22, 2025
LJ Wobker, a Principal Engineer at Cisco with expertise in high-performance memory systems, digs into the intricacies of high-performance switch memory. He compares TCAM, SRAM, and DRAM, highlighting their trade-offs across networking functions. The conversation covers memory management in switches and the complex challenges of packet processing. LJ also explains how TCAM enables fast data retrieval, the significance of ASIC interfaces, and the evolving demands of network performance in the face of rising memory bandwidth requirements.
TCAM: Parallel Masked Matching Engine
- TCAM performs masked, massively parallel lookups and returns match vectors, making it ideal for ACLs and flexible matching.
- TCAMs are deterministic but extremely expensive in area, power, and heat when scaled up.
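The masked, parallel matching described above can be sketched in software. This is a minimal illustrative model, not how the hardware is built: real TCAMs compare the key against every entry simultaneously in silicon, while the sketch loops; entry order stands in for the priority encoder, and the 32-bit keys and `TCAM` class here are assumptions for the example.

```python
class TCAM:
    """Software model of a ternary CAM: each entry stores a value and a
    mask; mask bits set to 0 are 'don't care' positions."""

    def __init__(self):
        self.entries = []  # list of (value, mask, result), highest priority first

    def add(self, value, mask, result):
        self.entries.append((value, mask, result))

    def lookup(self, key):
        # Build the match vector: entry i matches when the key agrees with
        # the stored value on every bit the mask cares about. In hardware
        # this happens for all entries at once; here we simulate with a loop.
        matches = [(key & mask) == (value & mask)
                   for value, mask, _ in self.entries]
        # Priority encoder: the first (highest-priority) matching entry wins.
        for i, hit in enumerate(matches):
            if hit:
                return self.entries[i][2]
        return None  # no entry matched

# ACL-style example on 32-bit addresses (more-specific rule listed first).
tcam = TCAM()
tcam.add(0x0A000000, 0xFFFF0000, "deny 10.0/16")   # match top 16 bits
tcam.add(0x0A000000, 0xFF000000, "permit 10/8")    # match top 8 bits
print(tcam.lookup(0x0A000001))  # → deny 10.0/16
print(tcam.lookup(0x0A800001))  # → permit 10/8
```

Because every entry is compared at once, lookup latency is deterministic regardless of table size — but each entry needs its own comparison logic, which is exactly why area, power, and heat scale so badly.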
TCAM Scaling Is Limited By Heat And Cost
- You can't simply scale TCAM capacity arbitrarily due to transistor count and heat; big TCAMs can physically melt the chip.
- Cost and thermal limits force hybrid designs and strict resource trade-offs.
Engine 5's Million-Entry TCAMs
- LJ recalled the Engine 5 design, which used three 1M-entry TCAMs to implement FIB, QoS, and NetFlow lookups, delivering high capability at steep cost.
- That design solved problems then but would be prohibitively expensive and power-hungry today.
