
The Tech Strategy Podcast: 2 Problems with Scaling Agentic AI (281)
Apr 21, 2026
A deep dive into two scaling problems for agentic AI: exploding data demands and the need for a robust context layer. The discussion covers single‑agent versus multi‑agent workflows and new ways to measure scale, such as agent counts and token budgets. The practical focus is on layered data architecture, vector stores, and quick wins from modernizing high‑impact workflows.
Agentic AI Greatly Increases Data Volume
- Scaling agentic AI multiplies data demands because agents run continuously rather than episodically.
- Jeff Towson notes that agents produce far higher data volumes and constant activity, so data quality and flow must be sustained nonstop.
Multi-Agent Teams Require Shared Interoperable Data
- Agents need many interoperable data sets and models and often operate as multi-agent teams.
- Towson explains that when tasks are split across specialized agents, coordination requires shared knowledge graphs and consistent data interfaces.
Measure Scale By Agents Tokens And Data Quality
- Traditional scale metrics such as headcount become less relevant for agent-first firms; instead, measure agents, tokens, and data quality.
- Towson argues tokens and data quality multiply agent productivity, shifting how productive capacity is assessed.
