Stratechery

Agents Over Bubbles

Mar 16, 2026
A tight tour of three LLM inflection points reshaping AI and compute demand. Discussion of models that self-evaluate and multi-step agents that verify work without humans. Exploration of who will run agents and why enterprises will pay for AI productivity. Analysis of where profits may land, and how big tech integration strategies differ.
INSIGHT

Three LLM Inflection Points That Changed Reliability

  • LLMs evolved through three inflection points that made them markedly more reliable and useful.
  • ChatGPT made LLMs broadly usable, o1 added internal reasoning so models could evaluate their own outputs, and that reasoning sharply raised practical dependability.
INSIGHT

Agents Turn Models Into Task-Completing Systems

  • Agents add a harness that directs the model and invokes deterministic tools, enabling verification and iterative retries without human intervention.
  • That combination turns models into systems that can complete multi-step tasks like coding, checking for and remediating errors autonomously.
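The verify-and-retry loop described above can be sketched in a few lines. This is a minimal illustration, not the harness any particular product uses: `propose_code` is a stub standing in for an LLM call, and the deterministic tool here is simply Python's built-in `compile()` acting as a syntax check.

```python
# Minimal sketch of an agent "harness": a model proposes code, a
# deterministic tool verifies it, and the loop retries until the
# check passes or attempts run out -- no human in the loop.

def propose_code(task: str, attempt: int) -> str:
    # Stub standing in for an LLM API call; returns a broken draft
    # first, then a corrected one on the retry.
    drafts = [
        "def add(a, b) return a + b",          # syntax error
        "def add(a, b):\n    return a + b",    # valid
    ]
    return drafts[min(attempt, len(drafts) - 1)]

def verify(code: str) -> bool:
    # Deterministic check: does the proposed code even parse?
    try:
        compile(code, "<agent>", "exec")
        return True
    except SyntaxError:
        return False

def run_agent(task: str, max_attempts: int = 3):
    for attempt in range(max_attempts):
        code = propose_code(task, attempt)
        if verify(code):
            return code  # verified result
    return None

print(run_agent("write an add function") is not None)  # → True
```

A real harness would swap the stub for a model call and the parse check for stronger tools (test suites, linters, compilers), but the control flow — propose, verify deterministically, retry — is the same.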
INSIGHT

Agents Drive Massive Compute Demand And CapEx

  • Agent workloads dramatically increase compute demand because they call reasoning models many times per task and also need CPU-based deterministic tooling.
  • Agents raise overall usage, too, since they are more useful, so demand outstrips existing supply and justifies hyperscaler CapEx.