Dwarkesh Podcast

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute

Mar 13, 2026
Dylan Patel, founder and chief analyst at SemiAnalysis, maps the real choke points behind AI compute growth. He gets into why older H100s can become more valuable over time, how Nvidia locked in TSMC capacity early, why memory may be the nastiest crunch ahead, and why ASML could become the limiting factor by 2030. They also touch on power buildouts, China timelines, robots, and Taiwan risk.
INSIGHT

Fast Construction Does Not Solve The Fab Problem

  • Elon-style speed may help build fab shells faster, but process technology remains the much harder bottleneck.
  • Dylan Patel says building a fab's clean room can be accelerated, but developing and integrating the chipmaking process cannot be rushed nearly as much.
INSIGHT

3D DRAM Could Help But Not Soon

  • 3D DRAM could eventually ease AI memory bottlenecks by massively increasing bits per lithography pass.
  • Dylan Patel says roadmaps still use EUV and require huge fab retooling, so relief likely comes only near decade-end or early next decade.
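The "bits per lithography pass" point can be made concrete with a back-of-envelope sketch. This is an illustrative calculation only: the layer count and normalization below are assumptions for the example, not figures from the episode.

```python
# Sketch of why 3D stacking eases lithography demand: if bits scale roughly
# with stacked layers, each critical litho pass patterns proportionally more
# bits, so EUV passes needed per bit drop by about the layer count.

def bits_per_pass(bits_2d_per_pass: float, layers: int = 1) -> float:
    """Bits patterned per critical litho pass, assuming bits scale linearly with layers."""
    return bits_2d_per_pass * layers

# Hypothetical example: a 64-layer stack vs. a planar (1-layer) process.
B = 1.0  # normalized bits per planar pass (assumed unit)
gain = bits_per_pass(B, layers=64) / bits_per_pass(B, layers=1)
print(f"Relative bits per litho pass: {gain:.0f}x")
```

The linear-with-layers scaling is the same logic that played out with 3D NAND; actual 3D DRAM gains would depend on which layers still need critical (EUV) patterning.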
INSIGHT

Power Can Scale Far Beyond Current AI Needs

  • Power will not be the end-decade limiter because many expensive but workable generation paths can be deployed.
  • Dylan Patel lists aeroderivatives, ship engines, reciprocating engines, fuel cells, batteries, and behind-the-meter builds that can add hundreds of gigawatts.
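The scale of "hundreds of gigawatts" from small modular generation can be sized with rough arithmetic. The per-unit ratings below are ballpark public figures chosen for illustration (e.g. ~50 MW for an aeroderivative-class turbine), not numbers from the episode.

```python
# Rough sizing sketch: how many units of each generation type it would take
# to add 100 GW. Ratings are assumed, order-of-magnitude figures.

TARGET_GW = 100

# Approximate nameplate rating per unit, in MW (assumptions for illustration).
unit_mw = {
    "aeroderivative turbine": 50,   # ~50 MW class
    "reciprocating engine": 20,     # large gas recip, ~20 MW class
    "fuel cell block": 10,          # utility-scale block, assumed
}

for name, mw in unit_mw.items():
    units = TARGET_GW * 1000 / mw  # GW -> MW, then divide by unit rating
    print(f"{name}: ~{units:,.0f} units for {TARGET_GW} GW")
```

The point of the arithmetic is that each path needs thousands of factory-built units rather than a handful of bespoke plants, which is why these options can be deployed quickly even if they are expensive per megawatt-hour.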