Latent Space: The AI Engineer Podcast

Claude Code for Finance + The Global Memory Shortage: Doug O'Laughlin, SemiAnalysis

Feb 24, 2026
Doug O'Laughlin is the founder of SemiAnalysis and a semiconductor analyst known for deep research on memory, GPUs, and supply chains. He discusses how Claude Code powers analyst workflows and agent swarms for automation, and explores the memory supply crunch, HBM vs. DRAM pressures, and how AI tooling is reshaping information work and institutional memory.

LLMs Compressed Multi‑Year Research Into Days

  • Doug used LLMs to compress a lifetime of memory-cycle research into days, rapidly assembling historical NAND/DRAM data, covariates, and narratives.
  • This enabled fast creation of dashboards and regime summaries that previously required months of human work.

Always Keep Expert Review For Final Judgment

  • Keep humans in the loop for meta-level judgment; Doug stresses expert review of LLM output, since the model still makes frequent mistakes.
  • Train reviewers to spot "slop" and provide the final 5% of artisanal judgment that matters.

Benchmarks Suggest LLMs Reach White Collar Parity

  • GDPval-style benchmarks show LLMs already reaching parity with many white-collar experts, implying broad automation potential across information work.
  • Doug treats this as an AGI definition centered on automation of common jobs, distinct from ASI claims.