Interconnects

My bets on open models, mid-2026

Apr 15, 2026
The hosts debate whether open models can keep pace with closed labs and why a simple catch-up story is unlikely. They discuss the surprising parity on benchmarks, where closed models still hold robustness advantages, and how economics, distillation, RL training, and real-world distribution shape who wins. They also highlight growing sovereign and business demand for open weights, plus hidden demand from personal agents.
INSIGHT

Open Models Won't Win Every Arena

  • Open models will not keep up with closed labs across every area despite some parity on benchmarks.
  • Nathan Lambert frames the future as a complex balance driven by capability gaps, funding, distillation, regulation, and user adoption.
INSIGHT

Open Labs Match Benchmarks Through Talent And Compute

  • Open-weight labs have kept pace on established benchmarks due to abundant talent and sufficient compute.
  • This trend continued through late 2025 and reflects fast-following technical strength rather than a monopoly on compute.
INSIGHT

Closed Models Offer Hard To Measure Robustness

  • Closed models show greater robustness and practical usefulness than open models with similar benchmark scores.
  • Nathan Lambert notes closed models possess hard-to-measure qualities that matter for continuous, real-world assistance like knowledge work.