

Interconnects
Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Episodes

49 snips
May 7, 2026 • 17min
Notes from inside China's AI labs
A travel-soaked dispatch from visits to leading Chinese AI labs. The conversation covers how organizational design and cultural incentives make companies effective fast followers. It highlights meticulous engineering across data, model architecture, and RL. It describes students as key hands-on contributors and contrasts China’s collaborative humility with Western individualism.

53 snips
May 4, 2026 • 9min
The distillation panic
A debate over whether calling API scraping “distillation attacks” will unfairly stigmatize a key ML technique. A look at legitimate distillation workflows, multi-stage training, and how hard it is to trace origins. Legal and policy gray areas around using closed-model APIs. Worries that overzealous rules could hurt Western research and push security-focused responses instead.

62 snips
Apr 15, 2026 • 7min
My bets on open models, mid-2026
A debate over whether open models can keep pace with closed labs and why a simple catch-up story is unlikely. Discussion of surprising parity on benchmarks and where closed models still hold robustness advantages. An exploration of how economics, distillation, RL training, and real-world distribution shape who wins. Highlights include growing sovereign and business demand for open weights and hidden demand from personal agents.

40 snips
Apr 11, 2026 • 6min
The inevitable need for an open model consortium
Conversation covers the case for a multi-company consortium to fund near-frontier open models. It examines recent turnover at open model labs and the funding pressures they face. It explores trade-offs between releasing strong open models and pursuing revenue-generating AI products. It surveys which firms might publish many fine-tunable models and what governance or funding mechanisms could help.

60 snips
Apr 9, 2026 • 9min
Claude Mythos and misguided open-weight fearmongering
A rapid takedown of the panic around a new Claude model and why broad anti-open-weight narratives conflate separate risks. Discussion of the benefits of a 6–18 month lag between closed and open models for safety. Exploration of what it actually takes to weaponize a model beyond released weights, including tools, serving costs, and attacker sophistication. Call for targeted measurement and monitoring rather than blanket bans.

77 snips
Apr 3, 2026 • 9min
Gemma 4 and what makes an open model succeed
A wide field of new open models competes with established players, creating hidden opportunities and higher surprise potential. Benchmarks at release tell only part of the story. Tooling, fine-tunability, and licensing shape real adoption. Gemma 4’s lineup and Apache 2 license spark debate about ease of use, the sweet spot around 30B models, and what will drive long-term success.

68 snips
Mar 22, 2026 • 13min
Lossy self-improvement
Debate over whether AI will accelerate itself into a rapid takeoff or hit practical limits. Definitions and history of recursive self-improvement are explored. Technical, political, and economic frictions that slow self-improvement are highlighted. Discussions cover AutoML lessons, diminishing returns from many agents, and why progress may feel linear rather than explosive.

73 snips
Mar 18, 2026 • 7min
GPT 5.4 is a big step for Codex
A lively take on how GPT 5.4 advances agent workflows by improving correctness, speed, ease of use, and cost. Discussion covers everyday engineering tasks that used to cause frequent failures and why the new model feels smoother. Comparisons highlight contrasting styles and practical trade-offs between different AI systems. Thoughts on Codex app polish, token efficiency, and future integrations round out the conversation.

55 snips
Mar 16, 2026 • 18min
What comes next with open models
A look at why 2025 pushed many companies to release open AI models and how one breakout win shifted strategies. A discussion of whether open models can economically compete with closed labs and the persistent performance gap. A breakdown of three future model classes and why small, specialized open models may be the most practical opportunity. Thoughts on systems, tools, and building diverse ecosystems instead of chasing frontier scale.

51 snips
Mar 6, 2026 • 36min
Dean Ball on open models and government control
Dean W. Ball, policy and governance commentator and author of the Hyperdimensional newsletter, explores how the Anthropic vs. DoW clash reshapes trust in open models. He discusses open weights as insurance against concentrated control. They cover funding paths, sovereign and regional initiatives, infrastructure and tooling gaps, and why open efforts may win long-term despite short-term hurdles.


