Interconnects

Gemma 4 and what makes an open model succeed

Apr 3, 2026
A wide field of new open models competes with established players, creating hidden opportunities and higher surprise potential. Benchmarks at release tell only part of the story. Tooling, fine-tunability, and licensing shape real adoption. Gemma 4’s lineup and Apache 2 license spark debate about ease of use, the sweet spot around 30B models, and what will drive long-term success.
INSIGHT

Benchmarks Don’t Tell The Full Story

  • Benchmarks at release are an incomplete story for open models and often hide crucial practical issues.
  • Nathan Lambert points out that open models have higher variance and can surprise, so raw scores don't capture tool compatibility or real-world adaptability.
ADVICE

Assess Open Models By Five Practical Criteria

  • Evaluate new open models across five axes: performance, origin, license, tooling at release, and fine-tunability.
  • Nathan Lambert stresses that tooling and fine-tunability can take days to weeks to stabilize, and that they shape adoption.
INSIGHT

Hybrid Architectures Increase Tooling Friction

  • New hybrid architectures (gated delta nets, Mamba-style layers) often break "it just works" expectations.
  • Nathan notes tooling for these hybrids is frequently rough at release and requires community engineering to stabilize.