Sharp Tech with Ben Thompson

(Preview) Mythos and Project Glasswing, The Year of Anthropic Continues Apace, Q&A on the NYT, Altman, De-globalization

Apr 10, 2026
The hosts dig into Anthropic’s Mythos reveal and Project Glasswing, focusing on security risks and model privacy. They debate why code makes large models uniquely dangerous and whether distillation can be stopped, then turn to Anthropic’s scaling, business incentives, and ties to big tech and government. The conversation closes with media adaptation, Sam Altman’s trajectory, and the implications of de-globalization.
INSIGHT

LLMs Are Naturally Suited To Code Vulnerability Discovery

  • Large language models excel at code because software is predictable language at scale, making bug-finding and exploitation increasingly feasible.
  • Ben Thompson warns Mythos-level capabilities should not surprise us given models' ability to process massive, structured codebases.
INSIGHT

Distillation Is Hard To Stop In Practice

  • Preventing model distillation is extremely difficult because attackers can query APIs at scale to recreate behavior.
  • Ben Thompson compares policing model copies to policing chips versus uranium — copies are much harder to detect and stop.
ADVICE

Restrict High Power Models To Paying Preview Partners

  • Limit access to the most powerful models to paying, vetted customers to manage compute strain and reduce distillation risk.
  • Anthropic uses higher API pricing and private previews to ration usage during capacity crunches.