
Limitless: An AI Podcast
Anthropic Just Got Hacked by China. These Are the New Front Lines.
Feb 25, 2026

A deep dive into allegations that Chinese open source labs used distillation to copy massive amounts of conversational data. The conversation explores why distillation is powerful for scaling models and how it can strip safety limits. They debate the legal, ethical, and national security tensions as cheap open models reshape industry power and geopolitics.
Anthropic's 16 Million Conversation Claim
- Anthropic says three Chinese labs generated 16 million fake Claude conversations using 24,000 sham accounts to train their models.
- Ajaz / Limitless Host names DeepSeek (150k conversations), Moonshot (3.4M), and Minimax (13M) as the primary actors in the dataset extraction.
Top Labs Already Distill Their Own Models
- Distillation is industry-standard: Anthropic distilled Claude Opus into Haiku and Google made Gemini Nano from Gemini Ultra.
- Ajaz / Limitless Host uses these examples to show the practice isn't unique to Chinese labs, and to raise the question of whether it is illegal at all.
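For context on the technique the snips keep referencing: in knowledge distillation, a smaller student model is trained to match a larger teacher's full output distribution rather than hard labels. A minimal sketch of the core loss (function names and numbers here are illustrative, not from any lab's codebase):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T flattens the distribution,
    # exposing the teacher's "dark knowledge" about near-miss classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Minimizing this pushes the student to mimic the teacher's entire
    output distribution, not just its top-1 prediction.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0
    )

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs a positive loss.
identical = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

The episode's controversy is about doing this across API boundaries: harvesting a proprietary model's outputs (the 16M conversations above) to serve as the teacher signal, rather than distilling a model you own.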
Terms Of Service Don’t Stop Cross-Border Distillation
- Anthropic claims legal violations via terms of service and geographic restrictions, but enforcement is weak when actors operate from China.
- Josh Kale and Ajaz stress that the burden of preventing access falls on Anthropic, and that cross-border legal recourse is limited.
