
AI Hustle: Make Money from AI
Anthropic Accuses Chinese AI Labs of Claude Mining
Mar 3, 2026
The hosts dig into allegations that Chinese AI labs used Anthropic's proprietary Claude model as raw training data, and into the controversial distillation technique behind the claim. The conversation covers how massive simulated chats and synthetic data can recreate a model's behavior, compares costly cloud AI services with local open-source alternatives, and debates the ethics and enforceability of copying in AI development.
AI Snips
How Distillation Can Clone A Model's Tone
- Distillation lets competitors replicate a model’s tone and behavior by generating millions of synthetic Q&A pairs from the target model.
- Hosts describe Chinese firms running ~16 million automated Claude chats across 24,000 accounts to train smaller, locally runnable models.
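The distillation workflow described above can be sketched in a few lines. This is a minimal illustration, not the actual technique used by any lab: `teacher_answer` is a hypothetical stand-in for a call to the target model's API, and the JSONL chat-message format is one common convention for fine-tuning data.

```python
import json

# Hypothetical stand-in for querying the teacher model; in a real pipeline
# this would be an API call to the proprietary model being distilled.
def teacher_answer(prompt: str) -> str:
    return f"Teacher-style answer to: {prompt}"

def build_distillation_set(prompts, path="distill.jsonl"):
    """Generate synthetic Q&A pairs from the teacher and write them as
    JSONL fine-tuning records for a smaller student model."""
    records = []
    for prompt in prompts:
        records.append({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": teacher_answer(prompt)},
            ]
        })
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    return records

pairs = build_distillation_set(["What is distillation?", "Why run models locally?"])
print(len(pairs))
```

Scaled up to millions of prompts, a dataset like this lets a small team fine-tune an open model to mimic the teacher's tone without ever seeing its weights.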
Small Open Models Erode Big AI Moats
- Distillation narrows the moat of big AI firms because small teams can copy outputs instead of building costly research stacks.
- The hosts argue cloned small models can be open source, run locally, and still deliver high-quality responses.
Use Local Open Models To Cut Experiment Costs
- Try open-source local models to cut costs on experimental projects instead of paying for expensive API tiers.
- Jaeden gives an example: switching from a $1,300/month ElevenLabs plan to running Alibaba's open Qwen3 TTS locally for voice cloning.
