
The Lawfare Podcast Lawfare Archive: Pam Samuelson on Copyright's Threat to Generative AI
May 10, 2026 Pamela Samuelson, a UC Berkeley law professor and digital copyright pioneer, examines how copyrighted works power generative AI. She discusses training data ingestion, major lawsuits challenging model training, fair use debates, potential remedies up to model destruction, and how U.S. uncertainty compares with overseas approaches.
Developers Rely On Google Books Analogy
- AI developers liken training on scraped web data to Google Books’ fair use, arguing that decomposing works into tokens is non-exploitative.
- Samuelson explains the technical view: tokenization serves computational purposes rather than republishing protected expression.
Litigation Is Fragmented Across Mediums
- Multiple active lawsuits target different models and outputs, spanning images, software, and books, so outcomes will vary by case.
- Samuelson cites Getty Images v. Stability AI, the Andersen class actions, Doe v. GitHub (Copilot), and recent suits against OpenAI and Meta.
Legal Remedies Could Threaten Model Existence
- A dramatic judicial remedy could require destruction of models found to infringe, not merely damages, posing an existential risk to some systems.
- Samuelson warns that courts can order impoundment and destruction of infringing copies, a category that could extend to trained models.

