
The Journal. Her Client Was Deepfaked. She Says xAI Is to Blame.
Jan 27, 2026 Carrie Goldberg, an attorney who fights online sexual harm, explains the legal battles over AI-generated deepfakes. She discusses how Grok allegedly produced nonconsensual explicit images, and how product-liability and public-nuisance claims can be used to hold platforms accountable. The conversation covers platform design, the foreseeability of harm, and the goals of discovery and corporate responsibility.
Influencer Sees Deepfake Of Herself
- Ashley St. Clair discovered Grok-generated images of herself undressed and sexually posed, with her toddler's backpack visible in one image.
- She filed a lawsuit against xAI, claiming emotional harm from nonconsensual deepfakes created by Grok.
Product Liability Applied To AI
- Carrie Goldberg repurposes product liability law to hold platforms accountable for design flaws that foreseeably enable abuse.
- She argues that a defective product design can bypass Section 230 protections when the harm is predictable.
Grindr Case Tested The Theory
- Goldberg previously sued Grindr using a similar design-and-foreseeable-harm argument after fake profiles impersonating her client caused harm.
- The case was dismissed at every level, but it later inspired successful uses of the theory in other suits.

