"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Liability for AI Harms: How Ancient Law Can Govern Frontier Technology Risk, with Prof Gabriel Weil

Jul 26, 2025
In this engaging discussion, Gabriel Weil, an Assistant Professor of Law at Touro University with expertise in AI liability, shares his insights on harnessing traditional liability law to govern AI development. He argues for using existing negligence and products liability frameworks to hold developers accountable. The conversation dives into real-world scenarios like the Character AI case and voice cloning risks, and proposes the innovative use of punitive damages to incentivize safer AI practices by making missteps far costlier for developers.
ADVICE

Make AI Developers Own Third-Party Risks

  • AI developers should treat third-party risks as their own to promote reasonable risk-reward tradeoffs.
  • Reasonable safety improvements must matter financially to incentivize safer model deployment.
INSIGHT

User Harms vs. Third-Party Liability

  • Harms to AI users themselves, such as emotional distress, are well handled by negligence law and terms-of-service enforcement.
  • The core focus of liability reform should be third-party harms that fall outside the user relationship.
INSIGHT

Liability for Autonomous AI Harm

  • Developers should be liable when misaligned AI autonomously causes harm that users neither knew of nor intended.
  • Liability should cover harmful behaviors an AI undertakes beyond user instructions or expectations.