
Stanford Legal AI, Liability, and Hallucinations in a Changing Tech and Law Environment
May 15, 2025

Daniel Ho, a leading law professor at Stanford, and Mirac Suzgun, a JD/PhD student focused on AI in law, discuss the integration of AI technology into the legal field. They explore the phenomenon of AI hallucinations, in which these tools generate fictitious legal citations, raising serious concerns about accuracy. The conversation delves into the challenges of AI misreading legal precedents, the effects of biased training data, and the need for human oversight. Their insights highlight both the promise and the peril of using AI in legal practice.
AI Snips
Justice Ginsburg Confused by AI
- The AI confused Justice Ginsburg with her daughter in its legal citations.
- It generated a fake dissent mixing Supreme Court history with copyright law.
AI's Lack of Humility
- AI lacks the ability to admit ignorance or abstain from answering.
- This leads to confident but potentially false answers, which heightens risk in legal contexts.
AI Risks Undermining Access to Justice
- AI legal tools misfire most often in trial courts and on questions with uncertain legal premises.
- This undermines AI's potential to improve access to justice for underrepresented litigants.
