
The Artificial Human: What Do I Do if AI Gets Me Wrong?
Jun 4, 2025
Margaret Mitchell, chief ethics scientist at Hugging Face, explains why large language models hallucinate and why correcting already-trained models is hard. Kleanthi Sardeli, a data protection lawyer specializing in GDPR, outlines legal routes for people harmed by false AI outputs. They discuss the Norwegian ChatGPT defamation case, the technical limits on correction, privacy-by-design remedies, and when to seek help from a lawyer or an NGO.
Man Falsely Accused By ChatGPT
- A Norwegian man asked ChatGPT about himself and was falsely told he murdered two children and attempted to kill a third.
- Kleanthi Sardeli took his case under GDPR, found OpenAI unresponsive, and filed a complaint to preserve evidence.
Law Requires Accuracy From Start To Finish
- Regulators demand accuracy across the entire processing lifecycle, but UK guidance that tolerates imperfect outputs conflicts with GDPR's accuracy principle.
- Kleanthi Sardeli warns that technology must meet the law's accuracy requirements, not the other way around.
File A Rectification Request Immediately
- If you find false personal data in an LLM, file a rectification request with the model provider and, if needed, contact an NGO or lawyer.
- Kleanthi Sardeli recommends using ChatGPT's settings and tools, or seeking legal help, to exercise GDPR rights.
