
Marketplace All-in-One: The ethics of using AI to immortalize the dead
Mar 18, 2026
Tomáš Holánek, a University of Cambridge researcher on digital preservation and AI-driven postmortem avatars, explains how griefbots are built and why they are growing. He discusses privacy, consent, dignity, societal burdens, and potential responsible uses. The conversation also covers regulation, preservation choices, and ethical experiments that might help balance risks and benefits.
How Grief Bots Are Constructed
- Postmortem avatars are built by uploading personal artifacts like WhatsApp messages, emails, and videos to train models that simulate a deceased person.
- Tomáš Holánek explains that companies compile this data into an interactive representation that can converse like the original person.
Growth Is Diffuse Across Dedicated Services And DIY Tools
- The market is growing but user numbers are unclear because people use both dedicated services and general AI tools like ChatGPT to recreate conversations.
- Holánek notes that users are difficult to quantify, since do-it-yourself use of mainstream models supplements the offerings of specialized companies.
Consent And Dignity Gaps For The Deceased
- Major ethical concerns include violations of postmortem dignity, privacy, and consent, because people cannot meaningfully control how they are represented after death.
- Holánek warns that there are effectively no laws protecting the consent of the deceased or regulating who receives their data.
