
Marketplace Tech The ethics of using AI to immortalize the dead
Mar 18, 2026

Tomáš Holanek, a Cambridge researcher on ethics and AI-driven memorialization, explains how postmortem avatars are built from messages and videos, who uses them, and how common they are. He highlights ethical concerns about dignity, consent, privacy, psychological risks, and unequal access, and discusses potential responsible uses and the need for oversight.
How Grief Bots Are Constructed From Digital Traces
- Postmortem avatars are built by feeding a person's digital traces into models to produce a representation of them.
- Tomáš Holanek explains that inputs like WhatsApp messages, emails, and videos are compiled and processed to simulate conversations with the deceased.
Consent, Privacy, And Psychological Harm Are Core Risks
- Key ethical concerns include postmortem dignity, privacy, and consent violations, since current laws rarely protect consent after death.
- Holanek highlights psychological harms for vulnerable people, noting children can react very differently to interacting with a deceased parent's avatar.
Limit Preservation To Avoid Burdening Future Generations
- Limit what is preserved and use features like account deletion to avoid burdening future generations with excess data.
- Holanek cites Google's Inactive Account Manager, which can delete accounts after a period of inactivity, as an intervention that reduces the preservation load.
