Imagine losing a loved one and having the chance to interact with their digital doppelgänger—a voice, a personality, a semblance of their presence, all powered by AI. It sounds like something out of a sci-fi novel, but it’s happening today. And here’s where it gets controversial: Is this a comforting way to keep their memory alive, or does it blur the line between tribute and exploitation? Let’s dive in.
In 2024, James Vlahos shared a deeply personal story with the BBC. After learning his father was terminally ill with cancer, he recorded his dad’s voice and used AI to create a chatbot that could mimic his speech and personality. While it didn’t erase the pain of loss, Vlahos found solace in having an interactive keepsake. “It’s like having a living compendium of his memories,” he said. Yet, this raises a question: Can technology truly bridge the gap between life and death, or does it risk reducing a person’s legacy to code and algorithms?
The concept of ‘deathbots’—AI systems designed to replicate the voices, speech patterns, and personalities of the deceased—is gaining traction. However, this is the part most people miss: these tools are only as meaningful as the data they’re fed. Jacqueline Gunn, founder of Workplace Bereavement, notes that while curiosity is growing, widespread adoption isn’t yet the norm. “Grief is a deeply personal, evolving process,” she explains. “AI can’t adapt to the complexities of human emotion. It might be a stepping stone, but it’s not the destination.”
Researchers Eva Nieto McAvoy from King’s College London and Bethan Jones from Cardiff University explored how these technologies function in practice. They found that while deathbots are marketed as sources of comfort, they often rely on oversimplified interpretations of memory, identity, and relationships. For instance, what happens if the AI starts saying things the deceased would never say, or evolves in ways that distort their values? And this is where it gets even more contentious: Is it ethical to let an algorithm represent someone’s legacy?
When asked if they’d want their own families to recreate them digitally after death, the researchers’ responses were telling. Jones admitted, “If it’s playful and respectful, maybe. But if it starts mangling people’s recollections of me, that’s a problem.” Dr. Nieto McAvoy, on the other hand, was more ambivalent. “Once I’m dead, who cares?” she said. “But I’m not sure I want my family paying for a service that could be misconstrued.”
Here’s the bold question for you: Would you want a digital version of yourself to exist after you’re gone? Or do you think it’s a step too far? Let’s spark a conversation—share your thoughts in the comments. After all, in an age where technology can mimic humanity, where do we draw the line between innovation and intrusion?