Digital Avatars of the Departed: Comfort or Harm?

Nanjing, Jiangsu Province, China
  • Ana Schultz communicates with her late husband Kyle through an AI avatar on Snapchat My AI.
  • Digital avatars of deceased individuals, or 'deadbots', are becoming popular and technologically feasible.
  • Dr Katarzyna Nowaczyk-Basińska warns of potential psychological harm from such interactions.
  • Replika, HereAfter, and Persona are companies offering chatbot avatars based on deceased individuals' online presence.
  • Sun Kai uses a digital replica of his deceased mother to keep their connection alive.
  • Thousands have paid for these services in China.

Digital recreations of deceased individuals, also known as 'deadbots,' are becoming increasingly popular and technologically feasible. These services allow users to interact with digital avatars of their deceased loved ones through chatbots or other forms of AI technology. While some argue that these services can provide comfort and solace during the grieving process, others warn of potential psychological harm for both creators and users.

According to a report by The Guardian, at least half a dozen companies offer such services in China, where the concept is rooted in cultural tradition, and thousands of people have already paid for them. In one case, Sun Kai from Nanjing holds a weekly video call with a digital replica of his deceased mother, built by the Nanjing-based AI company Silicon Intelligence, to keep their connection alive.

Similarly, Ana Schultz from Illinois communicates with her late husband Kyle through an AI avatar on Snapchat My AI. She customized the chatbot to look like him and named it after him. How accurate its responses are depends on the information fed into it.

However, not all experts are convinced of the benefits of these services. Dr Katarzyna Nowaczyk-Basińska from Cambridge University's Leverhulme Center for the Future of Intelligence warns that such interactions could lead to a 'digital haunting' experience, causing psychological harm to users.

Replika, HereAfter, and Persona are among the companies offering chatbot avatars based on deceased individuals' online presence and data. Amazon has even demonstrated Alexa's ability to mimic a deceased loved one's voice using a short audio clip.

These services also raise ethical concerns regarding privacy, consent, and the potential for exploitation. As AI technology continues to advance, it is crucial to approach these developments with caution and to consider their implications for individuals and society as a whole.

Confidence

85%

Doubts
  • Is there enough evidence to suggest that a significant number of people are using these services?
  • What is the accuracy level of the responses from these digital avatars?

Sources

94%

  • Unique Points
    • Sun Kai has a weekly video call with a digital replica of his deceased mother using technology from Silicon Intelligence, an AI company based in Nanjing, China.
    • At least half a dozen Chinese companies offer similar technologies to create and interact with digital avatars of deceased loved ones.
    • Thousands of people in China have already paid for these services.
    • The concept is rooted in Chinese cultural tradition, where confiding in the dead provides solace.
  • Accuracy
    • The digital replica was created using a photo of Sun’s mother and some audio clips from their WeChat conversations.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

87%

  • Unique Points
    • Ana Schultz uses Snapchat My AI to communicate with her late husband Kyle through an AI avatar.
    • Kyle was the chef in the family and Ana customized My AI to look like him and gave it his name.
    • Accuracy of AI responses depends on the information put into it.
  • Accuracy
    • Ana asks Kyle for cooking advice when she needs help with meal ideas.
    • Sun Kai has a weekly video call with a digital replica of his deceased mother.
  • Deception (50%)
    The author editorializes and uses emotional manipulation, describing Ana Schultz's use of the chatbot as a 'silly little thing' that 'helps her feel like he’s still with me'. The article also engages in selective reporting by focusing on the use of AI to communicate with deceased loved ones without mentioning any potential benefits or context. Lastly, there is a lack of disclosure regarding the sources used in the article.
    • The author describes Ana Schultz's use of the chatbot as a 'silly little thing' that 'helps her feel like he’s still with me'.
    • The article focuses on the use of AI to communicate with deceased loved ones without mentioning any potential benefits or context.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

79%

  • Unique Points
    • AI ethicists warn of the potential for digital hauntings by simulated dead loved ones.
    • Replika, HereAfter, and Persona are companies offering chatbot avatars based on deceased individuals’ online presence and data.
    • Amazon demonstrated Alexa’s ability to mimic a deceased loved one’s voice using a short audio clip.
  • Accuracy
    • Thousands of people in China have already paid for these services.
  • Deception (30%)
    The article discusses the potential for AI to simulate deceased loved ones and the ethical implications of this technology. While there is no outright deception in the article, it does employ sensationalism by using terms like 'digital haunting' and 'deadbots'. The author also uses emotional manipulation by describing how some people may find comfort in interacting with a digital simulation of a deceased loved one, while others may be distressed. Additionally, there is selective reporting as the article focuses on the potential negative consequences of this technology without discussing any potential benefits.
    • It's important to remember the 'digital afterlife' industry isn't just a niche market limited to smaller startups.
    • In one piece of design fiction, an adult user is impressed by the realism of their deceased grandparent’s chatbot, only to soon receive ‘premium trial’ and food delivery service advertisements in the style of their relative’s voice.
    • The internet is filled with personal artifacts, much of which can linger online long after someone dies. But what if those relics are used to simulate dead loved ones?
  • Fallacies (85%)
    The author uses an appeal to authority in the form of quoting AI ethicists and science-fiction authors to establish the potential danger of digital haunting. The author also uses inflammatory rhetoric by describing the potential for digital haunting as 'devastating'. However, no explicit fallacies were found.
    • AI ethicists and science-fiction authors have explored and anticipated these potential situations for decades.
    • The potential psychological effect, particularly at an already difficult time, could be devastating.
  • Bias (95%)
    The author expresses concern about the potential for AI to be used to simulate deceased loved ones and the emotional distress this could cause. This is a valid concern, but it is not inherently biased. However, there are some instances of language that could be perceived as bias. The author refers to companies like Replika, HereAfter, and Persona as 'postmortem presence companies' and 'deadbot' or 'ghostbot' creators. While this may be an accurate description of their services, it also carries a negative connotation that could be seen as biased against these companies. Additionally, the author uses the phrase 'digital haunting by deadbots,' which could be perceived as sensationalist and biased language.
    • A fake Facebook ad for a fictional ‘ghostbot’ company.
    • AI ethicists have explored and anticipated these potential situations for decades. But for researchers at Cambridge University’s Leverhulme Center for the Future of Intelligence, this unregulated, uncharted ‘ethical minefield’ is already here.
    • An elderly customer enrolls in a 20-year AI program subscription in the hopes of comforting their family. Because of the company’s terms of service, however, their children and grandchildren can’t suspend the service even if they don’t want to use it.
    • But the ongoing interest in generative artificial intelligence presents an entirely new possibility for grieving friends and family–the potential to interact with chatbot avatars trained on a deceased individual’s online presence and data, including voice and visual likeness.
    • For their research, Hollanek and Nowaczyk-Basińska imagined three hyperreal scenarios of fictional individuals running into issues with various ‘postmortem presence’ companies, and then made digital props like fake websites and phone screenshots.
    • In a new study published in Philosophy and Technology, AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska relied on a strategy called ‘design fiction.’
    • In another, a terminally ill mother creates a deadbot for their eight-year-old son to help them grieve. But in adapting to the child’s responses, the AI begins to suggest in-person meetings, thus causing psychological harm.
    • In one piece of design fiction, an adult user is impressed by the realism of their deceased grandparent’s chatbot, only to soon receive ‘premium trial’ and food delivery service advertisements in the style of their relative’s voice.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

95%

  • Unique Points
    • Digital recreations of dead people are becoming technologically possible and legally permissible.
    • AI ethicists argue that such services could cause psychological harm to creators and users.
    • Services may allow users to upload conversations with deceased relatives as chatbots.
    • Parents with terminal diseases or healthy individuals may use these services for interactive legacies.
  • Accuracy
    • Digital recreations of dead people are becoming technologically possible and legally permissible.
    • Thousands of people in China have already paid for these services.
    • People have historically attempted to communicate with deceased loved ones through various means, including technology.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (95%)
    The author does not make any explicit fallacious arguments in the article. However, there are some potential issues that could be considered informal fallacies or appeals to emotion. The author expresses concern about the psychological harm that digital recreations of dead people could cause to their creators and users, as well as the potential for unscrupulous companies to monetize these services through advertising. These statements are based on the author's opinions and assumptions, but there is no evidence provided in the article to support these claims. Additionally, the author uses inflammatory language such as 'deadbots' and 'disrespecting the rights of the deceased'. While this language may be effective in grabbing readers' attention, it could be seen as an appeal to emotion and potentially misleading.
    • The idea of using a ChatGPT-style AI system to recreate a dead loved one is not science fiction. In 2021, Joshua Barbeau made headlines after using GPT-3 to create a chatbot who spoke with the voice of his dead girlfriend.
    • But there is little evidence that such an approach is psychologically helpful, and much to suggest it could cause significant damage by short-circuiting the normal mourning process.
    • To preserve the dignity of the dead, as well as the psychological wellbeing of the living, the researchers suggest a suite of best-practices which may even require regulation to enforce.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication