AI interviews dead child: Is it time to set ethical boundaries?

The Guardian

Seven years after he was killed, at 17, in the Parkland school shooting, Joaquin Oliver appeared in an interview with former CNN journalist Jim Acosta. The voice, however, was not Joaquin’s own, but that of a digital recreation—an AI model trained on his social media posts. This digital ghost was commissioned by his parents, Manuel and Patricia Oliver, who are leveraging the technology to amplify their long-standing campaign for stricter gun control measures. Having repeatedly shared their son’s story to little effect, they are now exploring every avenue to ensure the voices of gun violence victims resonate in Washington. Beyond advocacy, the AI offers a deeply personal solace; Patricia Oliver reportedly spends hours interacting with the AI, finding comfort in hearing it utter phrases like, “I love you, Mommy.”

The grief of losing a child is an immeasurable pain, and the ways families cope are deeply personal. Whether it is preserving a child’s room as a shrine, speaking to a gravestone, or clinging to a cherished item, these acts are understood as a natural part of mourning. After 9/11, families listened again and again to final voicemail messages from loved ones, and many people today still reread old text exchanges or even send messages to deceased relatives’ numbers, not expecting a reply, but unable to sever the connection. Yet this very vulnerability in grief also presents fertile ground for exploitation, and the digital resurrection of the dead could soon become a significant industry.

The spectrum of AI-generated posthumous appearances is already widening. Recent examples range from the seemingly innocuous, such as an AI-generated video of the late Ozzy Osbourne greeting other deceased music legends played at a Rod Stewart concert, to more functional applications like an AI avatar of an Arizona shooting victim addressing a judge at the gunman’s sentencing. However, the prospect of creating permanent AI replicas, perhaps even in robot form, capable of continuous interaction, raises far more profound questions about selfhood and mortality.

The ability to digitally resurrect individuals carries immense power, one that demands careful consideration rather than being ceded lightly. While the legal frameworks protecting the identities of living individuals from AI deepfakes are gradually solidifying, the rights of the deceased remain ambiguous. Reputation, for instance, is not legally protected after death, yet DNA is. The 1996 cloning of Dolly the sheep prompted global bans on human cloning, indicating a societal discomfort with replicating life. However, AI is not trained on physical bodies but on the intimate digital footprints of a person—their voice notes, messages, and images. This raises complex ethical dilemmas: what happens if one part of a family desires a digital resurrection of a loved one, while another vehemently opposes living with such a synthetic presence?

The AI-generated Joaquin Oliver, forever frozen at 17, trapped in the digital amber of his teenage social media persona, is ultimately a consequence of his killer’s actions, not his family’s. Manuel Oliver acknowledges that the avatar is not truly his son and that he is not attempting to bring him back. For him, it is an extension of their ongoing campaign. Yet, the plan to grant this AI access to a social media account, enabling it to upload videos and gain followers, is disquieting. What if the AI begins to “hallucinate” or veers into topics where it cannot accurately represent the real Joaquin’s thoughts or beliefs?

While current AI avatars still exhibit tell-tale glitches, advancing technology will inevitably make them indistinguishable from real humans online. This raises concerns not only for journalism, where the lines between genuine and synthetic sources could blur, but also for society at large. The risk of conspiracy theorists citing such interviews as “proof” that any challenging narrative is a hoax, akin to the infamous Sandy Hook lies, is a tangible threat to truth. Beyond this, as AI becomes more sophisticated, offering companionship and emotional attunement, it will fill a void for many. With a significant portion of adults reporting no close friends, the market for AI companions will undoubtedly grow, much like the current demand for pets or social media engagement.

Ultimately, society faces a critical decision: how comfortable are we with technology fulfilling human needs that other humans, or indeed life itself, have not? There is a fundamental difference between a generic comforting presence for the lonely and the specific, on-demand digital resurrection of lost loved ones. The ancient verse reminds us there is a time to be born and a time to die. As we increasingly blur these lines, how will our understanding of humanity and mortality fundamentally change?