AI interview of deceased Parkland victim sparks ethics firestorm
What was promoted as a “one-of-a-kind interview” has instead ignited a fervent debate across the media landscape, potentially marking a new, unsettling chapter in the age of artificial intelligence. Jim Acosta, the former CNN anchor now hosting a Substack program, found himself at the epicenter of this controversy after conducting a televised conversation with an AI-generated likeness of Joaquin Oliver, a 17-year-old tragically killed in the 2018 Marjory Stoneman Douglas High School shooting in Parkland, Florida.
The interview, which aired at the explicit request of Joaquin’s parents, was conceived as a poignant effort to preserve their son’s memory and amplify his impassioned message against gun violence. However, the segment quickly drew widespread condemnation from viewers across the political spectrum, who decried it as exploitative, emotionally manipulative, and a dangerous precedent for journalism.
Acosta first teased the segment on X (formerly Twitter) on August 4, inviting audiences to a show featuring an “interview with Joaquin Oliver. He died in the Parkland school shooting in 2018. But his parents have created an AI version of their son to deliver a powerful message on gun violence.” In the clip, Acosta posed a question to the AI avatar: “Joaquin, I’d like to know what your solution would be for gun violence?” The AI responded with a comprehensive answer, suggesting “a mix of stronger gun control laws, mental health support, and community engagement,” emphasizing the need for “safe spaces for conversations and connections” and “building a culture of kindness and understanding.” In a surprising turn, the avatar then asked Acosta for his thoughts, to which he replied, “I think that’s a great idea.”
The promotional tweet quickly garnered nearly 4 million views, but it also unleashed a torrent of criticism. Users accused Acosta of overstepping ethical boundaries by leveraging the digital likeness of a deceased child to advance a political agenda. Comments ranged from “Jim Acosta hits a new low… Interview an AI version of a dead kid in order to push gun control!!!” to “WTF? This is beyond sick” and “This is one of the weirdest things I’ve ever seen in my life.” The backlash grew so intense that Acosta eventually disabled replies on the tweet.
Even within the media industry, the segment drew sharp rebuke. Journalist Glenn Greenwald highlighted the “cross-ideological revulsion” the interview provoked, noting concerns about AI “superseding humanity, sleazy media exploitation, [and] the ability to create fake videos.” These criticisms underscore fundamental questions about trust, ethics, and the profound implications of using AI to speak on behalf of the dead. Critics fear such applications could pave the way for unprecedented manipulation, envisioning scenarios where political groups might create AI avatars of fetuses to argue against abortion, or companies might generate posthumous endorsements from celebrities. The core issue revolves around how society will navigate the ethical minefield of generative AI in media and advocacy.
In response to the mounting outrage, Acosta defended his decision by emphasizing that the concept originated directly from Joaquin’s parents, Manuel and Patricia Oliver. In a follow-up tweet, Acosta posted, “Joaquin, known as Guac, should be 25 years old today. His father approached me to do the story… to keep the memory of his son alive.” He linked to a video where Manuel Oliver emotionally explained, “This is Manuel Oliver. I am Joaquin Oliver’s father… We asked our friend Jim Acosta to do an interview with our son, because now, thanks to AI, we can bring him back. It was our idea.” Oliver continued, asserting, “We feel that Joaquin has a lot of things to say, and as long as we have an option that allows us to bring that to you and to everyone, we will use it.” Acosta urged viewers to watch the father’s video, implying that the parents’ wishes provided crucial context and deserved respect.
Regardless of the parents’ heartfelt intent, the interview has sparked a broader cultural reckoning. For some, it represents a touching, albeit unconventional, application of technology to honor a lost loved one. For many others, however, it signifies a deeply uncomfortable blurring of reality and simulation, risking the dehumanization of the deceased and transforming personal tragedy into algorithmically rendered activism. The incident forces a critical question: is this a new normal in digital remembrance, or a decisive moment that compels society to establish clear ethical boundaries for the use of AI?