AI-Generated Hate Videos Surge Online, Raising Safety Concerns
What initially appears to be a lighthearted AI-generated video quickly takes a disturbing turn. The clip features a furry Bigfoot, wearing a cowboy hat and an American flag-emblazoned vest, seated behind the wheel of a pickup truck. “We are going today to the LGBT parade,” the ape-like figure chuckles, adding, “You are going to love it.” The scene then escalates into violence as Bigfoot drives through a screaming crowd, some clutching rainbow flags. Posted in June on the AmericanBigfoot TikTok page, the video has amassed more than 360,000 views and hundreds of approving comments, a sign of a troubling trend.
In recent months, social media platforms have been inundated with similar AI-generated content that openly promotes violence and spreads hate against LGBTQ+, Jewish, Muslim, and other minority groups. While the precise origins of many of these videos remain obscure, their proliferation online has ignited outrage and deep concern among experts and advocates, who argue that current Canadian regulations cannot keep pace with the rapid spread of AI-generated hateful content and fail to address the risks it poses to public safety.
Helen Kennedy, Executive Director of Egale Canada, an LGBTQ+ advocacy organization, says the community is deeply worried about the surge of transphobic and homophobic misinformation. AI tools, she states, are being “weaponized to dehumanize and discredit trans and gender diverse people,” and existing digital safety laws cannot confront the scale and speed of this new threat. Rapidly evolving technology, Kennedy adds, has handed malicious actors a potent instrument for spreading misinformation and hate, with transgender people disproportionately targeted. “From deepfake videos to algorithm-driven amplification of hate, the harms aren’t artificial – they’re real,” she warns.
The LGBTQ+ community is not the only target, according to Evan Balgord, Executive Director of the Canadian Anti-Hate Network. Islamophobic, antisemitic, and anti-South Asian content made with generative AI tools is also circulating widely on social media, he says. Balgord cautions that an environment in which violence against these groups is celebrated makes real-world violence more likely. Canada’s digital safety laws were already lagging, he points out, and advances in AI have only exacerbated the problem. “We have no safety rules at all when it comes to social media companies,” Balgord asserts. “We have no way of holding them accountable whatsoever.”
Attempts to close this legislative gap have faltered. Andrea Slane, a legal studies professor at Ontario Tech University who has researched online safety extensively, explains that bills aimed at tackling harmful online content and establishing a regulatory framework for AI died when Parliament was prorogued in January. Slane says the government must urgently revisit online harms legislation and reintroduce the bill.
Justice Minister Sean Fraser indicated in June that the federal government would take a “fresh” look at the Online Harms Act, though a decision on whether to rewrite or simply reintroduce it remains pending. The original bill sought to hold social media platforms accountable for reducing users’ exposure to harmful content.
Sofia Ouslis, a spokesperson for the newly established Ministry of Artificial Intelligence and Digital Innovation, confirmed the government is taking AI-generated hateful content seriously, particularly when it targets vulnerable minority groups. While existing laws offer “important protections,” she conceded, they were not designed to counter the threat of generative AI. Prime Minister Mark Carney’s government has also committed to criminalizing the distribution of non-consensual sexual deepfakes. “There’s a real need to understand how AI tools are being used and misused — and how we can strengthen the guardrails,” Ouslis said, noting that the work is ongoing and involves reviewing frameworks, monitoring court decisions, and consulting experts. In such a fast-moving domain, she added, it is better to get regulation right than to move too quickly and make mistakes, citing the European Union and the United Kingdom as models.
Even though the EU is at the forefront of AI regulation and digital safety, Slane notes there is still a sense that more needs to be done. A significant challenge in regulating content distributed by social media giants is their international nature: most are not Canadian entities. The political climate south of the border further complicates matters, with U.S. tech companies facing looser regulation, leaving them “more powerful and feeling less responsible,” Slane observes.
Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence and an assistant professor at Ontario Tech University, points to a recent “breakthrough” that has made producing good-quality video remarkably easy and cheap, often free. “It’s really accessible to almost anybody with a little bit of technical knowledge and access to the right tools right now,” he states. While chatbots built on large language models, such as ChatGPT, have safeguards that filter harmful content, Lewis stresses the urgent need for similar guardrails in video generation. Humans can be horrified by such videos, he notes, but AI systems lack the capacity to reflect on what they create.
Existing laws may offer some recourse against the online glorification of hate and violence, Lewis suggests, but the rapid development and wide availability of generative AI tools demand new technological solutions and robust collaboration among governments, consumers, advocates, social platforms, and AI app developers. He advocates “really robust responsive flagging mechanisms” to remove such content as quickly as possible, while cautioning that AI tools, being probabilistic, will not catch everything.