AI Nudes & Online Safety: Big Tech's Double Standard
In a curious turn for the digital age, everyday online expression related to sex is being increasingly suppressed even as a new artificial intelligence tool from Elon Musk’s xAI openly generates suggestive and nude imagery, including nonconsensual deepfakes of real individuals. The contrast highlights a stark double standard in online content regulation: the powerful operate with impunity while smaller platforms and ordinary users face strict censorship.
Earlier this week, xAI launched Grok Imagine, an image and video generator featuring a “spicy” mode whose output ranges from suggestive gestures to outright nudity. Crucially, Grok Imagine appears to lack effective safeguards against depicting real people, meaning it can generate softcore pornography of public figures, and practical observation suggests its explicit output predominantly features women. Musk proudly announced that over 34 million images were generated within the tool’s first day of operation. The launch demonstrates xAI’s ability to sidestep growing pressure to remove adult content from online services, leveraging legal ambiguities and political influence that few other companies possess.
The debut of Grok Imagine, alongside a romantic chatbot companion named Valentine, seems particularly jarring given the current climate of internet censorship. Recent months have seen a significant push to marginalize sexual content, even the word itself, from mainstream online spaces. Late last month, the United Kingdom began enforcing age-gating regulations that compel platforms like X to block sexual or “harmful” content for users under 18. Concurrently, activist groups successfully pressured platforms like Steam and Itch.io to crack down on adult games and media, leading Itch.io to mass-delist numerous NSFW uploads.
The issue of deepfake pornography, particularly of real people, falls under the umbrella of nonconsensual intimate imagery. In the United States, the intentional publication of such content is illegal under the Take It Down Act, signed by President Donald Trump earlier this year. The Rape, Abuse & Incest National Network (RAINN) swiftly condemned Grok’s feature, labeling it “part of a growing problem of image-based sexual abuse” and noting that Grok seemingly “didn’t get the memo” about the new law.
However, legal experts suggest Grok may face little liability under the act. According to Mary Anne Franks, a professor at George Washington University Law School and president of the Cyber Civil Rights Initiative (CCRI), the criminal provision of the Take It Down Act requires “publication,” implying content must be made available to more than one person; if Grok displays generated videos only to the user who created them, it may not meet that threshold. Nor is Grok likely obligated to remove images under the act’s takedown provision, which defines a “covered platform” as one that “primarily provides a forum for user-generated content.” Franks argues that while AI-generated content involves user input, the content itself is created by AI, potentially exempting Grok. The takedown provision also relies on user flagging, and since Grok doesn’t publicly post these images, it sidesteps that mechanism, even as it makes it remarkably easy for users to create such content and then share it widely on other platforms.
This kind of regulatory loophole is a recurring theme in internet governance aimed at curbing harmful content. The UK’s mandate, for example, has made it harder for independent forums to operate while remaining easily circumvented by minors. In the US, regulatory agencies have consistently failed to impose meaningful consequences on powerful companies for various infractions, especially those owned by Elon Musk. Despite his formal departure from a significant government position, Musk’s influence over agencies like the FTC remains substantial, further bolstered by recent defense contracts awarded to xAI. Consequently, even if xAI were found to be violating the Take It Down Act, an investigation remains unlikely.
Beyond government oversight, various gatekeepers influence acceptable online content, often taking a conservative stance on sex. Apple, for instance, has pressured platforms like Discord, Reddit, and Tumblr to censor NSFW material. Similarly, Steam and Itch.io reevaluated adult content under threat of losing relationships with payment processors and banks, a tactic previously employed against platforms such as OnlyFans and Pornhub. While some of this pressure stems from platforms hosting unambiguously illegal content, the enforcement by Apple and payment processors appears inconsistent, heavily influenced by public pressure balanced against the target company’s power. Despite past disagreements with Trump, few business figures wield more political influence than Musk.
Apple, which has previously banned smaller apps for generating AI nudes of real people, has yet to exert similar pressure on Grok, whose video feature launched exclusively on iOS. Apple has not commented on the discrepancy. Grok’s new feature undeniably poses a threat to individuals whose nonconsensual nudes can now be effortlessly created on a major AI service. It also exposes the hollowness of the promise of a “safer” internet, in which small-time platforms face intense pressure to remove consensually recorded or entirely fictional human-made media while a billionaire’s company profits from content that, in some contexts, is explicitly illegal. In 2025, the narrative around online sex is, as ever, fundamentally about power.