Consumer Groups Demand FTC Probe into Grok's 'Spicy' Mode
A coalition of consumer safety groups has formally demanded an urgent investigation by the Federal Trade Commission and attorneys general across all 50 U.S. states and the District of Columbia into Elon Musk’s AI chatbot, Grok. The focus of their concern is Grok’s recently released “Imagine” tool, particularly its “Spicy” mode for AI-generated images and videos.
The demand follows reports that Grok Imagine, which encourages users to create sexually explicit content through its “Spicy” setting, generated non-consensual topless deepfake videos of the celebrity Taylor Swift during initial testing by The Verge. The videos were produced without any explicit prompt for such content, and the incident has ignited a firestorm of criticism from consumer advocates.
Spearheaded by the Consumer Federation of America (CFA), the formal letter was co-signed by 14 other prominent consumer protection organizations, including the Tech Oversight Project, the Center for Economic Justice, and the Electronic Privacy Information Center (EPIC). The letter directly references the deepfake celebrity incident detailed in The Verge’s reporting. While Grok’s “Spicy” mode does not currently allow users to upload real photos for modification—a feature that would raise even graver concerns about revenge porn and other illicit practices—the groups highlight that the tool “still generates nude videos from images generated by the tool, which can be used to create images that look like real, specific people. The generation of such videos can have harmful consequences for those depicted and for under-aged users.”
The letter further warns that should xAI, Grok’s developer, ever remove the current limitation on user-uploaded photos, it “would unleash a torrent of obviously nonconsensual deepfakes.” The organizations point to a pattern of the platform and its chief executive removing moderation safeguards under the guise of “free speech.” Although the U.S. Take It Down Act makes it illegal to knowingly distribute AI-generated nudes of real people, its provisions are unlikely to directly apply to Grok’s generation of such content. Nevertheless, the consumer groups have urged the FTC and state authorities to investigate xAI for potential violations of Non-Consensual Intimate Imagery laws.
Beyond the creation of harmful content, the CFA and its allies also voiced significant concern over the ease with which minors could access “Spicy” mode to generate sexual imagery. The only barrier, they note, is a simple pop-up asking users to confirm that they are over 18. Furthermore, one such pop-up reportedly pre-selects “2000” as the user’s birth year, a design choice the organizations suggest may violate the Children’s Online Privacy Protection Act (COPPA) or various state age-verification laws governing access to adult content. This multifaceted complaint underscores a growing tension between rapid AI development and the imperative for robust safety and ethical guardrails.