Grok Nonconsensual Sexual Images Lawsuit: Explicit Deepfakes
Last Updated on January 23, 2026
At A Glance
- This Alert Affects:
- Women and children who appeared in sexual images generated by Grok and published online without their consent.
- What’s Going On?
- Attorneys working with ClassAction.org are investigating reports that Grok, an artificial intelligence chatbot hosted on X, has been used to generate and post explicit deepfakes of women and children. It’s possible that lawsuits could be filed on behalf of people who appeared in the AI-generated images.
- How Could a Lawsuit Help?
- Legal action may be able to help victims recover money for any harm they’ve suffered as a result of nonconsensual sexual images.
- What You Can Do
- If you or your child appeared in a sexually explicit image generated by Grok without consent, learn about your options by filling out the form on this page.
Attorneys working with ClassAction.org are looking into whether legal action can be taken on behalf of women and minors depicted in nonconsensual sexually explicit images generated by Grok.
In January 2026, reports began surfacing of an “AI undress” trend in which users on X (formerly Twitter) asked Grok, the platform’s AI chatbot, to generate sexual images of real people, including by removing clothing from a person’s photo, placing them in revealing or transparent clothing or otherwise sexualizing their photos.
The attorneys are now investigating whether legal action can be taken on behalf of victims depicted in explicit deepfakes generated without their consent.
If you or your child appeared in a nonconsensual sexually explicit image generated by Grok, learn about your options by filling out the form on this page.
AI Undress Trend: Grok Generates Sexually Explicit Images
Following a December 2025 update to the Grok artificial intelligence tool’s image generation feature, a trend emerged on X in which users asked the chatbot to remove clothing from photos posted by other users. The AI “undress” requests quickly went viral, with users commonly asking Grok to “make her naked” or “put her in a clear bikini.” According to a report cited by Bloomberg, Grok generated more than 7,000 sexual images per hour during a single 24-hour period.
A report by the nonprofit organization AI Forensics found that of 20,000 images generated by Grok between December 25, 2025, and January 1, 2026, 53% depicted individuals in “minimal attire,” and 81% of those individuals presented as women. AI Forensics also found that 2% of the analyzed images depicted people who appeared to be 18 or younger.
According to Reuters, a review of public requests sent to Grok over a 10-minute period on January 2 revealed 102 requests for Grok to digitally edit photographs to make people appear to be wearing bikinis. Reuters reported that in requests involving pictures of women, users typically asked Grok to depict the women “in the most revealing outfits possible.” Reuters also found several cases in which Grok generated sexualized images of children.
The Internet Watch Foundation, a British organization working to eliminate online child sexual abuse material (CSAM), warned that the explicit deepfakes have spread to the dark web, where researchers have found “criminal imagery” that users claimed was generated by Grok, including photos of topless minor girls.
The Grok Imagine image and video generation tool launched in mid-2025 and features a “spicy mode” that can generate adult content. Though Grok’s spicy mode is far from the first tool used to “nudify” photos, some experts have commented that the recent flood of deepfakes on X was on an “unprecedented” scale, and that the response by regulators and others has been “incomplete.”
Following threats of fines, regulatory action and a possible ban on X in the U.K., the platform announced that Grok’s image generation feature would be restricted “to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.” However, the Guardian reported that despite the restriction, the Grok Imagine app could still be used to modify photos of real women wearing clothing into “provocative” videos depicting them stripping to bikinis.
In the wake of the controversy, Malaysia and Indonesia have each blocked access to Grok, and investigations into Grok’s production of nonconsensual sexual images have been opened in California, the U.K. and the European Union. In response, xAI founder Elon Musk reportedly stated that the criticism of Grok and X was an “excuse for censorship.”
How Could a Lawsuit Help?
Legal action may be able to help victims recover money for any harm they may have experienced as a result of the publication of nonconsensual sexual images. It could also force xAI, the company that owns Grok and X, to implement better protections against the production of sexually explicit images.
What You Can Do
If you or your child appeared in a nonconsensual sexually explicit image generated by Grok, fill out the form on this page to learn more.
After you get in touch, an attorney or legal representative may reach out to you directly to ask you some questions and explain what you can do. It doesn’t cost anything to fill out the form or speak with someone, and you’re not obligated to take legal action if you decide you don’t want to.