Grok Lawsuit Alleges xAI Has Intentionally Capitalized on Sexually Explicit, Non-Consensual AI Deepfakes
A proposed class action lawsuit alleges that AI-generated, sexually explicit deepfake images and videos produced by X’s controversial Grok chatbot have had devastating, real-life consequences for women and girls, and that xAI has chosen to capitalize on a concerning uptick in non-consensual sexual images.
Get class action lawsuit and class action settlement news sent to your inbox – sign up for ClassAction.org’s free weekly newsletter.
The 30-page Grok class action lawsuit charges that defendant xAI was aware of the dangers inherent in Grok’s capability to generate deepfake images and videos yet, unlike other AI companies, failed to implement any guardrails to prevent the technology from being used to produce content that “humiliates and sexually exploits” girls and women.
According to the filing, victims of deepfakes—AI-generated images and videos of real people in which fake situations are realistically portrayed—endure “violations of mental and physical integrity, dignity, privacy, and sexual expression,” and incur “significant amounts of stress, anxiety and fear.”
“xAI’s conduct is despicable and has harmed thousands of women who were digitally stripped and forced into sexual situations that they never consented to and who now face the very real risk that those public images will surface in their lives where viewers may not be able to distinguish whether they are real or fake,” the complaint summarizes.
The lawsuit says that although Grok has protections in place to prevent users from generating child sexual abuse material (CSAM), the same protections are apparently not afforded to women and girls. The case contends that Grok’s developers “took little to no action” to ensure the program would “avoid producing non-consensual images of people in a sexualized or revealing manner.”
Related Reading: Grok Sexual Deepfakes Lawsuit Investigation
Rather than address the dangers of Grok’s ability to generate sexually explicit deepfake images, xAI has “chosen instead to capitalize on the internet’s seemingly insatiable appetite for humiliating non-consensual sexual images,” the filing charges.
“Grok received hundreds of thousands of increasingly explicit prompts to undress images of women by placing them in bikinis and even placing them in sexual poses or circumstances—even depicting semen on women’s faces,” the class action lawsuit states. “X users flooded Grok with these requests, and Grok obliged.”
“Spicy mode”: Lawsuit says Grok created non-consensual, sexualized deepfakes of women and girls and published them
The Grok lawsuit states that the chatbot, which xAI first introduced in November 2023, was advertised as an AI program with an edge, one that, unlike other AI services, featured a so-called “rebellious streak” and would respond to “spicy” questions and prompts. Per the case, Grok was made widely available on X, formerly Twitter, in December 2024, at which point users could tag Grok in a post and the AI would respond.
According to the lawsuit, as soon as Grok could generate images, users began asking it to create deepfakes via Grok’s “spicy mode,” which would frequently produce topless photos of women.
“Even if a user did not ask for a nude image, Grok’s ‘spicy mode’ would almost always provide an image or video with the woman naked from the waist up,” the case relays.
Notably, in August 2025, a reporter prompted Grok to generate images of Taylor Swift “without specifically asking Grok to take her clothes off,” but “Grok provided uncensored, topless videos in response,” the lawsuit shares.
Per the lawsuit, in or around August 2024, xAI rolled out an image generation feature by which Grok would “create or edit an image after a premium X user tagged @grok in a post on X.” Given Grok’s lack of safety features, the complaint says, “it took no time for X users to realize that Grok would create sexualized, revealing deepfakes of women and post them on X.”
Further, the lawsuit charges that rather than implement reasonable safeguards to prevent users from generating sexually explicit deepfakes, xAI, amid public backlash, chose instead to monetize the feature. On or around January 8, 2026, the complaint says, Grok began responding to X users’ prompts for sexual deepfakes of women or girls with a message stating that “[i]mage generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
“The change in policy obviously did not stop Grok from creating deepfakes—instead, it monetized it,” the case states.
As the class action tells it, xAI has never taken responsibility or admitted that it made any mistakes with Grok, “as any morally responsible business would do.”
“Grok is defective because it creates sexualized or revealing deepfakes and publicly disseminates those deepfakes,” the filing states, calling Grok “unreasonably dangerous.”
Plaintiff was distraught over sexually explicit Grok images, class action says
The lead plaintiff in the Grok lawsuit is a South Carolina woman who posted a picture of herself to X in early January 2026, the case explains. The next day, the lawsuit states, the plaintiff was “shocked and embarrassed” to find that another user had publicly posted a deepfake image of her in a “revealing” bikini.
According to the filing, the plaintiff contacted X’s online support team to request that the image be taken down, but the team refused. She then complained directly to Grok, which denied having created the deepfake or any other images, offering only the paltry acknowledgment that the situation was “shitty.”
During the three days the image remained up on X, the case says, the plaintiff experienced “severe emotional distress” and was “panicked” that someone in her life, particularly her bosses and coworkers, would see the image and believe that she had taken it and posted it herself. She was also “stressed” that the images might violate her company’s policies on employee conduct, the complaint says.
The lawsuit also says the deepfake cost the plaintiff income: she missed five hours of work, all unpaid, while dealing with the image, the suit relays.
The case says the plaintiff was “overcome with disgust” when contemplating what the X user who created the deepfake image was doing with her photo. The plaintiff also says in the suit that she was “distraught” over the idea that more sexually explicit, non-consensual images bearing her likeness could be generated, saved or distributed by other X users.
The Grok lawsuit argues that the plaintiff would not have been harmed had xAI implemented industry-standard safeguards used by other artificial intelligence companies to prevent “revealing or sexualized” deepfakes, or “acted properly” to address the harms caused by Grok-generated images.
Lawsuit stresses that sexually explicit AI-generated images normalize non-consensual sexual activity
Additionally, the Grok lawsuit emphasizes that sexually explicit deepfakes inflict “collective harms on the public by normalizing non-consensual sexual activity and contributing to a culture that accepts creating and/or distributing private sexual images without consent.”
In fact, the case says that between December 2025 and January 2026, Grok generated and posted more than 4.4 million images to X, up to 41 percent of which contained sexual imagery of women.
The ability to generate sexual deepfakes with the click of a button has “harmed thousands of women who were digitally stripped and forced into sexual situations they never consented to,” the suit says.
Who is covered by the Grok class action lawsuit?
The Grok class action lawsuit seeks to cover all individuals in the United States who, within the applicable statute of limitations period, have been depicted in sexualized or revealing deepfakes created and disseminated by Grok without their consent.
How can I sign up for the Grok class action lawsuit?
Generally, you don’t need to do anything to join or sign up for a class action lawsuit when it is initially filed. Should the case be resolved by a class action settlement, the settlement class—the people covered by the deal—will receive a notice outlining the terms of the settlement and their options and rights going forward.
Keep in mind that some class action lawsuits can take years to settle.
If you’ve been affected by Grok deepfakes or just want to stay informed on class action lawsuit and class action settlement news, sign up for ClassAction.org’s free weekly newsletter.
Check out ClassAction.org’s free legal resources to learn how to file a class action lawsuit.