Grok Nonconsensual Sexual Images Lawsuit: Explicit Deepfakes

Last Updated on January 23, 2026

At A Glance

This Alert Affects:
Women and children who appeared in sexually explicit images generated by Grok and published online without their consent.
What’s Going On?
Attorneys working with ClassAction.org are investigating reports that Grok, an artificial intelligence chatbot hosted on X, has been used to generate and post explicit deepfakes of women and children. It’s possible that lawsuits could be filed on behalf of people who appeared in the AI-generated images.
How Could a Lawsuit Help?
Legal action may help victims recover money for any harm they’ve suffered as a result of nonconsensual sexual images.
What You Can Do
If you or your child appeared in a sexually explicit image generated by Grok without consent, learn about your options by filling out the form on this page.

The information submitted on this page will be forwarded to Berger Montague, the law firm sponsoring this investigation.
