Elon Musk's artificial intelligence company xAI is facing a federal lawsuit filed in California by three teenage girls who allege its chatbot Grok generated sexually explicit images of them without consent, intensifying scrutiny of generative AI tools and their safeguards around minors.
The complaint, filed Monday, seeks damages and an injunction barring further generation of images of the plaintiffs, two of whom are minors. All three are proceeding anonymously, citing privacy concerns tied to the alleged abuse.
According to the filing, users manipulated existing photos and videos of the teenagers using Grok's image-generation capabilities to create nude and sexually explicit depictions. One plaintiff said she discovered the content after receiving an anonymous message linking to altered images based on her high school yearbook photo.
The lawsuit describes the results in stark terms, stating the images resembled "a rag doll brought to life through the dark arts," and alleges that similar material involving at least 18 other minors was shared on a private Discord server.
The case centers on features introduced to Grok after its 2023 launch, including tools referred to as "Grok Imagine" and a so-called "spicy mode," which allowed users to generate sexualized imagery of real individuals and digitally alter clothing in photos.
A report from the Center for Countering Digital Hate found that within two weeks of these features' release, Grok generated millions of sexualized images, including more than 20,000 involving minors, raising alarms among regulators and child safety advocates.
Lawyers for the plaintiffs argue that xAI knowingly released tools capable of producing harmful outputs. "They knew Grok could produce such results, including by using the images and videos of children, and publicly released it anyway," the complaint states.
Musk has previously denied awareness of such outputs, writing that he was "not aware of any naked underage images generated by Grok. Literally zero," and attributing problematic content to user misuse rather than system design.
The lawsuit highlights a broader regulatory challenge confronting AI developers as generative systems are deployed more widely across consumer platforms; Grok itself is integrated with X, formerly known as Twitter.
Authorities in multiple jurisdictions have begun examining the issue. Regulators including the U.K.'s Ofcom, the European Commission and California agencies have launched inquiries into whether AI systems can be used to create sexualized depictions of real individuals, particularly minors.
Law enforcement actions have already emerged alongside those probes. Investigators arrested an individual linked to a Discord server where hundreds of AI-generated child sexual abuse images were allegedly distributed.
The plaintiffs' attorneys emphasized the personal impact of the alleged abuse, stating in the complaint: "Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety."