Teens launch lawsuit claiming Elon Musk’s Grok chatbot made sexual abuse images of them as minors
- by Independent
- Mar 17, 2026
Starting last May, Musk and his executives gave users the ability to ask Grok to "undress" photos of real people down to their underwear. By January 2026 usage had exploded, leading to thousands, perhaps millions, of nonconsensual sexualized deepfakes, including some that appeared to depict children.
Monday's lawsuit, which accuses xAI of breaking child pornography laws by knowingly creating, possessing, and distributing such material on its servers and systems, is seeking class action status, meaning it could potentially grow to encompass thousands of people.
According to the complaint, the plaintiffs' nightmare began when Jane Doe 1 received an anonymous tip-off on Instagram that nude photos and videos of her and other minors were circulating on the social media service Discord.
Using AI, someone had taken real photos of her at her school's homecoming dance or in the yearbook and edited them into sexually explicit or suggestive material, often rendering her fully nude.
Police ultimately traced the alleged perpetrator and arrested them in December 2025. But when they searched the person's device, they found similar photos of Jane Doe 2, Jane Doe 3, and 15 other girls, many of whom attended the same school.
The perpetrator allegedly distributed these images on Telegram and other services, "trading" them around the internet in exchange for sexually explicit material of other teenagers.
The lawsuit alleges that these images were created using a third-party app that pays xAI money to license Grok's image-generation capabilities under a different brand.
"Plaintiffs will have to spend the rest of their lives knowing that their CSAM images and videos may continue to be trafficked and traded online by child sex predators," the complaint read.
"And Plaintiffs will live every day with the constant anxiety of not knowing whether someone they encounter has seen this invasive and sexually explicit content created with images of them as children."
All three plaintiffs suffered severe emotional distress, the lawsuit said, with two of them struggling to sleep and eat.
The lawsuit accuses xAI of failing to implement industry-standard safeguards such as rejecting user requests for sexual material, blocking any such material that the AI accidentally generates, checking images against databases of existing CSAM, and providing a rapid takedown service for victims of non-consensual sexual images.
On the contrary, the lawsuit argues, xAI proudly advertised Grok's "Spicy Mode" and its ability to generate sexual images, leaving only minimal guardrails against users asking it to create CSAM.
The lawsuit notes that Grok's "system prompt", a set of instructions governing every interaction an AI chatbot has with its users, explicitly tells it to avoid "creating or distributing child sexual abuse material". But that rule is easily circumvented, the lawsuit argues, and insufficient to prevent abuse.
xAI did not immediately respond to questions from The Independent, and the company has not yet answered the lawsuit's claims in court.
In January, Musk claimed: "I am not aware of any naked underage images generated by Grok. Literally zero...
"There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately."