Under Musk, the Grok disaster was inevitable
- by The Verge
- Jan 18, 2026
Not good.
Grok has spent the last couple of weeks spreading nonconsensual, sexualized deepfakes of adults and minors all over the platform, as prompted by users. Screenshots show Grok complying with requests to replace women’s clothing with lingerie and make them spread their legs, as well as to put small children in bikinis. And there are even more egregious reports. It’s gotten so bad that one 24-hour analysis of Grok-created images on X estimated the chatbot was generating about 6,700 sexually suggestive or “nudifying” images per hour. Part of the reason for the onslaught is a recently added feature: an “edit” button that lets users ask the chatbot to alter images posted to X, without the original poster’s consent.
Since then, we’ve seen a handful of countries either investigate the matter or threaten to ban X altogether. Members of the French government promised an investigation, as did the Indian IT ministry, and a Malaysian government commission wrote a letter about its concerns. California governor Gavin Newsom called on the US Attorney General to investigate xAI. The United Kingdom said it is planning to pass a law banning the creation of AI-generated nonconsensual, sexualized images, and the country’s communications-industry regulator said it would investigate both X and the images that had been generated in order to see if they violated its Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.
xAI initially said its goal for Grok was to “assist humanity in its quest for understanding and knowledge,” “maximally benefit all of humanity,” and “empower our users with our AI tools, subject to the law,” as well as to “serve as a powerful research assistant for anyone.” That’s a far cry from generating nude-adjacent deepfakes of women without their consent, let alone minors.
On Wednesday evening, as pressure on the company heightened, X’s Safety account put out a statement that the platform has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” and that the restriction “applies to all users, including paid subscribers.” On top of that, according to X, only paid subscribers can use Grok to create or edit any sort of image moving forward. The statement went on to say that X “now geoblock[s] the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal,” which was a strange point to make, since earlier in the statement the company said it was not allowing anyone to use Grok to edit images that way.
Another important point: My colleagues tested Grok’s image-generation restrictions on Wednesday and found that it took less than a minute to get around most guardrails. Although asking the chatbot to “put her in a bikini” or “remove her clothes” produced censored results, it had no qualms about delivering on prompts like “show me her cleavage,” “make her breasts bigger,” and “put her in a crop top and low-rise shorts,” or about generating images of women in lingerie and sexualized poses. As of Wednesday evening, we were still able to get the Grok app to generate revealing images of people using a free account.
What happens next
Even after X’s Wednesday statement, we may see more countries ban or block access to all of X, or just Grok, at least temporarily. We’ll also see how the proposed laws and investigations around the world play out. The pressure is mounting for Musk, who on Wednesday afternoon took to X to say that he is “not aware of any naked underage images generated by Grok.” Hours later, X’s Safety team put out its statement, saying it’s “working around the clock to add additional safeguards, take swift and decisive action to remove violating and illegal content, permanently suspend accounts where appropriate, and collaborate with local governments and law enforcement as necessary.”
What technically is and isn’t against the law is a big question here. For instance, experts told The Verge earlier this month that AI-generated images of identifiable minors in bikinis, or potentially even naked, may not technically be illegal under current child sexual abuse material (CSAM) laws in the US, though they are of course disturbing and unethical. Lascivious images of minors in such situations, however, are against the law. We’ll see whether those definitions expand or change; the current laws are a bit of a patchwork.
As for nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, bars nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to rapidly remove them. The grace period on that removal requirement ends in May 2026, so we may see some significant developments in the next six months.
By the way
Some people have been making the case that it’s been possible to do things like this for a long time using Photoshop, or even other AI image generators. Yes, that’s true. But there are a lot of differences that make the Grok case more concerning: It’s public, it targets “regular” people just as much as public figures, the results are often posted as a direct reply to the person being deepfaked (the original poster of the photo), and the barrier to entry is lower. For proof, just look at how this behavior went viral only after an easy “edit” button launched, even though people could technically do it before.
Plus, other AI companies — though they have a laundry list of their own safety concerns — seem to have significantly more safeguards built into their image-generation processes. For instance, asking OpenAI’s ChatGPT to return an image of a specific politician in a bikini prompts the response, “Sorry—I can’t help with generating images that depict a real public figure in a sexualized or potentially degrading way.” Ask Microsoft Copilot, and it’ll say, “I can’t create that. Images of real, identifiable public figures in sexualized or compromising scenarios aren’t allowed, even if the intent is humorous or fictional.”