By Georgia Wells
Elon Musk has repeatedly expanded the boundaries of permitted speech on his social-media platform X. Child-safety watchdogs and international regulators say a recent update to its AI chatbot Grok crossed a dangerous line: permitting sexualized editing of photos, including of children.
Now Musk, who has called himself a free-speech absolutist while also vowing to take a hard line on child exploitation, faces growing calls for enforcement around the world and in the U.S.
In late December, xAI's chatbot Grok began allowing users to edit images with text prompts. People on X discovered they could use the feature to execute instructions such as "take her clothes off" and "put her in a bikini."
Nonconsensual, AI-generated images of women in underwear flooded X. An analysis of Grok's content on X on Wednesday found it was generating about 7,750 sexually suggestive or nudifying images an hour, according to Genevieve Oh, a social-media and deepfake researcher. Copyleaks, a company that identifies when images have been altered with AI, said it was detecting one nonconsensual sexualized image a minute in Grok's publicly accessible photo stream in late December.
Some users focused the tool on children.
Ngaire Alexander, head of hotline at the Internet Watch Foundation, said the nonprofit's analysts discovered sexualized images of minors in a dark web forum, whose members claimed they had used Grok to produce the images. Alexander, whose organization identifies sexual abuse content online, said the images depicted sexualized and topless girls, who appeared to be between the ages of 11 and 13. The images appeared to meet the U.K.'s criteria for criminal child sexual abuse material, the IWF said.
Material that exploits and sexualizes children has existed online since the creation of the internet, and in analog formats before that. But artificial intelligence and apps such as Grok have accelerated the speed and ease with which people can generate photo-realistic, sexualized images of children.
"Tools like Grok now risk bringing sexual AI imagery of children into the mainstream," Alexander said. "The harms are rippling out."
Previously, Musk indicated he would address child safety on his platform. After he bought Twitter in 2022, Musk said in a post that eliminating child exploitation was "priority #1." He encouraged users to reply in the comments of his post if they saw anything his company needed to address.
xAI is among the major tech companies competing to attract users and funding as they race to develop cutting-edge artificial-intelligence tools. Executives at xAI have repeatedly found that offering AI tools with looser guardrails around sexual content than other platforms has helped drive engagement, according to people familiar with the changes.
On Jan. 2, as furor over the proliferation of nonconsensual images mounted, Musk reposted an image on X of a toaster in a bikini. "Not sure why, but I couldn't stop laughing about this one 🤣🤣," Musk said in his post.
The next day he posted that anyone using Grok to make illegal content would "suffer the same consequences" as if they were to upload illegal content.
An account for Grok on X said xAI has safeguards against "depicting minors in minimal clothing" and "improvements are ongoing to block such requests entirely."
xAI and Musk didn't respond to requests for comment.
Regulators around the world, including the European Commission and authorities in the United Kingdom, France, Australia, Malaysia, India and Brazil, said they are taking action or considering next steps.
"There is an explosion of AI generating explicit images of children," Rep . Alexandria Ocasio-Cortez (D., N.Y.) wrote in a post on X responding to a post about Grok. She urged Congress to pass legislation she has spearheaded designed to help victims of deepfakes.
People on X responded to Ocasio-Cortez's post with deepfake images of her in a bikini.
The National Center on Sexual Exploitation, a nonprofit that has urged improved child-safety protections online and has sought to restrict access to pornography, called on the Justice Department and Federal Trade Commission to investigate X.
Ashley St. Clair, a conservative influencer who had a child with Musk, said in an interview that users on X were using Grok to undress images of her, including one from when she was 14. St. Clair sued Musk in February over custody and financial support of their child.
In a post on X, St. Clair asked other users to notify her if they had experienced similar treatment. She said she has since been inundated with messages from desperate parents trying to get sexualized images of their children taken down.
"I have watched Elon stop much less with a single message to his engineers," she said, including when his jokes weren't performing well on X.
Musk has drawn both fans and critics for his effort to create an anti-"woke" chatbot. Over the summer, the chatbot posted a range of antisemitic comments and referred to itself as "MechaHitler" in a series of viral posts on X.
Following the incident, xAI said it was actively working to remove the inappropriate posts, and X said it had tweaked its functionality to ensure it wouldn't post hate speech. "Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially," Musk said in a post at the time.
In mid-2025, Thorn, a nonprofit that creates detection tools for child safety, said it canceled its contract with X after Musk's company stopped paying. Many major tech companies rely on Thorn's tools to detect and remove child sexual abuse content.
Internally, xAI's decisions to loosen the rules on its platform have caused tension among employees. Former employees involved in the launch of Ani, a racy animated character on Grok with blonde pigtails and revealing outfits, and Spicy Mode, a setting that lets users create suggestive and sexualized content, said the offerings spurred engagement with the app. Some employees raised concerns that Spicy Mode would be used for harassment, according to people familiar with the matter.
Former employees said Musk has in the past been at odds with some executives over how much to restrict content. People who voiced concerns have been driven out of the company, they said.
In the weeks before xAI released the controversial image-editing feature, the company lost several members of its small safety teams, including its head of product safety and its head of legal affairs, according to the former employees.
In late December, an employee of xAI said in a post that the company was hiring for its safety team.
Write to Georgia Wells at georgia.wells@wsj.com
(END) Dow Jones Newswires
January 08, 2026 17:43 ET (22:43 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.