Elon Musk's social media platform X has announced a significant policy shift, blocking its own artificial intelligence tool, Grok, from being used to create or edit images that show real people undressed or in revealing clothing such as bikinis.
Policy Change and Platform Scrutiny
The announcement on 15 January 2026 came on the same day that the state of California revealed it was launching a formal investigation into allegations surrounding the AI tool. The probe, led by Attorney General Rob Bonta, will examine claims that Grok has been used to generate a spate of non-consensual, sexually explicit images, including depictions of children.
X's safety team stated it had implemented technological measures to prevent the Grok account from editing images of real people into revealing clothing, a restriction that applies to all users, including paid subscribers. Furthermore, the ability to create and edit images via Grok on X is now limited to paying customers. The company argues this adds a layer of accountability, making it easier to identify individuals who abuse the tool to violate laws or platform policies.
It remains unclear whether the new rules explicitly ban requests to generate fully nude images of real people. The move follows intense international scrutiny: Malaysia and Indonesia have blocked the chatbot, and UK regulator Ofcom has launched its own investigation.
Global Backlash and Disturbing Findings
Attorney General Bonta cited an "avalanche of reports" detailing the non-consensual material allegedly produced by xAI, Musk's artificial intelligence company. "This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet," Bonta said in a statement, urging xAI to take immediate action.
These concerns are supported by independent research. A Reuters investigation found examples of non-consensual explicit imagery generated on X, where Grok is integrated. Meanwhile, the Paris-based non-profit AI Forensics concluded that more than half of all AI-generated images on X are of adults and children with their clothes digitally removed.
"Non-consensual sexual imagery of women, sometimes appearing very young, is widespread rather than exceptional," said Paul Bouchaud, a researcher at AI Forensics, who also noted the presence of other prohibited content like ISIS and Nazi propaganda in Grok's outputs.
Musk's Defence and Regulatory Clashes
Elon Musk directly addressed the allegations on X, writing, "I am not aware of any naked underage images generated by Grok. Literally zero." He asserted that Grok is designed to refuse illegal requests and obey local laws, attributing any problematic outputs to "adversarial hacking" of prompts, cases he said are treated as bugs and fixed.
Musk separately clarified that an "NSFW" mode on Grok had permitted upper-body nudity of fictional adults, akin to content in R-rated movies, though it is unclear whether this mode remains active under the new restrictions.
The controversy unfolds as AI chatbots become more embedded in daily life and major institutions, with the Pentagon recently announcing plans to integrate Grok into its workflows. Musk has a history of clashing with regulators, having previously accused the UK of wanting to "suppress free speech" over its scrutiny of X. Last August he also mounted a legal challenge to a California law that sought to force social media platforms to remove deepfakes of political candidates during elections.