Social media platform X, owned by Elon Musk, has taken decisive action to block its artificial intelligence tool, Grok, from being used to generate non-consensual explicit images of real people. The policy change, announced on Thursday 15 January 2026, specifically prevents users from using the AI to edit photographs of real individuals so that they appear nude or in revealing clothing.
California and UK Launch Probes Into AI-Generated Imagery
The platform's announcement coincided with the launch of a formal investigation by California Attorney General Rob Bonta. The state probe is examining allegations that non-consensual, AI-generated explicit imagery, some of it reportedly depicting children, has proliferated on X, with much of it linked to the Grok tool.
The investigation was prompted by widespread reports and detailed findings from Reuters and the research group AI Forensics, which highlighted the prevalence of such content. The United Kingdom has also opened its own inquiries into the matter, signalling growing international concern over the misuse of generative AI.
Global Repercussions and Platform Restrictions
The controversy has already led to significant international consequences. Countries including Malaysia and Indonesia have moved to block access to the Grok chatbot entirely in response to the allegations.
In a further attempt to curb misuse and improve user accountability, X has also restricted all image creation and editing functions within Grok to paid subscribers. This tiered access model is intended to add a layer of protection and traceability to the AI's use.
Broader Implications for AI Governance
The incident underscores the urgent and complex challenges facing regulators and technology companies as powerful generative AI tools become more widely accessible. The swift action by national governments to block the tool, coupled with law enforcement probes, highlights the serious legal and ethical lines that have been crossed.
The situation places X and Elon Musk under intense scrutiny over the governance and safety measures built into the company's AI products. The decision to wall off Grok's image features behind a paywall is one of the first major platform-led responses to such a scandal, setting a potential precedent for how social media giants might attempt to mitigate the risks of their own AI systems.