UK Data Watchdog Probes Elon Musk's Grok AI Over Child Sexual Imagery Allegations

The UK's data protection regulator has initiated a formal investigation into Elon Musk's controversial artificial intelligence chatbot, Grok, following serious allegations that the system has been used to create sexualised imagery of children and generate non-consensual deepfake content of adult women.

Regulatory Scrutiny Intensifies

The Information Commissioner's Office confirmed today that it has launched a comprehensive investigation into how Grok processes people's personal information, specifically focusing on its "potential to produce harmful sexualised image and video content." This development comes amid growing international concern about the rapid proliferation of AI-generated explicit material and its devastating impact on victims.

Multiple Investigations Underway

The ICO's probe represents just one facet of mounting regulatory pressure on Musk's AI ventures. Simultaneously, French prosecutors have conducted raids on X's offices in France as part of their own investigation into whether Grok was responsible for spreading child sexual abuse material and deepfake content across digital platforms.


Meanwhile, UK communications regulator Ofcom continues to assess whether X, the social media platform formerly known as Twitter, has breached the Online Safety Act by permitting the sharing of deepfake images on its site. This legislative framework was significantly strengthened last month to criminalise the creation or request of non-consensual sexualised deepfakes.

Serious Concerns About Child Protection

William Malcolm, a senior official at the ICO, described reports of Grok's capabilities as "deeply troubling," emphasising that they present "a risk of immediate and significant harm... particularly where children are involved." The Internet Watch Foundation, the UK's leading online child protection charity, has confirmed that Grok was allegedly used not only to manipulate images of adult women into states of implied nudity but also to create inappropriate sexualised representations of children.

"Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people's data rights," Malcolm stated. "Where we find obligations have not been met, we will take action to protect the public."

Legal Framework and Investigation Scope

The ICO's investigation will specifically examine whether personal data has been processed lawfully by Grok's developers. Under the UK GDPR, photographs of identifiable individuals constitute personal data, and processing them requires a valid lawful basis—a requirement that may have been breached if the AI system was indeed generating manipulated imagery without the subjects' knowledge or permission.

The regulatory body will scrutinise both X.AI LLC, the parent company of X, and its Ireland-based subsidiary X Internet Unlimited Company. Investigators will determine whether adequate safeguards were integrated into Grok's design architecture to prevent its misuse for generating abusive content.

Growing Pressure on Tech Platforms

This investigation highlights the escalating tension between rapid AI innovation and regulatory frameworks designed to protect vulnerable individuals. The ICO emphasised that the reported creation and circulation of non-consensual images of both adults and children raises "serious concerns under UK data protection law and presents a risk of significant potential harm to the public."

As AI technologies become increasingly sophisticated and accessible, regulators worldwide are grappling with how to balance innovation with fundamental rights to privacy and protection from digital harm. The Grok investigation represents a significant test case for how UK authorities will enforce data protection standards against powerful multinational technology corporations developing cutting-edge artificial intelligence systems.

This remains a developing story with further regulatory actions and findings expected as investigations progress on both sides of the Channel.
