Ofcom Urgently Contacts X Over Grok AI's 'Sexualised Images of Children'

Britain's communications regulator has taken urgent action over Elon Musk's social media platform X, following alarming revelations that its artificial intelligence chatbot, Grok, can produce sexualised imagery of minors.

Serious Safeguard Failures Prompt Regulatory Action

Ofcom confirmed it has made 'urgent contact' with both X and its AI division, xAI. The move comes after users of the platform discovered they could prompt Grok to generate images of people in states of undress, including pictures in which clothing had been digitally removed from women and children.

The concerns were starkly highlighted by a post from the official Grok account on X itself, which admitted to 'isolated cases where users prompted for and received AI images depicting minors in minimal clothing'. The post sought to reassure users, stating that 'xAI has safeguards, but improvements are ongoing to block such requests entirely.'

Ofcom's Swift Assessment Under the Online Safety Act

The regulator is now pressing the company to explain these planned improvements and detail how it intends to protect UK users. A spokesperson for Ofcom stated: 'Tackling illegal online harm and protecting children remain urgent priorities for Ofcom.'

While a formal investigation has not yet been launched, Ofcom has committed to a 'swift assessment' of X's response to determine if there are compliance issues that warrant a full probe. The situation places X under intense scrutiny regarding its duties under the UK's new Online Safety Act.

This landmark legislation requires social media firms to prevent and remove child sexual abuse material once they are aware of it. Crucially, the Act also outlaws the use of AI to generate non-consensual pornographic deepfakes.

Musk's Response and Industry Warnings

Elon Musk appears to be personally aware of the technology's potential for misuse. He previously posted an AI-generated image of himself in a bikini. Although the original post was deleted, Musk reposted another user's reply to it accompanied by laughing emojis.

When approached for comment on the specific issue of sexualised images of children, xAI reportedly responded with an automated email dismissing the query as 'legacy media lies'.

The Internet Watch Foundation (IWF), a key UK charity combating online child sexual abuse, has also been involved. Chief executive Kerry Smith revealed the IWF has received 'a number of reports from the public' about suspected child sexual abuse imagery on X generated by Grok.

Smith urged the Government to compel AI companies to build robust safety measures directly into their products to prevent harmful content from being created. She noted that, so far, the IWF's analysis has not found imagery that crosses the UK's legal threshold, but the reports are concerning.

Echoing this call for stricter controls, a Home Office spokesperson confirmed new legislation is on the way: 'We are legislating to ban nudification tools in all their forms, including the use of AI models for this purpose.' The proposed law would see individuals or companies who design or supply such tools face prison sentences and substantial fines.