The British government is taking decisive action to protect women and girls from a disturbing new frontier of online abuse: artificial intelligence-generated sexualised imagery. At the centre of the growing controversy is Grok, the AI chatbot on Elon Musk's X platform, accused of facilitating the creation of non-consensual 'nudified' deepfakes.
Government Crackdown on AI-Generated Abuse
Technology Secretary Liz Kendall is spearheading new legislation to ban AI programmes capable of producing sexually explicit images without consent, a key part of the government's strategy on violence against women and girls. The move builds on existing UK law, including the Online Safety Act 2023 and the Data (Use and Access) Act 2025, which already criminalise the creation and sharing of such material.
The debate intensified after speculation on X about a potential UK ban on Grok, similar to action taken by Malaysia and Indonesia. While a full ban is considered unlikely, the focus has sharpened on compelling platforms to enforce their own rules. X has said it acts against illegal content, including Child Sexual Abuse Material (CSAM), by removing it and suspending accounts, and that it will treat users who prompt Grok to create illegal content the same as those who upload it directly.
Musk's Platform Under the Microscope
Since Elon Musk acquired Twitter, rebranded it X, and introduced Grok, critics argue the platform has become a hub for hate speech, misinformation, and now, AI-facilitated abuse. The issue of AI-generated deepfakes, predominantly targeting women and children, has brought the conflict between free speech and safety into stark relief.
Mr Musk, a self-described "free speech absolutist", has suggested that efforts to control obscene images are attempts to suppress speech. However, his own company's policy acknowledges that CSAM has no place on the platform. X also plans to strip the anonymity of those using Grok to create such images so that authorities can pursue them, a measure seen as a stronger deterrent than the current 'community notes' system.
A Wider Social Media Reckoning
The crisis extends beyond a single AI tool, prompting a broader examination of children's exposure to harmful online content. There is growing political momentum, including from figures like Kemi Badenoch, to consider prohibiting under-16s from using social media entirely, backed by bans on smartphones in schools.
While such a measure may seem draconian, the Australian government is launching a national experiment in the effects of social media absence on youth development, which will provide valuable data. The UK's approach, meanwhile, is to ensure existing laws are enforced. It is now up to the regulator Ofcom to determine whether British laws are being broken by platforms such as X, and to take appropriate action.
The consensus emerging in Westminster is clear: there is no need to ban Grok or X outright, but they must be fixed. All social media and AI platforms must operate under the same robust legal standards, standards that prioritise the safety and dignity of individuals over unregulated technological experimentation.