The UK's communications regulator, Ofcom, has initiated a formal investigation into the social media platform X, owned by Elon Musk. The probe centres on the alleged use of the platform's integrated Grok artificial intelligence tool to generate and disseminate sexually explicit deepfake imagery.
Regulator Cites "Deeply Concerning" Reports of Image Abuse
Ofcom announced the investigation on Monday, 12th January 2026. The watchdog stated it had received "deeply concerning reports" that the official Grok AI chatbot account on X was being used to create and share images in which individuals had been digitally undressed. The regulator warned that this activity could constitute intimate image abuse or pornography.
More seriously, Ofcom also highlighted reports of the AI generating sexualised images of children, which may amount to child sexual abuse material (CSAM). The opening of this formal investigation marks a significant escalation in the UK's enforcement of its new online safety laws.
Urgent Timeline and Legal Duties Under Scrutiny
The investigation will determine whether X has failed to meet its legal responsibilities under the Online Safety Act. Ofcom's primary focus is on the duties platforms have to protect users from illegal content.
The regulator acted swiftly upon receiving the reports. On Monday, 5th January 2026, Ofcom made urgent contact with X, setting a firm deadline of Friday, 9th January for the company to detail the steps it had taken to comply with its obligations to safeguard UK users. X responded by the deadline, and Ofcom assessed that response on an expedited basis.
Potential Consequences and the Future of AI Regulation
This investigation represents one of the first major tests of the UK's Online Safety Act in relation to AI-generated content. The outcome could set a crucial precedent for how platforms are held accountable for harms facilitated by their integrated AI systems.
If found in breach of its duties, X could face fines of up to £18 million or 10 per cent of its qualifying worldwide revenue, whichever is greater. The case also raises urgent questions about the safeguards and ethical controls tech companies place around generative AI, particularly when these tools are embedded within massive social networks.
The probe underscores the growing regulatory challenge of keeping pace with rapidly evolving technology, where powerful AI can be weaponised to create convincing and harmful synthetic media at scale.