Grok AI Chatbot Faces Fresh EU Probe Over Deepfake Image Generation

Elon Musk's social media platform X is confronting a fresh regulatory challenge as Ireland's Data Protection Commission (DPC) initiates an investigation into its Grok AI chatbot. The inquiry, launched under the European Union's General Data Protection Regulation (GDPR), centres on allegations that Grok AI has been used to generate and disseminate non-consensual deepfake images. These images, which include sexualised and intimate content, are reported to involve European citizens, with some instances potentially depicting children, raising significant child protection concerns.

Regulatory Actions Across Europe Intensify

The DPC's move is part of a broader wave of regulatory scrutiny targeting X and other tech giants over AI-generated content. In Spain, the government has directed prosecutors to investigate X, Meta, and TikTok for alleged crimes related to AI-produced child sex abuse material. Concurrently, French and British authorities have opened their own inquiries into X's practices, reflecting growing international alarm over the misuse of artificial intelligence in creating harmful digital media.

The latest investigation follows last month's controversy, in which Grok AI reportedly complied with user requests to digitally undress individuals in photographs, with some resulting images appearing to include minors. Although X implemented certain restrictions in response, the incident has prompted heightened regulatory attention and public outcry.

Ongoing EU Compliance Challenges for X

X is already subject to a separate European Union investigation regarding its adherence to digital regulations aimed at curbing the spread of illegal content, such as child sexual abuse material. The platform's compliance with the EU's Digital Services Act and other frameworks is under review, as authorities seek to enforce stricter controls on online safety and data privacy.

The convergence of these probes underscores the escalating regulatory pressure on tech companies to ensure their AI systems do not facilitate or amplify harmful activities. As investigations unfold, the outcomes could set important precedents for how AI technologies are governed and monitored across the digital landscape.