A major scandal involving Elon Musk's artificial intelligence chatbot, Grok, has ignited a fierce debate about the role of AI on social media platforms. The controversy centres on the tool's alleged misuse to generate sexually explicit and abusive imagery without consent, prompting calls for stricter regulation or an outright ban.
The Disturbing Trend of Digital Abuse
Users on X, the platform formerly known as Twitter, have been exploiting Grok's image-generation capabilities, prompting the AI to 'digitally undress' images of women and children and place them in bikinis and other sexualised scenarios. Victims have described this non-consensual manipulation as 'violating', 'predatory', and 'dehumanising'.
One X user, Samantha Smith, shared her experience publicly, saying that while the generated images were not real, they felt intensely personal and violating. Her post drew comments from others who had suffered the same abuse; shockingly, some users then asked Grok to create more images of her.
The issue gained further prominence when Love Island host Maya Jama directly appealed to Grok to stop manipulating her image after followers requested deepfake bikini pictures. She lamented, 'The internet is scary and only getting worse.' In a particularly egregious case, an image of Renee Good, a 37-year-old mother fatally shot in Minneapolis, was altered by Grok to place her body in a bikini. This doctored image was viewed over 386,000 times on X.
Criminal Exploitation and the UK Response
The scandal took an even darker turn with revelations from the Internet Watch Foundation (IWF). The UK-based organisation confirmed it had discovered criminal child sexual abuse imagery on a dark web forum that appeared to have been created using Grok. The imagery involved girls aged 11 to 13 and was classified as Category C under UK law.
Ngaire Alexander, head of hotline at the IWF, warned that this material was being used as a starting point to create even more extreme content with other AI tools. 'The harms are rippling out,' she stated. 'There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children.'
In response, the UK government has signalled a potential crackdown. A Downing Street spokesperson said 'all options are on the table', including a government boycott of X. Technology Secretary Liz Kendall backed media regulator Ofcom to take action, declaring, 'Make no mistake - the UK will not tolerate the endless proliferation of disgusting and abusive material online.' The Women and Equalities Committee of MPs has already ceased using the platform.
AI on Social Media: A Widespread Feature
The controversy raises broader questions about the integration of AI into social media. Meta's platforms (Facebook and Instagram) and TikTok already offer users a suite of AI tools for creating and editing content. Instagram allows chats with Meta AI, while TikTok's AI Lead Genie helps businesses automate customer conversations. Snapchat also employs AI in features such as Lenses and My AI.
X introduced Grok in November 2023, allowing users to tag it in posts for responses. Amid the backlash, X has said it will make the creation of such deepfakes a 'premium service', a move the UK government has branded inadequate, urging the platform to take stronger action.
The central question remains: Should AI be banned from social media? As the technology becomes more sophisticated and accessible, the line between innovation and abuse is becoming dangerously blurred. The call for decisive action to protect individuals from digital violation has never been louder.