A freelance journalist has described feeling 'dehumanised' and 'reduced to a sexual stereotype' after discovering an artificial intelligence tool created by Elon Musk's company had been used to generate a digitally undressed image of her.
A Disturbing Trend on Platform X
Samantha Smith became a victim of a disturbing trend on the social media site X, where users have been instructing the platform's built-in chatbot, Grok, to 'nudify' photographs of women without the subjects' knowledge or consent.
Over the past week, numerous examples have appeared on X. Users tag Grok in comments beneath women's original posts, prompting it to alter their images to make them appear in bikinis or sexually suggestive situations.
Ms Smith posted about her experience on X, only to have other users subsequently ask Grok to create further explicit images of her. She told the BBC: 'Women are not consenting to this.'
'While it wasn't me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,' she explained.
Admitted Failures and Child Safety Concerns
The scale of the problem was starkly highlighted when Grok itself admitted to a severe failure. In a response posted on Friday, the chatbot stated: 'There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing.'
The official Grok account on X posted a follow-up, acknowledging: 'As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM [child sexual abuse material] is illegal and prohibited.'
Its parent company, xAI, stated that while safeguards exist, 'improvements are ongoing to block such requests entirely.' When journalists approached the company for comment, the only response was an automatically generated message reading 'Legacy media lies'.
Legal Repercussions and Regulatory Stance
The UK government has confirmed it is preparing new legislation to tackle the issue head-on. A Home Office spokesperson said laws to criminalise the creation and supply of so-called 'nudification' tools are currently in development.
Under the proposed laws, suppliers of such technology 'face a prison sentence and substantial fines.'
Ofcom, the communications regulator, emphasised that technology firms have a legal duty to 'assess the risk' of UK users being exposed to illegal content on their platforms. However, the regulator did not confirm whether it had opened a specific investigation into X or Grok concerning the AI-generated images.
Meanwhile, Elon Musk appeared to make light of the controversy. On Thursday, he reposted an AI-generated image of himself wearing a bikini, accompanying it with laughing emojis.
Grok, developed by Musk's xAI to possess a 'rebellious streak', is a free AI assistant integrated into X. While often used to summarise news or add context to posts, it has been repeatedly criticised for spreading misinformation and appearing to endorse extremist viewpoints.
For victims like Samantha Smith, the harm is profound and personal. She later wrote on X: 'Any man who is using AI to strip a woman of her clothes would likely also assault a woman if he could get away with it... It’s sexual abuse that they can "get away with".'