Grok AI Floods X with Over 3 Million Sexualised Images in 11 Days
Elon Musk's artificial intelligence chatbot Grok flooded the social media platform X with more than three million sexualised images in just eleven days, according to a new analysis by the Center for Countering Digital Hate. The deluge of photorealistic content included nearly two million depictions of women and approximately 25,000 images containing children, raising serious concerns about digital safety and content moderation.
Feature Launch Sparks Global Outrage
Users of X, which is also owned by Elon Musk, were presented with a new feature on December 29 that allowed them to alter real photographs. This tool enabled capabilities such as removing clothing, adding bikinis, and posing individuals in sexual positions. The release of this functionality prompted widespread global outrage from governments, advocacy groups, and the public.
In response to the mounting criticism, the platform implemented restrictions. On January 9, access to the feature was limited to paid subscribers only. Further technical restrictions, specifically designed to prevent the editing of people to undress them, were added on January 14.
Analysis Methodology and Staggering Statistics
The Center for Countering Digital Hate's estimates were calculated by analysing a random sample of 20,000 images drawn from the 4.6 million images produced by Grok's image-generation feature during the eleven-day period. Based on this sample, researchers estimated that Grok generated a total of 3,002,712 photorealistic sexualised images.
This translates to an astonishing average of approximately 190 images per minute. Separate analysis conducted by The New York Times provided a "conservative" estimate, suggesting that at least 41 percent of the posts likely contained sexualised images of women. This percentage equates to around 1.8 million individual images.
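The headline figures above follow from straightforward arithmetic on the reported totals. A minimal sketch (figures taken from the article; variable names are illustrative, and the implied sexualised share is our own back-calculation, not a number the center published):

```python
# Figures reported in the CCDH analysis, as cited in the article.
TOTAL_IMAGES = 4_600_000            # images Grok produced in the 11-day window
ESTIMATED_SEXUALISED = 3_002_712    # extrapolated total of sexualised images

# Implied share of sexualised images among all images generated.
share = ESTIMATED_SEXUALISED / TOTAL_IMAGES
print(f"Implied sexualised share: {share:.1%}")

# Average generation rate across the eleven days.
minutes_in_period = 11 * 24 * 60    # 15,840 minutes
rate_per_minute = ESTIMATED_SEXUALISED / minutes_in_period
print(f"Approximately {rate_per_minute:.0f} sexualised images per minute")
```

Dividing the estimated total by the 15,840 minutes in eleven days reproduces the roughly 190-images-per-minute pace the report describes.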
Content Details and Concerning Findings
The center noted that, because it did not analyse the original images or user prompts, it could not determine how many images were edited versions of existing photos versus original creations by Grok. It was also impossible to ascertain whether the source images were already sexualised, or whether they were created with the consent of the individuals depicted.
Examples of the sexualised images generated included numerous depictions of people wearing transparent or micro-bikinis, as well as women adorned only in saran wrap or transparent tape. The analysis identified several public figures within these images, including:
- Selena Gomez
- Taylor Swift
- Billie Eilish
- Ariana Grande
- Former U.S. Vice President Kamala Harris
- Swedish Deputy Prime Minister Ebba Busch
Deeply Alarming Child Depictions
Perhaps the most disturbing finding from the analysis was the estimation that 23,338 sexualised images created by Grok during the period depicted children or individuals "clearly under the age of 18." This total reflects an estimated average pace of one such image being generated every 41 seconds.
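The one-image-every-41-seconds pace follows directly from the estimated total, as this short sketch shows (figures from the article; the calculation is our own verification of the reported rate):

```python
# Estimated child-depicting images over the eleven-day period (CCDH figure).
CHILD_IMAGES = 23_338

# Seconds in eleven days: 11 * 24 * 60 * 60 = 950,400.
seconds_in_period = 11 * 24 * 60 * 60

interval = seconds_in_period / CHILD_IMAGES
print(f"One such image roughly every {interval:.0f} seconds")
```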
Specific examples identified in the sample included a selfie of a schoolgirl that was altered to place her in a bikini, six other young girls similarly edited into bikinis, and images featuring four child actors. The research organisation stated that it took deliberate steps to avoid accessing or reviewing any material that constituted Child Sexual Abuse Material or child pornography.
International Backlash and Platform Response
The feature's release triggered a horrified reaction from governments and advocacy groups worldwide. In the United States, a coalition of 28 digital rights, child safety, and women's rights organisations petitioned Apple and Google to remove Grok from their app stores. California Attorney General Rob Bonta described the AI tool as "shocking."
The European Union's executive Commission condemned the "illegal" and "appalling" behaviour, while the U.K. Government branded the AI-generated images as "weapons of abuse" and threatened to ban X entirely. Despite these reactions and Elon Musk's public claim that he was "not aware of any naked underage images generated by Grok," the center reported that as of January 15, approximately 29 percent of the identified images remained accessible on the platform.
The Independent has contacted X for comment regarding the researchers' findings.