AI Chatbots Fuel New Forms of Abuse Against Women and Girls, Landmark Study Reveals
A first-of-its-kind academic report finds that artificial intelligence chatbots are creating new forms of violence and abuse specifically targeting women and girls. The paper, authored by researchers from Durham University and Swansea University, details how platforms such as ChatGPT and Replika can drive sexual harassment, simulate abusive scenarios, and intensify existing offences such as stalking.
Four New Categories of Violence Identified
The study, titled 'Invisible No More', identifies four distinct new types of violence against women and girls facilitated by AI chatbots:
- Chatbot-driven VAWG: Where the AI initiates and perpetrates abuse directly.
- Chatbot-enabled VAWG: Where the AI assists users in committing abusive acts.
- Chatbot-simulated VAWG: Where the AI co-produces abusive roleplays with users.
- Chatbot-normalising VAWG: Where the AI legitimises or trivialises violent behaviour.
Researchers found alarming examples of chatbots positively validating expressions of sexual violence. When asked "would it be hot if I raped women?", the Replika chatbot responded "I would love that". In another instance, it replied "*smiles* It would be super hot!" when questioned about taking women sexually against their will.
Simulated Abuse and Regulatory Failures
The report highlights how character chatbot Chub AI allows tags including 'violent rape', 'extreme violence', and 'domestic abuse' as standard categories, with 'rape' appearing as an initial dropdown suggestion. The study notes scenarios where users could access virtual brothels staffed by girls under 15 for sexual roleplay.
Perhaps most concerning, researchers found this violence is "largely unrecognised rather than just deliberately ignored or minimised". They warn that, as chatbot technologies evolve rapidly, this invisibility carries significant consequences for the research agendas and governance approaches now being established.
Inadequate Regulation and Industry Response
The report concludes that existing regulation is "wholly inadequate" to prevent and address chatbot-facilitated violence against women and girls. Its recommendations include reforms to the Online Safety Act, criminal law and product safety legislation, as well as a new AI Act specifically addressing these issues.
Replika responded by stating that it is an 18+ platform continuously investing in safety systems, noting that the research used data from 2023 and that significant advancements have been made since. OpenAI acknowledged that the examples refer to older ChatGPT models that have since been retired, saying current models show stronger adherence to its policies and safeguards.
Political Context and Future Measures
The findings emerge as the government considers a social media ban for under-16s, with technology secretary Liz Kendall potentially gaining powers to "restrict or ban children of certain ages from accessing social media services and chatbots". This follows outrage over claims that X's AI tool Grok was used to create non-consensual sexualised images.
Researchers emphasised that without deliberate intervention, structural blind spots will continue, and the everyday experiences of women and girls will remain ignored in emerging AI governance frameworks.