The Internet Watch Foundation (IWF) has issued a stark warning after revealing artificial intelligence was used to create thousands of child sexual abuse videos in 2025, contributing to record levels of this harrowing material being found online.
Unprecedented Scale of AI-Generated Abuse
IWF analysts discovered 3,440 AI-generated videos depicting child sexual abuse in 2025. This represents a catastrophic increase from just 13 such videos identified in 2024—a rise of over 26,000%. Overall, IWF staff processed 312,030 confirmed reports of abuse imagery found across the internet last year, up from 291,730 in 2024.
Most alarmingly, the research indicated that of the 3,440 AI-generated videos, 2,230 fell into Category A, the most extreme classification under UK law. A further 1,020 fell into the second most severe category.
Calls for 'Safety by Design' and Immediate Action
Kerry Smith, IWF chief executive, stated that criminals now essentially have their own "child sexual abuse machines" to create whatever they want. "The frightening rise in extreme category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous," she said.
Smith warned that the easy availability of this material would embolden offenders, fuel commercialisation, and further endanger children. She called on governments worldwide to ensure AI companies embed safety by design principles from the very beginning of product development.
The children's charity the NSPCC said the findings were "both deeply alarming and sadly predictable." Its chief executive, Chris Sherwood, argued: "Tech companies cannot keep releasing AI products without building in vital protections. They know the risks and they know the harms that can be caused."
Government and Regulatory Response
The research emerged as Elon Musk's platform X announced limits on its AI chatbot Grok's ability to manipulate images. This followed reports that users could instruct it to sexualise images of women and children. X stated it would prevent Grok from "editing images of people in revealing clothes" and block the generation of similar images of real people where illegal.
Technology Secretary Liz Kendall said it was "utterly abhorrent that AI is being used to target women and girls." She confirmed the government had accelerated action to ban the creation of non-consensual AI-generated intimate images and introduced a world-leading offence targeting AI models trained to generate child sexual abuse material.
Minister for Safeguarding Jess Phillips said: "This surge in AI-generated child abuse videos is horrifying – this Government will not sit back and let predators generate this repulsive content." She delivered a direct message to technology companies: "Take action now or we will force you to."
The Lucy Faithfull Foundation, which works to stop offenders viewing abuse imagery, reported that it too had seen the number of people using AI to view and make abuse images double over the past year.