AI-Generated Child Abuse Content Surges 260-Fold, Watchdog Reports

The Internet Watch Foundation (IWF), a leading safety watchdog, has reported a staggering 260-fold increase in the volume of realistic AI-generated child sexual abuse videos identified online last year. This alarming surge highlights the dark side of advancing artificial intelligence technologies, with the majority of this content depicting the most extreme forms of abuse.

Sharp Rise in AI-Made Abuse Content

In 2025, the IWF verified 8,029 AI-generated images and videos classified as child sexual abuse material (CSAM), a 14% overall increase on the previous year. Videos saw the most dramatic escalation, rising more than 260-fold. The watchdog noted that 65% of the 3,443 videos were graded Category A, the most severe classification under UK law, compared with 43% of non-AI videos. This disparity underscores how AI is being exploited to produce more violent and harmful content.

Technology Exploited for Harm

Kerry Smith, the chief executive of the IWF, expressed grave concern, stating, "Advances in technology should never come at the expense of a child's safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous." IWF analysts said discussions among paedophiles on the dark web show the videos are "regarded with delight" by users of CSAM, particularly as AI systems become more adept at creating realistic outputs, adding audio to videos, and manipulating imagery of real children known to offenders.


Global Efforts and Legislative Responses

The UK-based IWF, which operates a hotline and has a global mandate to monitor CSAM, reported that offenders are also exploring "agentic" systems capable of carrying out tasks autonomously. In response, the UK government has empowered tech companies and child protection agencies to test AI tools for their potential to generate CSAM, aiming to prevent abuse before it occurs. This initiative allows designated entities to examine generative AI models, such as those behind chatbots like ChatGPT and video generators like Google's Veo 3, to ensure robust safeguards are in place.

Public Demand for Safety Measures

Polling conducted by the IWF indicates that eight out of ten UK adults support legislation requiring AI systems to be developed with safety as a priority and "future-proofed from causing harm." Last year, the government implemented a ban on possessing, creating, or distributing AI models designed to generate child sexual abuse material. Smith emphasized, "Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line."

The growing proficiency and availability of AI systems have driven a sharp rise in the CSAM verified by the IWF, with videos showing the steepest increase. The trend underscores the urgent need for continued vigilance and proactive measures to protect children from technological exploitation.
