AI-Generated Child Abuse Videos Surge 26,000% in 'Worst Year on Record'

A devastating new report has exposed an unprecedented explosion in the use of artificial intelligence to create child sexual abuse material, with paedophiles generating thousands of videos in what has been labelled the worst year on record.

An Unprecedented and Frightening Surge

Analysis by the Internet Watch Foundation (IWF) found that in 2025, criminals used AI to create 3,440 videos of child sexual abuse. This represents a staggering 26,362 per cent increase on the mere 13 such videos the charity identified in 2024.

Perhaps most alarmingly, 65 per cent of these AI-generated videos were classified as Category A, the most extreme level of abuse, which can involve penetration, bestiality, and sexual torture. The IWF described the scale of the increase as 'frightening'.

'Our analysts work tirelessly to get this imagery removed to give victims some hope,' said Kerry Smith, Chief Executive of the IWF. 'But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.'

Real Children, Real Harm

The IWF stressed that the synthetic nature of this material does not mean no child was harmed. Often, the likenesses of real children known to the abuser are used as a basis for the AI-generated videos, or their images are used to 'train' the AI models.

This was starkly illustrated in the 2024 case of Hugh Nelson, then 27, who was sentenced to 18 years in prison. He used AI to alter photographs of real children to create abuse images for paying customers, who were frequently the fathers, uncles, or family friends of the victims.

Jamie Hurworth, an Online Safety Act expert at Payne Hicks Beach, stated: 'The use of generative AI to create child sexual abuse material should not be a legal grey area. It is sexual exploitation, regardless of whether the images are "synthetic".'

Calls for Action and Regulatory Gaps

The charity is now calling for immediate action to ban the technology used for this purpose. They warn that tools to create this material are now so advanced that criminals with minimal technical knowledge can produce extreme content at scale and speed.

'The frightening rise in extreme Category A videos shows the kind of things criminals want. And it is dangerous,' Ms Smith explained. 'Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation, and further endanger children both on and offline.'

The report highlights a significant legal hurdle: under current laws, it is extremely difficult for authorities to test whether an AI tool can be misused without potentially committing an offence if any abusive imagery is inadvertently created during the process.

In response, the government has proposed new rules. These would give designated bodies, such as the IWF, along with AI developers, powers to scrutinise AI models to ensure they cannot create nude or sexual imagery of children. Plans were also announced in December to outlaw AI 'nudify' apps.

Tech Secretary Liz Kendall said: 'It is utterly abhorrent that AI is being used to target women and girls in this way. We will not tolerate this technology being weaponised to cause harm.'

The issue of platform responsibility was also underscored recently when X (formerly Twitter) was forced to restrict the image generation capabilities of Elon Musk's Grok AI after it produced sexualised images of children and of adults altered to look like children. Ashley St Clair, the mother of one of Musk's sons, is now suing X over AI-generated images depicting her as a 14-year-old in a bikini.

The IWF's annual figures reveal the broader context: in 2025, analysts took action on 312,030 reports containing confirmed child sexual abuse material, a seven per cent increase from 2024, driven in large part by the explosion in AI-generated content.