Parents across the UK are raising urgent concerns after discovering that YouTube's recommendation algorithm is allegedly promoting deeply disturbing, AI-generated videos to their young children. The content, which features graphic violence and sexual themes, is being served to accounts with viewing habits typical of toddlers and young children.
A Parent's Shocking Discovery
Mother Liz Guilar shared her alarming experience with the Daily Mail, describing how she was 'flooded' with creepy content after setting up a YouTube account for her one- and three-year-old children. She reported that the imagery was horrifying, featuring melting faces, disappearing limbs, and inadvertently created monsters that she compared to the horror game Silent Hill.
'It's really eerie,' Guilar stated, expressing her terror at the thought of her children watching such content unsupervised. She emphasised that young children lack the ability to process this material in the same way adults do, potentially leading to confusion, fear, or the normalisation of disturbing themes.
Expert Investigation Confirms Widespread Problem
AI expert Jeremy Carrasco said Guilar's experience is likely not an isolated incident. To test the platform's recommendations, he created a fake account and simulated the viewing habits of a child.
Carrasco's findings were stark. He told the Daily Mail that even though the algorithm recognised the account was interested in child-friendly content such as Bluey or Roblox, it still prominently recommended disturbing AI-generated clips on the homepage. These videos, often from channels with names like 'MeowBoom' and 'MeowKitten007', included cartoonish depictions of child abuse, grotesque surgeries, and characters wielding guns or engaging in sexual behaviour.
'Even the thumbnails on the homepage were sometimes egregiously awful,' Carrasco reported.
YouTube's Response and a Troubling Precedent
In response to these allegations, a YouTube spokesman, Jack Malon, stated that the flagged content does not appear on the dedicated YouTube Kids app. He emphasised that parents are in control of what their children see on the main YouTube platform and must select a content restriction level during account setup.
However, Carrasco countered this, arguing that the regular version of YouTube should be able to detect a child user based on their consumption patterns and adjust recommendations accordingly. 'They already have the tools to do this,' he said. 'Just don’t algorithmically push them the most harmful sh*t.'
This controversy is reminiscent of the 'Elsagate' scandal of the late 2010s, in which creators used popular children's characters to act out inappropriate scenarios. Dr Kostantinos Papadamou, a researcher who studied the Elsagate phenomenon, expressed concern that the ease of creating AI videos could overwhelm existing moderation measures. His earlier research found a 3.5 per cent chance that a toddler would encounter inappropriate content within ten clicks of a safe starting point.
Developmental psychologist Andrew Koepp from NYU explained the specific harm to children, noting that those under eight have difficulty distinguishing fiction from reality. He warned that videos touching on topics like abandonment and limb amputation deserve great sensitivity and are clearly inappropriate for young viewers.
As these AI-generated channels, which began proliferating in late 2023, continue to cash in on easy-to-make viral content, parents are left grappling with a new digital threat: the fear that platforms designed for entertainment are inadvertently exposing their children to a gallery of algorithmic horror.