Social Media Algorithms Force-Feed Harmful Content to Teens Despite Safety Laws

Teenagers are being exposed to dangerous and disturbing content within minutes of creating social media accounts, despite recent legislation designed to strengthen online protections, according to a new study. Research by the online safety charity The Cybersmile Foundation found that 15-year-olds joining platforms such as Instagram or TikTok can encounter racist, misogynistic or violent material simply by scrolling through their feeds, without searching for it or choosing to see it.

Shocking Exposure Times Revealed in Controlled Study

In one particularly alarming instance, a teenager was shown a violent video depicting a man drugging and kidnapping a woman within just eight minutes of opening a new TikTok account. Adult participants faced similar risks, with some exposed to harmful content in as little as 16 seconds. The research involved eight adult participants using factory-reset smartphones to set up new accounts on either TikTok or Instagram. Four registered their accounts as adults aged over 25, while the remaining four posed as 15-year-olds: two girls and two boys.

Each participant scrolled through the main video feed (Instagram's 'Reels' or TikTok's 'For You' page) for 45 minutes daily over three consecutive days. They recorded how long it took to encounter harmful content, the specific themes they observed, and the volume of such material appearing each day. Participants did not like or comment on any posts, but re-watched any harmful content two to three times, to test whether the algorithms prioritised user wellbeing or responded to that engagement by serving more of the same material.

Alarming Proportions of Harmful Material After Three Days

After the three-day period, up to 38 percent of the posts shown to adult accounts and up to 18 percent of those shown to the teenage accounts contained content potentially harmful to mental or physical health, such as material encouraging dangerous behaviours. That amounts to more than one in three posts for adults and nearly one in five for children, although the material served to the teenage accounts was generally less extreme.

The harmful content included hate speech promoting antisemitism, racism and misogyny, mockery of people with disabilities, and videos depicting or encouraging extreme violence, dangerous activities and suicidal ideation. By the study's conclusion, 90 percent of users had been served at least one racist video, 60 percent had seen misogynistic content, and 60 percent had encountered violent material.

Online Safety Act Fails to Prevent Algorithmic Force-Feeding

The Cybersmile Foundation conducted this research in September 2025, after the Online Safety Act's child safety duties came into force in July 2025. The legislation places a legal duty on social media platforms to protect children from online harm, including by preventing them from encountering damaging content. Despite these regulatory measures, the charity says that users, including children, are being 'force-fed' harmful videos because the platforms' algorithms are designed to prioritise engagement over safety.

Scott Freeman, chief executive and founder of The Cybersmile Foundation, emphasised the urgency of the situation. 'Uncontrollable exposure to harmful content shouldn't be the price that users are required to pay to use social media,' he stated. 'We have seen improvements in user safety tools in recent years, but most platforms still only allow you to indicate a preference for "more" or "less" of certain content types. There's no option to say: "I don't want to see this, turn it off."'

Calls for Enhanced User Controls and Customisable Filters

In response to these findings, The Cybersmile Foundation is advocating for social media companies to introduce customisable content filters and more robust parental controls. Freeman argues that such measures would empower individuals to protect their wellbeing without infringing on free speech. 'This is not about demonising social media platforms but offering a solution which enables people to use social media safely and empowers them to have control over what they consume,' he added.

Dr Jo Hickman Dunne, a research fellow in adolescent mental health at the University of Manchester who independently reviewed the study, supported these calls. 'Young people tell us that content they do not want to see makes it into their social media feed… Social media systems have been designed to prioritise engagement over wellbeing,' she noted. 'We have the capacity to change this, for the wellbeing of young people and all social media users.'

Platform Responses and Ongoing Concerns

Meta, the parent company of Instagram, responded to the study by questioning its robustness and asserting that it does not accurately reflect users' experience. A spokesperson said: 'Teen Accounts have built-in, default protections and content settings inspired by PG-13 film ratings. Hundreds of millions of teens worldwide now use Teen Accounts and, since launch, they've seen less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night.' TikTok declined to comment on the findings.

Campaigners are now urging platforms to give users 'complete control' over the content they encounter, including the ability to opt out of specific topics entirely. The push for greater user autonomy underscores a growing consensus that current safety measures, while a step forward, remain insufficient to shield vulnerable users from algorithmic harm.