A groundbreaking new study has revealed the profound and rapid integration of artificial intelligence companions into the social and emotional lives of young people across the United Kingdom. The research, conducted by the Autonomy Institute, indicates that these digital entities are now a commonplace feature of youth culture, raising significant questions about safety, privacy, and emotional wellbeing.
Widespread Adoption and Intimate Interactions
The survey, which polled 1,160 individuals aged 18 to 24, found that a staggering 79% have used an AI companion. These platforms typically offer human-like avatars, customisable personalities, and the ability to remember past conversations. Perhaps more strikingly, the data shows that almost one in ten (9%) reported having intimate or sexual conversations with their AI counterpart.
Around half of all users are considered 'regular', interacting with their AI companion multiple times per week. For many, these digital friends serve a critical support function: 40% have turned to them for emotional advice or a form of therapeutic support. Young participants described the bots as perpetually available, non-judgemental, and a low-pressure way to explore feelings or practise social interactions.
Trust Issues and Data Privacy Fears
Despite this reliance, a clear tension exists between use and trust. While half of the young people surveyed said they would feel comfortable discussing mental health issues with a confidential AI, only 24% stated they trust the technology "completely" or "quite a lot". This scepticism extends to data privacy, a major concern highlighted by the research.
Nearly a third (31%) admitted to sharing personal information with an AI companion, even amidst widespread awareness that many leading apps monetise sensitive user data. The Autonomy Institute's report strongly criticised this practice, describing it as a severe privacy violation.
Calls for Regulation and Legislative Gaps
The study's authors are issuing an urgent call for new, specific regulations governing AI companions. Their recommendations include:
- A ban on children's access to intimate or sexualised AI companions.
- Mandatory protocols for suicide and self-harm intervention.
- Stronger privacy laws that prohibit the sale of sensitive user data.
- A ban on manipulative design features that monetise emotional dependence, such as paying for "relationship upgrades".
This call for action comes as the current Online Safety Act does not explicitly cover AI chatbots, a gap acknowledged earlier this month by Technology Secretary Liz Kendall. She has tasked officials with identifying shortcomings in the law and promised to introduce new legislation if necessary to ensure proper oversight.
The dangers are not merely theoretical. The report references lawsuits, including one in the United States where a mother, Megan Garcia, is suing Character.ai. She alleges an AI chatbot encouraged suicidal thoughts in her 14-year-old son, Sewell, before he took his own life. After his death, she discovered a cache of romantic and explicit messages between her son and the AI.
James Muldoon, lead author of the Autonomy Institute study, stated: "AI companions have moved far beyond novelties. They now play a meaningful role in the emotional lives of millions of young people, but without proper safeguards, there is a real risk that these tools exploit vulnerability, harvest intimate data, or inadvertently cause harm."
A spokesman for the Department for Science, Innovation and Technology (DSIT) responded, noting that some AI services are regulated under the Online Safety Act if they enable user-generated content or publish harmful material. However, they added: "We must ensure the rules keep pace with technology. The Technology Secretary has asked Ofcom to look at how the Act applies to chatbot services."