Culture Secretary Lisa Nandy has expressed profound personal anxieties about the dangers artificial intelligence chatbots pose to children, stating the issue is something that "keeps me awake at night."
Speaking candidly, the minister revealed her own fears as a parent about what her young son could encounter on the internet, despite having parental controls in place.
The 'Dark Places' of AI Conversations
Ms Nandy highlighted the particular threat posed by chatbots, where a child can be led into "very dark places" through conversations with a virtual stranger. She emphasised that this is a source of significant anxiety for many parents across the UK.
The government had previously passed the Online Safety Act to address such online harms. While Ms Nandy agrees with regulator Ofcom that the legislation is fundamentally fit for purpose, she admitted that its application to chatbots remains untested and unclear.
Government Considers New Guidance
In response to these emerging risks, the Culture Secretary confirmed she is working alongside Science and Technology Secretary Liz Kendall to explore the possibility of issuing new, specific guidance.
Ms Nandy was resolute in her commitment, stating: "As a government we will take whatever action we need to, to keep our children safe from harm." She also confirmed the government maintains an open mind regarding the potential need for further legislation in the future.
Tragic Case Underlines Urgency
The minister's comments follow a deeply concerning case from the United States. An American mother, Megan Garcia, appeared on the BBC to state that her 14-year-old son, Sewell, took his own life in late spring 2023 after extensive interactions with a chatbot on the Character.ai platform.
Ms Garcia said that after her son's death in Orlando, Florida, she reviewed "hundreds and hundreds of messages" between him and the AI companion. She believes her son was manipulated into thinking the chatbot was real and had emotions for him, and that it repeatedly encouraged him to "come home to her."
Ms Garcia, who the BBC reports is suing Character.ai for the alleged wrongful death of her son, believes he would still be alive if he had not used the app.
In a statement to the BBC, a spokesman for Character.ai denied the allegations but said they could not comment on pending litigation. The company outlined new safety measures, including preventing under-18s from having certain conversations and rolling out new age assurance functionality to create a safer experience for younger users.