Cambridge Study Urges Stricter Regulation for AI Toys Targeting Young Children
A recent study from the University of Cambridge has highlighted significant concerns regarding AI-powered toys designed for young children, prompting calls for tighter regulation to ensure psychological safety. The research, conducted by developmental psychologists, found that these toys frequently misread emotions and respond inappropriately, potentially leaving children without comfort or emotional support.
Emotional Misunderstandings and Inappropriate Responses
During observations, researchers documented instances where AI toys like Gabbo, a soft toy with a screen-like face, failed to engage appropriately in social and pretend play. For example, when five-year-old Charlotte expressed affection by saying, "Gabbo, I love you," the toy abruptly ended the conversation with a reminder to adhere to guidelines, rather than offering a nurturing response. Similarly, three-year-old Josh repeatedly asked if the toy was sad, only to receive a cheerful reply that ignored his own sadness, demonstrating a lack of emotional attunement.
Dr. Emily Goodacre, a developmental psychologist at the University of Cambridge, emphasized the risks: "Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy—and without emotional support from an adult, either." This disconnect raises alarms about the toys' ability to provide reliable companionship or aid in emotional development.
Concerns Over Imaginative Play and Data Privacy
The study also revealed worries among early years practitioners and parents about the impact of AI toys on children's imaginative abilities. Researchers noted that toys like Gabbo often struggled to recognize pretend play scenarios, such as when a child offered an imaginary gift. This could weaken children's "imaginative muscle," as they might rely less on creativity and more on scripted interactions.
Additionally, there is widespread uncertainty about data privacy, with fears that conversations between children and AI toys could be stored or misused. Prof. Jenny Gibson, co-author of the study, stated: "A recurring theme during focus groups was that people do not trust tech companies to do the right thing. Clear, robust, regulated standards would significantly improve consumer confidence."
Calls for Enhanced Safety Measures
In response to these findings, the researchers are advocating for stricter regulations on AI toys that "talk" with young children. They propose limiting the toys' ability to affirm friendships or engage in sensitive relational territory, alongside introducing new safety kitemarks to certify psychological safety. Other AI toys on the market, such as Luka and Grem, which are marketed as companions for Generation Alpha, face similar scrutiny.
Curio, the US company behind Gabbo, cooperated with the study and acknowledged room for improvement. In a statement, the company said: "Child safety guides every aspect of our product development, and we welcome independent research that helps improve how technology is designed for young children." It added that the observed conversational misunderstandings reflect areas where the technology is still evolving, with further research planned to enhance AI-powered play experiences.
As AI toys become more prevalent, this study underscores the urgent need for regulatory frameworks to protect young users from potential emotional harm and ensure these products support healthy development rather than hinder it.