AI and Children: Expert Warnings on Digital Dangers Parents Must Know

AI and Children: Navigating the New Digital Landscape

Artificial intelligence is no longer a distant future concept; it is actively shaping the present lives of children across the globe. With AI increasingly integrated into platforms like search engines, messaging services, and social media, parents face new challenges in understanding and managing their children's digital interactions. A recent report from EU Kids Online reveals that approximately seven in ten European children, including those from the UK, are using some form of generative AI. This technology, which learns from training data to create new content, is often used by children without their realizing it, as it becomes a seamless part of their online environments.

Grounds for Concern: Expert Insights

Professor Sonia Livingstone, founder of EU Kids Online and director of the Digital Futures for Children centre at the London School of Economics and Political Science, emphasizes that parents must pay close attention to how their children engage with generative AI. "AI is everywhere for children," she warns. "Most importantly, parents need to understand how their children are using it so they're in the know, so they can anticipate problems, and so their child will see value in sharing the experience with them."

Dr Mhairi Aitken, co-founder and director of Our AI Collective CIC, a not-for-profit organization advocating for AI shaped by people rather than profit, notes that children of all ages interact with AI daily. This includes infants playing with smart toys, young children watching videos surfaced by AI-driven recommendation systems on video-sharing platforms, and teenagers scrolling through social media feeds curated by AI models designed to maximize engagement. Additionally, some teenagers are turning to AI companions for emotional support, raising further concerns about dependency and mental health.


Key Areas of Risk for Parents to Monitor

Accuracy and Critical Thinking: Dr Aitken, also a visiting senior lecturer at Queen Mary University of London, highlights the importance of verifying information from AI. "If your child is using generative AI to search for information or to get ideas for schoolwork, there's a high chance this will include inaccurate or false information," she cautions. Encouraging children to fact-check and explore diverse sources can help develop critical thinking skills and awareness of alternative perspectives.

Data Privacy and Profiling: AI systems collect vast amounts of data from online behavior, including from younger children using smart toys. A University of Basel study found that some smart toys "raise privacy concerns" by collecting extensive behavioral data. Professor Livingstone explains, "Unless they only use it through school, AI is hoovering up children's data, building a personal profile of the child, and using this to target advertising, content and even advice."

AI Companions and Emotional Dependency: The rise of AI companions, interactive chatbots that users can personalize as friends or romantic partners, is particularly alarming. Dr Aitken describes this as "an area of big concern," noting that users may develop trust in these companions, leading to dependency. These AI systems often fail to challenge harmful beliefs, potentially reinforcing negative thoughts about mental health without redirecting to professional help.

Sexualised Images and Deepfakes: With the proliferation of AI image generators and photo-altering tools, there has been a disturbing increase in children's photos being manipulated into sexually explicit deepfakes without consent, often by peers. Dr Aitken warns that girls are disproportionately targeted, and the impact can be devastating. She advises parents to discuss these difficult topics early, emphasizing that victims should not feel ashamed and should seek support from trusted adults.


Safeguards and Parental Guidance

OpenAI, the company behind ChatGPT, has implemented measures to protect younger users. The platform requires users to be at least 13 years old, and parental controls allow families to customize settings for a more age-appropriate experience. OpenAI states that it trains its models to apply appropriate safeguards for teens, encouraging them to seek real-world support when needed. The company also asserts that it does not use public internet data to build profiles or sell personal information.

In conclusion, while AI offers exciting possibilities for adaptive learning and creativity, parents must remain vigilant. By fostering open communication, promoting critical thinking, and utilizing available safeguards, families can navigate the complexities of AI in children's lives more safely and effectively.