AI Tools Spread Fake Train Attack Images, Child Obesity Stats Overstated

AI Tools Misidentify Fake Huntingdon Train Attack Images

Artificial intelligence systems from major tech companies have been spreading false information about the recent Huntingdon train stabbing incident, according to investigations by Full Fact, the UK's largest fact-checking charity.

Grok, the AI chatbot developed by Elon Musk's xAI, incorrectly assured users that an image depicting an injured man lying in a train carriage was genuine. The image contains clear indicators of AI generation: the police officers and paramedics it shows have garbled text on their uniforms, and the picture carries a distinct stylised filter.

Further analysis revealed the train seating arrangement differed significantly from the actual London North Eastern Railway train involved in the November 2025 incident. When challenged about the image's authenticity, Grok initially defended its position, stating: "No, this image appears to be a genuine photo capturing the aftermath of the stabbing incident on a Doncaster-to-London train stopped in Huntingdon, Cambridgeshire."

Separately, Google Lens AI overviews falsely claimed the same fabricated image was a "still from a BBC News report," linking to an unrelated article about the Huntingdon attack. The AI tool also misidentified an AI-generated video of a train confrontation as actual footage from the incident.

Childhood Obesity Statistics Significantly Overstated

New data obtained by Full Fact reveals that official childhood obesity figures have been substantially overstating the problem due to differing measurement thresholds.

The commonly cited statistic that one in five children leave primary school obese - repeated by NHS England and Health Secretary Wes Streeting - actually overcounts the number of clinically obese children by including tens of thousands who don't meet medical thresholds.

According to data obtained through Freedom of Information requests, the actual proportion of clinically obese children leaving primary school in 2023/24 was approximately one in seven, significantly lower than the official published figures.
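
To set the two figures side by side: if roughly one in five children leaving primary school is counted as obese in the published figures but only about one in seven meets the clinical threshold, a substantial minority of the children in the headline statistic are not clinically obese. The short sketch below makes that arithmetic explicit; it uses only the two rates reported above, and the resulting percentage is an illustrative rounding, not additional data.

    # Illustrative arithmetic only, using the two rates reported above.
    monitoring_rate = 1 / 5   # ~20%: counted as obese under NCMP monitoring thresholds
    clinical_rate = 1 / 7     # ~14.3%: meeting the clinical threshold used by doctors

    # Share of children in the published "obese" count who do not meet the
    # clinical definition.
    share_not_clinical = 1 - clinical_rate / monitoring_rate
    print(f"{share_not_clinical:.0%} of the counted children are not clinically obese")
    # -> about 29%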

The discrepancy arises from how the National Child Measurement Programme (NCMP) classifies obesity. The clinical threshold used by doctors classes a child as obese only if their BMI falls in the top 2% relative to reference data collected between 1978 and 1994, whereas the NCMP uses a broader population monitoring threshold that captures children in the top 5%.
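
A minimal sketch of the distinction, assuming the two bands act as simple BMI-centile cut-offs (the "top 2%" clinical and "top 5%" monitoring thresholds described above, read as the 98th and 95th centiles). The function and constant names are hypothetical; real NCMP classification uses age- and sex-specific growth-reference charts rather than a single cut-off.

    # Hypothetical illustration of the two thresholds described above.
    CLINICAL_CUTOFF = 98.0    # clinical definition: top 2% of the reference distribution
    MONITORING_CUTOFF = 95.0  # NCMP population monitoring: top 5%

    def classify(bmi_centile: float) -> dict:
        """Flag a child's BMI centile under both definitions of obesity."""
        return {
            "monitoring_obese": bmi_centile >= MONITORING_CUTOFF,
            "clinically_obese": bmi_centile >= CLINICAL_CUTOFF,
        }

    # A child at the 96th centile falls into the official "obese" count
    # but does not meet the clinical threshold used by doctors.
    print(classify(96.0))  # {'monitoring_obese': True, 'clinically_obese': False}
    print(classify(99.0))  # {'monitoring_obese': True, 'clinically_obese': True}

Children whose measurements fall between the two cut-offs are exactly those counted as obese in the official statistics without meeting the clinical definition.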

This methodological difference meant that in 2023/24, 20,642 children aged 4-5 and 41,936 children aged 10-11 who were not clinically obese were counted as obese in official statistics.

Transparency Concerns and Public Impact

The Department of Health and Social Care maintains that both measurement approaches are necessary, with population monitoring providing consistent trend data since the early 2000s. However, the lack of a clear distinction between clinical and monitoring figures has led to widespread misunderstanding of the actual scale of childhood obesity.

Although recent government publications explain the different categories more prominently, the 2024/25 data still does not show how many children fall into each classification, leaving the true clinical obesity rate unclear.

Meanwhile, the persistence of AI misinformation about serious incidents like the Huntingdon train attack highlights growing concerns about the reliability of automated information systems. Both Google and xAI have been contacted for comment regarding their AI tools' performance.

Google has directed users to report errors in AI overviews via the thumbs-down icon or the three-dot menu on each result. Together, the episodes underscore the ongoing challenges of managing AI-generated content and of ensuring transparency in official statistics.