Experts Demand Ofcom Probe into AI's Role in Fake News After Southport Murders

Experts are urging Ofcom, the communications regulator, to investigate the role of artificial intelligence in spreading fake news during major incidents, after research revealed that AI-generated misinformation helped inject divisive falsehoods into public discourse following the Southport murders.

AI-Driven Misinformation for Financial Gain

Researchers at the Alan Turing Institute’s Centre for Emerging Technology and Security discovered that AI software was used to propagate fake news after the Southport murders, primarily to generate income for social media users. They recommend that Ofcom address this issue during its upcoming consultation on fraudulent advertising, scheduled for this summer.

A report published on Wednesday found that Channel3Now, a website that initially published a false name for the suspect, was established using a service provider that "markets itself as using AI to generate content for users seeking passive income". The report also revealed AI was employed to repackage articles, making them appear more credible.


"This evidence suggests that AI-generated misinformation, with minimal human editorial oversight and monetised through digital ad networks, played a role in injecting divisive falsehoods into the public discourse following the Southport murders," the report stated.

Widespread AI-Generated News Sites and Sensationalism

Current research has identified 2,089 AI-generated "news" sites operating across 16 languages, many with "little to no human oversight". The report warns that AI tools generating content based on trending topics, optimised for sensationalism and virality rather than factual accuracy, could have an outsized impact relative to the effort required.

"This suggests that much more focus is needed on undercutting the financial incentives behind advertising networks, which may inadvertently encourage the spread of harmful content," the report added.

Recommendations for AI Chatbots and Crisis Response

The report further recommends that AI chatbots should automatically flag their fact-checking limitations, particularly in the aftermath of major incidents. It cited instances such as Grok incorrectly labelling a Metropolitan Police video of a Unite the Kingdom protest as fake, a false claim that garnered two million views. Grok also erroneously identified a deepfake image of the Bondi Beach shooting as authentic.

To mitigate such errors, the report suggests chatbots should display a pop-up warning users that results cannot be relied upon while an incident is still unfolding. It also calls on the Government to:

  • Establish a crisis response plan for events where an AI "information threat" emerges
  • Issue fact-checking guidance to schools, universities, and the wider public via social media

Expert Commentary on AI Threats and Democratic Resilience

Sam Stockwell, a senior research associate at the Alan Turing Institute’s Centre for Emerging Technology and Security, commented: "Crisis events are unpredictable and volatile scenarios. Combined with a poorly understood AI threat landscape, this means that we are not currently equipped to deal with this growing threat to public safety. Yet while we need to address the critical risks associated with AI tools in this context, we must also recognise that the same technology can help to strengthen democratic resilience in times of crisis. Actioning the recommendations outlined in this report will go a long way in demonstrating that the UK can protect the public against these threats in the event of future AI-driven incidents."

The findings highlight the urgent need for regulatory action and public awareness to combat the spread of AI-generated misinformation, especially during sensitive events that can fuel social division and unrest.
