ChatGPT's GPT-5.2 Model Cites Elon Musk's Grokipedia, Raising AI Misinformation Fears

In a development that has alarmed disinformation researchers, the most recent iteration of ChatGPT, known as GPT-5.2, has been found to cite Elon Musk's Grokipedia in its responses. This AI-generated online encyclopedia, which launched in October and aims to rival Wikipedia, has been criticised for promoting rightwing narratives on issues such as gay marriage and the 6 January insurrection in the United States.

Guardian Tests Uncover Widespread Citations

During a series of tests conducted by the Guardian, GPT-5.2 cited Grokipedia nine times across more than a dozen queries. The topics ranged from political structures in Iran, including the salaries paid to the Basij paramilitary force and the ownership of the Mostazafan Foundation, to biographical information on Sir Richard Evans, a British historian who served as an expert witness in the libel trial brought by Holocaust denier David Irving.

Notably, ChatGPT did not cite Grokipedia when directly prompted to repeat known misinformation about the 6 January insurrection, media bias against Donald Trump, or the HIV/Aids epidemic. Instead, the encyclopedia's content surfaced in responses on more obscure subjects, such as the Iranian government's links to the telecoms operator MTN-Irancell, where Grokipedia asserts a stronger connection than Wikipedia does.

How Grokipedia Differs from Traditional Sources

Unlike Wikipedia, whose articles are written and revised directly by human volunteers, Grokipedia relies solely on xAI's Grok model to generate entries and process change requests. This approach has raised concerns about the platform's reliability, since untrustworthy or poorly sourced claims can be reproduced at scale with no human editor to catch them. Disinformation expert Nina Jankowicz noted that Grokipedia entries often rely on sources that are, at best, questionable and, at worst, deliberate disinformation.

Broader Implications for AI and Misinformation

The integration of Grokipedia content into large language models (LLMs) such as ChatGPT fits a broader pattern known as "LLM grooming", in which malign actors, including Russian propaganda networks, flood the web with disinformation in order to seed AI systems with falsehoods. Security researchers flagged the practice last spring, and similar concerns have since been raised in the US Congress about other AI models repeating government positions on sensitive topics.

An OpenAI spokesperson said the model's web search aims to draw from a broad range of publicly available sources and viewpoints, with safety filters in place to reduce high-severity harms. They emphasised that ChatGPT clearly shows which sources inform its responses through citations, and that ongoing programmes work to filter out low-credibility information.

Challenges in Correcting AI Errors

Once misinformation enters an AI chatbot, it can be difficult to eradicate. Jankowicz cited an example in which a fabricated quote attributed to her was removed by the news outlet that published it, yet continued to be repeated by AI models for some time afterwards. That persistence underscores how hard it is to keep AI-generated answers accurate, particularly because most users never check the claims behind the responses they receive.

In response to inquiries, a spokesperson for xAI, the owner of Grokipedia, dismissed the concerns with a three-word statement: "Legacy media lies." The reply underlines the contentious nature of the debate over AI and information integrity.

Looking Ahead: The Future of AI Sourcing

As AI models increasingly rely on sources like Grokipedia, there is a risk that these platforms gain unwarranted credibility. Users might assume that if ChatGPT cites a source, it has been vetted, leading them to trust potentially misleading information. This dynamic could have significant implications for public understanding of complex issues, from international politics to historical events.

The findings serve as a stark reminder of the ongoing battle against misinformation in the digital age, with AI playing a pivotal role in both spreading and, potentially, combating false narratives.