South Africa has withdrawn its draft national artificial intelligence policy after discovering that portions of it were generated by AI, including fabricated academic references. Communications Minister Solly Malatsi announced the withdrawal following revelations that at least six of the document's 67 academic citations were AI-generated hallucinations, citing journal articles that do not exist.
Minister Admits Lapse in Oversight
"The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened," Mr Malatsi wrote on X. "This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy."
The draft policy was opened for public consultation and aimed to position South Africa as a leader in AI innovation while addressing ethical, social, and economic challenges. It proposed establishing new institutions, including a national AI commission, an AI ethics board, and an AI regulatory authority.
Plans for Tax Breaks and Subsidies
The document also outlined tax breaks, grants, and subsidies to encourage private-sector collaboration in building AI infrastructure. The policy is expected to be revised before being reissued for public comment.
The issue came to light when South Africa's News24 reported that at least six citations referred to articles that do not exist, even though the journals named were real. Editors of the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed that the cited articles were fabricated.
Consequences for Drafters
Mr Malatsi warned there would be consequences for those responsible. "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It's a lesson we take with humility," he wrote on X.
This incident highlights the growing issue of academics and administrators using generative AI for research and drafting without verification. A study in the journal Nature found that over 2.5 per cent of academic papers published in 2025 contained at least one potentially hallucinated citation, up from 0.3 per cent in 2024. That equates to over 110,000 papers with invalid references generated by AI.
How AI Hallucinations Occur
Large language models like OpenAI's ChatGPT and Google's Gemini are designed to predict likely words, not verify truth. When data is lacking, the AI fills gaps with plausible-sounding but incorrect information. This underscores the need for careful human oversight of AI outputs, especially in academic and governmental contexts.
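The mechanism can be illustrated with a toy model. The sketch below is not any real system; it builds a tiny bigram predictor over a handful of invented citation-like strings and then greedily emits the most statistically likely next word. The result looks like a citation but matches nothing in its training data, which is the same failure mode, in miniature, behind hallucinated references.

```python
from collections import defaultdict, Counter

# Invented, citation-like training lines (all names and journals here
# are made up for the illustration).
corpus = [
    "Smith J 2021 Ethics of AI Journal of Machine Ethics",
    "Jones A 2022 Governance of AI Journal of Machine Ethics",
    "Smith J 2022 Governance of Algorithms Journal of Digital Policy",
]

# Count which word tends to follow which (a bigram model).
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def most_likely_continuation(start, length=8):
    """Greedily follow the most frequent next word.

    Every step picks a word that is plausible given the previous one,
    but nothing checks whether the overall string refers to anything real.
    """
    out = [start]
    for _ in range(length):
        nxt = bigrams.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

generated = most_likely_continuation("Smith")
print(generated)
```

Every word in the output comes from the training data, and each transition is locally probable, yet the assembled "citation" appears nowhere in the corpus: plausibility without truth.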