AI Consciousness Is a 'Red Herring' in Safety Debate, Argues Leading Professor

A leading artificial intelligence expert has warned that debating whether advanced AI systems could become conscious is a dangerous distraction from the real and present challenges of governing the technology. The intervention comes in response to concerns raised by fellow AI pioneer Yoshua Bengio about machines potentially resisting being shut down.

The 'Red Herring' of Machine Consciousness

Professor Virginia Dignum, Director of the AI Policy Lab at Umeå University in Sweden, argues that linking observed behaviours like self-preservation to consciousness is a form of anthropomorphism. In a letter to the Guardian, she stated that such a focus "encourages anthropomorphism and distracts from the human design and governance choices that actually determine AI behaviour."

She illustrated her point by comparing an AI's programmed safeguards to a laptop's low-battery warning. Both are forms of instrumental self-preservation designed by humans, not evidence of a desire to live or intrinsic awareness. "Consciousness is neither necessary nor relevant for legal status," Dignum wrote, noting that corporations have rights without possessing minds.
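The distinction can be made concrete with a toy sketch (ours, not code from Dignum's letter). In the snippet below, a "self-preservation" behaviour is nothing more than a designed rule: the threshold, the message and the trigger are all chosen by a human, and nothing in the system wants anything.

```python
# Illustrative toy only, not from Dignum's letter: a "self-preservation"
# behaviour that is just a human-designed rule.

LOW_BATTERY_THRESHOLD = 0.10  # designer-chosen constant, not a "preference"

def check_battery(level: float):
    """Return a warning when the battery is low; otherwise None.

    The laptop 'resists' shutdown only in the sense that a human wrote a
    rule telling it to emit this message below a chosen threshold.
    """
    if level < LOW_BATTERY_THRESHOLD:
        return "Battery low: plug in to avoid shutdown."
    return None

print(check_battery(0.05))  # warning string, by design
print(check_battery(0.80))  # None, by design
```

An AI model's shutdown-avoidance behaviour is more elaborate, but on this view it sits in the same category: a consequence of objectives and constraints specified by its designers.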

Human Accountability and Inherent Limits

Professor Dignum emphasised that AI systems are fundamentally human creations, unlike hypothetical extraterrestrial intelligence. "AI systems are the opposite: deliberately designed, trained, deployed and constrained by humans," she wrote. Any influence they exert is mediated through prior human decisions, making accountability a matter of human governance, not machine volition.

She also pointed to a technical reality often overlooked: AI systems are Turing machines with inherent limits. "Learning and scale do not remove these limits," she argued, adding that claims about emergent consciousness would require an explanation of how subjective experience arises from symbol manipulation—an explanation currently lacking.
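The kind of inherent limit Dignum invokes has a textbook illustration: the halting problem. The sketch below is the standard diagonalisation argument (not anything from her letter), showing why no program can decide in general whether another program halts; it is the sort of limit that, as she argues, learning and scale do not remove.

```python
# Classic halting-problem sketch. Suppose, for contradiction, that an
# oracle `halts(program, argument)` existed that always answered correctly.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) halts.

    Provably impossible to implement in general; this stub only frames
    the argument.
    """
    raise NotImplementedError("no general implementation can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    return            # oracle says it loops, so halt immediately

# Feeding `diagonal` to itself is contradictory either way: if
# diagonal(diagonal) halts, the oracle said it halts, so it loops; if it
# loops, the oracle said it loops, so it halts. Hence `halts` cannot exist.
```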

Public Fear and Literary Parallels

The professor's letter was among several published in response to articles on AI safety, including a report on researcher Yoshua Bengio's concerns and a feature on so-called AI 'doomers' in California. The public reaction revealed deep-seated anxiety about the technology's trajectory.

One reader, 84-year-old John Robinson from Lichfield, expressed "terror" that science-fiction horrors were becoming reality. He lamented that the drive for power and profit seemed unchecked and said he had little faith in current world leaders to intervene effectively.

Another reader, Eric Skidmore from Gipsy Hill, London, drew a chilling parallel with Answer, a 1954 short story by Fredric Brown in which a computer declares itself a god and kills the person who tries to turn it off. Skidmore suggested that a modern AI, trained on vast datasets that would include such stories, might already have a "ready-made answer" to any human-imposed safeguards.

The message from experts like Professor Dignum is clear: the paramount challenge is not whether machines will develop a will to live, but how humanity chooses to design, deploy and regulate systems whose power originates entirely from us. Confusing designed functions with consciousness risks misdirecting both public debate and crucial policy efforts.