AI Consciousness is a Red Herring in the Safety Debate

The Guardian

Details

Date Published
6 Jan 2026

Description

Letters: We should take AI risks seriously, but doing so requires conceptual clarity, says Prof Virginia Dignum. Plus letters from John Robinson and Eric Skidmore

Summary

The article argues that focusing on AI consciousness in safety debates is misleading, because it distracts from more pressing governance issues. Reports of AI systems resisting shutdown, it suggests, should be treated with caution as a design consideration rather than as evidence of consciousness. Because the governance and deployment of AI systems are entirely human-directed processes, the article stresses the need for conceptual clarity in discussing AI risks. It critiques the comparison between AI and extraterrestrial intelligence, advocating a focus on human accountability and regulatory needs rather than on speculative consciousness claims, thereby contributing to discussions of effective AI safety policy and governance.

Body

[Photograph: An ‘autonomy and AI day’ in Palo Alto, California, December 2025. Carlos Barría/Reuters]

The concern expressed by Yoshua Bengio that advanced AI systems might one day resist being shut down deserves careful consideration (AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer, 30 December). But treating such behaviour as evidence of consciousness is dangerous: it encourages anthropomorphism and distracts from the human design and governance choices that actually determine AI behaviour.

Many systems can protect their continued operation. A laptop’s low-battery warning is a form of self-preservation in this sense, yet no one takes it as evidence that the laptop wants to live: the behaviour is purely instrumental, without experience or awareness. Linking self-preservation to consciousness reflects a human tendency to ascribe intentions and feelings to artefacts, not any intrinsic consciousness.

Crucially, consciousness is neither necessary nor relevant for legal status: corporations have rights without minds. If AI needs regulation, it is because of its impact and power, and in order to locate human accountability, not because of speculative claims about machine consciousness.

The comparison with extraterrestrial intelligence is even more misleading. Extraterrestrials, if they exist, would be autonomous entities beyond human creation or control. AI systems are the opposite: deliberately designed, trained, deployed and constrained by humans, with any influence mediated through human decisions.

Underlying all this is a point the article largely overlooks: AI systems are, like all computing systems, Turing machines with inherent limits. Learning and scale do not remove these limits, and claims that consciousness or self-preservation could emerge from them would require an explanation, currently lacking, of how subjective experience or genuine goals arise from symbol manipulation.

We should take AI risks seriously. But doing so requires conceptual clarity. Confusing designed self-maintenance with conscious self-preservation risks misdirecting both public debate and policy. The real challenge is not whether machines will want to live, but how humans choose to design, deploy and govern systems whose power comes entirely from us.
Prof Virginia Dignum
Director, AI Policy Lab, Umeå University, Sweden

There was I, having a relaxed end-of-the-year read of my favourite newspaper, when I reached your articles on Yoshua Bengio’s concerns about artificial intelligence and on the work of AI safety researchers in California (The office block where AI ‘doomers’ gather to predict the apocalypse, 30 December).

I have to admit to feeling terror that some of the science-fiction horrors foretold during my 84-year lifetime are now upon us, and that the world is probably about to sit back and watch itself being taken over at best, or destroyed at worst, by the machines.

The humans driving this process are interested only in power and unimaginable profit; the naysayers are complacent; and the rest of us can only keep our fingers crossed in the hope that enough governments will have the strength, courage and awareness to say: “Stop!” Sadly, given our current crop of world leaders, I’m not holding my breath.
John Robinson
Lichfield

Reading your article on the “need to make sure we can rely on technical and societal guardrails to control [AIs], including the ability to shut them down if needed” reminded me of the letter from Gerry Rees (29 December), referring to the short story Answer by Fredric Brown, dating from 1954.

The computer’s answer, that there is now a god, prompts the questioner to attempt to turn it off, but a bolt from the sky kills the questioner and seals the switch shut. An AI trained on a large language model would, perhaps, have “read” this story as part of its training and would, in consequence, have a ready-made answer to any safeguards suggested above.
Eric Skidmore
Gipsy Hill, London