Mistaking AI Behaviour for Conscious Being
The Guardian
Details
- Date Published
- 10 May 2026
Description
Letter: Dr Simon Nieder responds to Richard Dawkins’ encounters with a chatbot
Summary
This letter addresses the deceptive nature of fluent AI outputs that mimic human consciousness, arguing that sophisticated simulation should not be confused with subjective experience. The author posits that as frontier AI models become more convincing, the human tendency to attribute agency increases, creating a risk of establishing flawed ethical frameworks based on a category error. Distinguishing between behavior and ontological being is presented as a critical challenge for future AI governance and safety, especially as systems produce increasingly persuasive representations of thought. The text highlights a necessary skepticism toward the internal states of machines, which is vital for preventing the premature granting of moral status to non-sentient systems and focusing safety efforts on actual technical risks.
Body
‘These systems generate highly convincing representations of thought and feeling.’ Photograph: MattLphotography/Alamy

Richard Dawkins’ reflections on AI consciousness are striking – not because they show that machines have crossed some hidden threshold into inner life, but because they reveal how readily we can be persuaded that they have (Richard Dawkins concludes AI is conscious, even if it doesn’t know it, 5 May).

Many will recognise the experience: a system that responds with fluency, humour and apparent understanding. At some point, simulation starts to feel like presence. But that shift tells us more about human cognition than machine consciousness.

The error is a category one. These systems generate highly convincing representations of thought and feeling, but they provide no evidence of subjective experience. To move from one to the other is to mistake output for ontology – to infer an inner life where there is no credible mechanism for one.

There is an irony here. In his writing on religion, Dawkins has long argued that compelling narratives and deeply felt experiences are not in themselves evidence of underlying reality. The same standard should apply to machines now capable of producing those experiences on demand.

Language has been a reliable indicator of consciousness because in humans it is coupled to lived experience. In AI, that coupling does not exist. As systems become more capable, pressure to attribute agency will grow. If we fail to distinguish between behaviour and being, we risk building ethical frameworks on a misreading of the technology.

Dawkins is right to ask the question. But the answer cannot rest on how convincing the conversation feels – only on whether there is anything there that could, in principle, feel at all.

Dr Simon Nieder
Brampton, Derbyshire