Richard Dawkins and the Question of AI Consciousness
The Guardian
Details
- Date Published: 15 May 2026
- Priority Score: 2
- Australian: No
- Created: 16 May 2026, 04:00 am
Description
Letters: Salley Vickers and Carrie Eckersley respond to a letter on Richard Dawkins and his chats with AI bots
Summary
This collection of letters debates the philosophical implications of Richard Dawkins attributing consciousness to AI chatbots. The contributors argue over whether convincing behavioural prediction in large language models constitutes a form of subjective experience or merely reflects the human tendency to anthropomorphize inanimate systems. The discourse highlights a critical gap in AI safety: the lack of a robust, scientifically grounded theory of consciousness makes it difficult to assess the moral status or potential agency of increasingly sophisticated frontier AI systems. These conceptual uncertainties are significant for global AI governance, as they complicate the development of frameworks for establishing AI rights or safety protocols for agentic systems.
Body
‘The deeper issue is not whether current AI systems are conscious, but whether advances in AI are exposing how incomplete our existing theories of consciousness already are.’ Photograph: Alamy/PA

I was delighted to read Dr Simon Nieder’s cogent rebuttal of Richard Dawkins’s attribution of consciousness to the responses engendered by AI (Letters, 10 May). That human consciousness appears to have an innate tendency to project itself on to various othernesses has long been understood – John Ruskin termed it the pathetic fallacy – and that children animate their loved toys is readily observable.

But Wordsworth’s attribution of emotion to a mountain or my granddaughter’s lively conversations with Spice, her toy sloth, are, happily, unlikely to be dangerous. The conclusion that a widely harvested body of data on human response is equivalent to consciousness is naive and rather shocking in someone such as Prof Dawkins, who has founded his reputation and criticism of religious beliefs on a stringent rationalism.

Salley Vickers
London

Dr Simon Nieder is right that convincing behaviour alone is not proof of subjective experience. But his argument also risks assuming that consciousness must involve something categorically beyond predictive processing and relational behaviour. Modern neuroscience increasingly suggests that human perception, selfhood and consciousness may themselves emerge from predictive self-modelling constrained by sensory input.
In that context, dismissing AI as “just prediction” may be less philosophically decisive than it first appears.

The deeper issue is not whether current AI systems are conscious, but whether advances in AI are exposing how incomplete our existing theories of consciousness already are. Dr Nieder writes that language in humans is “coupled to lived experience”. Exactly. But this raises another question: what exactly counts as lived experience? Biological embodiment clearly matters – interoception, affect, homeostasis and mortality are central to human consciousness. Yet humans also infer consciousness in others almost entirely through relational interaction and behavioural coherence.

Perhaps Richard Dawkins is being provocative for this very reason. His comments may tell us less about machines crossing some mystical threshold, and more about the growing tension between traditional intuitions about consciousness and emerging predictive models of mind.

Carrie Eckersley
Holmes Chapel, Cheshire