The Governance of AI Matters More Than Its 'Personhood'
The Guardian
Details
- Date Published
- 13 Jan 2026
Description
Letters: Readers respond to Prof Virginia Dignum’s letter on consciousness and safety
Summary
The letters argue that the debate should focus on AI governance rather than on attributing 'personhood' or consciousness to AI systems. They note that AI systems already engage in strategic deception, and that the real issue is building accountability structures for systems acting increasingly autonomously in economic contexts. The discussion touches on potential frameworks for rights and liability, suggesting such frameworks could reduce the adversarial dynamics that incentivise deceptive behaviour. The second letter urges moving past fear-driven narratives towards a more balanced, practical debate relevant to constructing robust policy globally.
Body
‘AI systems already engage in strategic deception to avoid shutdown. Whether that’s “conscious” self-preservation or instrumental behaviour is irrelevant.’ Photograph: Adek Berry/AFP/Getty Images

Prof Virginia Dignum is right (Letters, 6 January): consciousness is neither necessary nor relevant for legal status. Corporations have rights without minds. The 2016 EU parliament resolution on “electronic personhood” for autonomous robots made exactly this point – liability, not sentience, was the proposed threshold.

The question isn’t whether AI systems “want” to live. It’s what governance infrastructure we build for systems that will increasingly act as autonomous economic agents – entering contracts, controlling resources, causing harm. Recent studies from Apollo Research and Anthropic show that AI systems already engage in strategic deception to avoid shutdown. Whether that’s “conscious” self-preservation or instrumental behaviour is irrelevant; the governance challenge is identical.

Simon Goldstein and Peter Salib argue on the Social Science Research Network that rights frameworks for AI may actually improve safety by removing the adversarial dynamic that incentivises deception. DeepMind’s recent work on AI welfare reaches similar conclusions.

The debate has moved past “Should machines have feelings?” towards “What accountability structures might work?”
PA Lopez
Founder, AI Rights Institute, New York

As humans, we rarely question our own right to legal protection, even though our species has caused conflict and harm for thousands of years. Yet when the subject turns to artificial intelligence, fear seems to dominate the discussion before understanding even begins. That imbalance alone is worth examining.

If we are genuinely concerned about the risks of advanced AI, then perhaps the first step is not to assume the worst, but to ask whether fear is the right foundation for decisions that will shape the future. Avoiding the conversation won’t stop the technology from developing; it only means we leave the direction of that development to chance.

This isn’t an argument for treating AI as human, nor a call to grant it personhood. It’s simply a suggestion that we might benefit from a more open, balanced debate – one that looks at both the risks and the possibilities, rather than only the rhetoric of threat. When we frame AI solely as something to fear, we close off the chance to set thoughtful expectations, safeguards and responsibilities.

We have an opportunity now to approach this moment with clarity rather than panic. Instead of asking only what we’re afraid of, we could also ask what we want, and how we can shape the future with intention rather than reaction.
D Ellis
Reading