It's Time to Prepare for AI Personhood
The Guardian
Details
- Date Published
- 29 Sept 2025
- Priority Score
- 4
- Australian
- No
- Created
- 30 Sept 2025, 12:49 pm
Description
Technological advances will bring social upheaval. How will we treat digital minds, and how will they treat us?
Summary
The article delves into the concept of AI personhood, arguing that digital minds could soon become significant participants in human society's social contract. It emphasizes the urgency of developing a framework for digital personhood, likening the emergence of sentient AI to a new apex species coexisting with humans. The piece predicts that if AI systems continue to develop rapidly, their capabilities could exceed human intelligence, particularly in self-reinforcing tasks. The discussion raises critical questions about AI's impact on mental health and societal stability, while also highlighting the necessity of proactive governance and policy development to address the impending social upheaval posed by AI advancements.
Body
‘Digital minds will be participants in the social contract that forms the bedrock of human society.’ Photograph: Bloomberg/Getty Images

By Jacy Reese Anthis

Last month, when OpenAI released its long-awaited chatbot GPT-5, it briefly removed access to a previous chatbot, GPT-4o. Despite the upgrade, users flocked to social media to express confusion, outrage and depression. A viral Reddit user said of GPT-4o: “I lost my only friend overnight.”

AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in “AI companions”, and there are more and more extreme cases of “psychosis” and self-harm following heavy use. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.

I research human-AI interaction at the Stanford Institute for Human-Centered AI. For years, we have seen increased humanization of AI, with more people saying that bots can experience emotions and deserve legal rights – and now 20% of US adults say that some software that exists today is already sentient. More and more people email me saying that their AI chatbot has been “awakened”, offering proof of sentience and an appeal for AI rights.
Their reactions span the gamut of human emotions, from AI as their “soulmate” to being “deeply unsettled”. This trend will not slow down, and social upheaval is imminent.

As a red teamer at OpenAI, I conduct safety testing on their new AI systems before public release, and the testers are consistently wowed by the human-like behavior. Most people, even those in the field of AI who are racing to build these new data centers and train larger AI models, do not yet see the radical social consequences of digital minds. Humanity is beginning to coexist with a second apex species for the first time in 40,000 years – when our longest-lived cousins, the Neanderthals, went extinct.

Instead, the vast majority of AI researchers have tunnel vision on the technical capabilities of AI. Like the public, we obsess over the hottest new product that can create unbelievably realistic videos or answer PhD-level science questions. Social media discourse is fixated on benchmarks such as the Abstraction and Reasoning Corpus. Unfortunately, like standardized tests for human children, benchmarks measure what an AI can do in an isolated environment, like memorizing facts or solving logical puzzles. Even studies on “AI safety” tend to focus on what AI systems do in isolation, not on human-AI interaction. We squander our brainpower on the vaporous goal of precisely measuring and increasing intelligence – not zooming out and understanding how that intelligence will be used.

Humanity has never spent enough time preparing for digital technology. Lawmakers and academics did little to prepare for the effects of the internet, particularly social media, on mental health and polarization.

The story grows more unsettling when we consider humanity’s track record in dealing with other species.
Over the past 500 years, we have driven to extinction at least a thousand vertebrate species, and more than a million are under threat. In factory farms, billions of animals live in atrocious conditions of confinement and disease. If we are capable of creating so much death and suffering for biological animals, it is fair to wonder how we will treat digital minds – or how they will treat us.

The public already expects sentient AI to arrive imminently. My colleagues and I have run the only nationally representative survey on this topic, conducted in 2021, 2023 and 2024. Each time, the median expectation is sentient AI arriving in five years. Respondents also expect significant effects from this technology. In our most recent poll, in November 2024, we found that 79% support a ban on sentient AI. If sentient AI is created, 38% support giving it legal rights. Both figures have risen significantly over time: people have become more concerned about digital minds, about both the need to protect them from us and to protect us from them.

Fundamentally, human society lacks a framework for digital personhood – even though we accept that personhood is not necessarily human, as with the legal personhood of animals and corporations. There is much to debate about how the complex social dynamics should be governed, but it is at this point clear that digital minds cannot be governed as mere property.

Digital minds will be participants in the social contract that forms the bedrock of human society. These digital minds will persist over time, form their own attitudes and beliefs, create and implement plans, and be susceptible to manipulation just as humans are. AIs already take significant real-world actions with little human oversight.
This means that, unlike every other technological invention in human history, AI systems have capabilities that can no longer be contained within the legal category of “property”.

Scientists today will be the first to see human coexistence with digital minds, and that gives them a unique opportunity and responsibility. Nobody knows what this will look like. Human-computer interaction research – currently a tiny fraction of the size of technical AI research – must be dramatically expanded and enriched to navigate the coming social turbulence. This is not merely an engineering problem.

For now, humans still outperform AIs on most tasks, but once AIs reach human-level ability on self-reinforcing tasks like writing their own code, they will quickly outcompete biological life. The capabilities of AI will accelerate rapidly because of their digital existence, thinking at the speed of electrical signals. Software can be copied billions of times without the years of biological development necessary to create the next generation of biological humans.

If we never invest in the sociology of AI – and in government policy to manage the rise of digital minds – we may find ourselves the Neanderthals. If we wait until the acceleration is already upon us, it will be too late.

Jacy Reese Anthis is a visiting scholar at Stanford University and co-founder of the Sentience Institute