We Must Take Control of AI Now, Before It’s Too Late
The Guardian
Details
- Date Published
- 29 Dec 2025
Description
Letters: Anja Cradden proposes ways of managing tech companies before we reach crisis point. Plus letters from Mike Scott and Gerry Rees
Summary
This series of letters in The Guardian underscores the urgency of establishing governance structures for AI before the technology becomes uncontrollable. Contributors Anja Cradden and Mike Scott argue for preemptive measures to stop tech companies exacerbating social inequalities, drawing a parallel with the 2008 financial crisis. The letters highlight the potential existential risks of unregulated AI, warning that the current pace of development could outstrip our ability to impose the necessary controls. They propose alternatives, such as coordinated government intervention, to avert further concentration of wealth and to direct AI's potential toward the public good, and stress the need for robust regulatory frameworks to mitigate catastrophic risks.
Body
‘In the foreseeable future, AI will certainly be able to sabotage attempts to close down or redirect it, and by then it will be too late.’ Photograph: Matt L Photography/Alamy

“When the AI bubble bursts, humans will finally have their chance to take back control”, says the headline on Rafael Behr’s article (23 December). I think it’s more likely that when the AI bubble bursts, the creators of the crisis, along with other wealthy economic actors, will be in the rooms with the politicians telling them how to “rescue” us all by transferring wealth in some way from average citizens to the already extremely wealthy. Just as they did during the financial crisis of 2008.

We need to be ready with alternative plans. For example, world governments could coordinate to buy, for suitably low prices, majority shares in any crashing tech company that actually produces something useful, ensuring that those shares come with full voting rights.

Governments could then, acting as majority shareholders, instruct these monopolies to divide themselves back into national companies, paying local rates of taxation in one single country for all their activities and obeying all local content and copyright laws.

Governments could spend money on infrastructure and wages in any area of these companies that is actually useful, and sell the shares for a profit once they are making money again.

That’s just one idea. Another might be that we simply shut them all down, conserve our power and water for human beings, and close down or refuse to build the datacentres.
We should be generating lots of ideas so that, when the time comes, nobody can say “there is no alternative” to the plans that will be proposed behind closed doors by the super-rich and then presented to the rest of us as a fait accompli. Anyone with thoughts on this matter needs to make it clear, as a matter of urgency, that there are plenty of possible plans that don’t just transfer more wealth to the super-rich.
Anja Cradden
Edinburgh

Rafael Behr is right to be concerned about the rapid development of AI, but he seems to be suggesting that we all just wait until the bubble bursts before taking any action to control it. Apart from the potentially catastrophic impact on jobs, the sort of issues raised by an article you published earlier this month (‘It’s going much too fast’: the inside story of the race to create the ultimate AI, 1 December) mean that we can’t assume we’ll get the chance.

In the foreseeable future, AI will certainly be able to sabotage attempts to close down or redirect it, and by then it will be too late. Insiders are already worrying that AI could herald the end of the road for humanity, and we must begin the fight to control it now, before it begins to control us.
Mike Scott
Nottingham

Reading Rafael Behr’s piece on AI reminded me of a short story by the American author Fredric Brown. Scientists build a supercomputer and ask it the final question: “Is there a God?” The computer cogitates and answers: “There is now.” Prescient science fiction or what?
Gerry Rees
Worcester