Pentagon–Anthropic Brawl Demands Rethink of AI Industry
The Strategist
Details
- Date Published: 2 Mar 2026
- Priority Score: 4
- Australian: Unknown
- Created: 3 Mar 2026, 06:00 am
Description
Imagine we found a way to build gods—or demons. Would we want private companies to have sole responsibility and control over the almighty? Imagine the workload on their legal teams. Fine, they’re dramatic questions, but ...
Summary
This article critically examines the implications of the dispute between the Pentagon and Anthropic, arguing that it highlights a fundamental challenge in who should control powerful AI technologies. It posits that the growing capabilities of AI, which could soon underpin global strategic power and governance, necessitate a reevaluation of the balance between private companies and the state. The author suggests that current models, where companies develop and governments regulate, are insufficient for AI due to its potential for self-improvement and the concentration of power it could create. The piece advocates for a more integrated approach, possibly involving public-private partnerships or even nationalization, to ensure AI development aligns with public interest and accountability, drawing on statements from tech leaders who themselves are raising these profound questions.
Body
Imagine we found a way to build gods—or demons. Would we want private companies to have sole responsibility and control over the almighty? Imagine the workload on their legal teams.

Fine, they’re dramatic questions, but they’re pressing ones after the past week’s blow-up between the Pentagon and artificial intelligence company Anthropic, which ended in severe penalties for the AI lab.

This fight was about much more than one company’s right to veto two narrow uses of its models—fully autonomous lethal strike and domestic mass surveillance—by the US military. It’s about who controls technology that will increasingly wield enormous power over human lives, not just in military settings but across every realm.

The balance we accept as natural between the private sector and the state will need to evolve for the AI era ahead—perhaps into new chimeric partnerships. It’s a conversation for all of us because the Pentagon–Anthropic stoush hasn’t been an edifying first swing at the problem.

The gods-and-demons allusion plays on the AI industry’s fondness for quasi-religious language to capture the magnitude of its ambitions to build machines smarter than people. Analogies aside, the technology’s power is objectively growing such that it could soon provide the foundations to meet our material needs, determine global strategic power and deliver governance, services and administration more effectively than a government run by people.

So, who should control an effective superpower?

In a familiar story, the industry is ahead of the public on the discussion.
OpenAI boss Sam Altman called over the weekend for a debate about ‘whether we should prefer a democratically elected government or unelected private companies to have more power’ over advanced AI.

There’ll be some cynicism about Altman’s posts, because OpenAI has just signed a US defence contract while, confusingly, claiming the same safeguards that Anthropic was seeking and that the Pentagon rejected. But Altman deserves credit for a thoughtful take that even raised the prospect of nationalisation and concluded that governments and companies may need closer partnerships, given the importance of the technology.

‘It has seemed to me for a long time it might be better if building AGI [artificial general intelligence] were a government project,’ he wrote.

Nationalisation is a scary word, but the alternatives are hardly reassuring. Let’s say we maintain roughly the present balance between the private sector and the state. Industry would build AI systems that are sold in a competitive market, with no company dominating, while governments regulate only as much as necessary.

That’s fine for most products, but AI has aptitudes in coding that put it on track to improve itself and perhaps even to build its own successor. A small advantage to one company could multiply, enabling it to streak ahead and grab an insurmountable lead. Even if no monopoly emerges, an oligopoly seems an unacceptable concentration of power.

Of course, private companies can be socially responsible. But they are not conceived under our current system to take complete ownership of such consequential change to humanity. They first and foremost compete in a market by making products they sell to customers, generate profits, deliver returns to investors and, along the way, observe regulations set by governments.

A state, by contrast, exists to serve all of its people and—at least for democratic states—is accountable to those people.
A state has a legitimacy that a company, as we currently think of it, cannot match.

The Pentagon’s position, which amounts to ‘You make it, and we’ll decide how to use it’, is defensible at the present level of the technology and for specific uses in which the military or any other arm of the state is accountable to civilian democratic oversight. But it doesn’t scale—as Silicon Valley would say. As AI becomes integral not just to warfare but to all forms of security and civil life, the stakes for people are going to become too high for governments to act only as customers and occasional regulators.

The concerns about autonomous lethal weapons alone are enough to show this is not just any commercial tool. It is already capable of deciding to target and kill a human being. That’s not a tool; it’s an agent, and its capacity for power over our lives will only grow. How many other decisions and actions will future AI systems take that affect us, whether in government or commerce?

Again, listen to the industry. Jack Clark, Anthropic co-founder and head of policy, told the Ezra Klein Show podcast last week that the stated goals of the major AI companies were ‘to build the most capable technology ever, which eventually gets deployed everywhere’.

He continued: ‘Eventually AI becomes indistinguishable from the world writ large.’

Clark embraced the idea that governments, academia and society overall have a stake and a right to say how AI is used. It’s an overture we cannot afford to ignore.