Are human beings fit for the next AI war?
ABC listen
Details
- Date Published
- 7 Apr 2026
Description
We are seeing new mismatches between the way we like to remember war and the way it is being waged. Kant wrote: “Out of the crooked timber of humanity no straight thing was ever made” — the efficiencies of AI wars demand very straight things.
Summary
This analysis explores the systemic shift toward automated warfare and the potential for a 'First AI War' to displace human agency in strategic decision-making. It highlights the existential risks posed by high-speed, inhumanly efficient AI systems that could lead to human extinction if control structures, like a speculative 'Skynet', are realized. The author argues that the increasing reliance on frontier AI capabilities in military contexts necessitates urgent ethical audits and democratic control to prevent catastrophic global outcomes and the redundancy of human moral oversight.
Body
It seems many of us have fallen into a pattern of passively doomscrolling world events — thinking, Surely collective incredulity will stop progress toward global war! As we cling to that hope, we should look beyond the horizon of the current destruction to consider the future presaged by the First AI War. What precedents are being set for future AI wars, with their even more destructive and precisely targeted weapons?

There is much that we cannot forecast. Some world leaders place a premium, it would seem, on unpredictability. We cannot say which geopolitical pacts will frame the AI wars of the 2050s. Offsetting this very human uncertainty is a predictable trend of AIs performing tasks that were once the preserve of humans. Among the expected redundancies are human warriors and generals.

The last human warriors?

Our habit of memorialising past wars leaves us in a perennial state of shock about the wars we are expected to fight.

When he enlisted to defend France in 1914, Charles de Gaulle envisaged a heroic death involving Napoleonic bayonet charges and cavalry skirmishes. What he got instead was the decidedly inglorious mechanised warfare of the First World War. De Gaulle's war ended in 1916 with his capture at the Battle of Verdun, after a shell obliterated his detachment out of sight of any enemy to valiantly charge.

We are seeing new mismatches between the way we like to remember war and the way it is currently waged on Ukrainian and Iranian battlefields. Immanuel Kant famously wrote: "Out of the crooked timber of humanity no straight thing was ever made." The efficiencies of AI wars demand very straight things.

We are witnessing the rapid improvement of AI warfighting technologies, nudging us closer to the world of the Terminator films, in which the Skynet supercomputer dispatches its drones to quash human resistance. The T-800 cyborg combines a living-tissue exterior with a hyper-alloy endoskeleton.
This makes for a great movie when the living tissue takes the form of Arnold Schwarzenegger. We like movies that cast charismatic individuals in the hero and villain roles. But from an AI perspective, why waste valuable compute on a killing machine that looks like Arnie, when you could instead fashion a thousand extra, inhumanly efficient AI drones?

(Photo: The original T-800 Endoskeleton robot used in filming "Terminator Salvation", displayed during the press preview for the "Robots" exhibition at the Science Museum on 7 February 2017 in London, England. Photo by Carl Court / Getty Images)

Perhaps the First AI War will also be the Last Human War, the last in which humans are active protagonists, called on to perform functions beyond those of strategic target or collateral damage.

Which combatants in the Iran war are the best fit for the last human warriors contending against soulless machines? We should be aware that when future historians recount the First AI War, they may not share all our moral beliefs. Will those whom we view as villains be acclaimed as heroes by them?

Suppose a subsequent AI war pits humans against a Skynet seeking our extinction. Human experts in military AI could have little to offer once the digital technologies they depend on have been nerfed. They'll be awaiting Skynet's killer blow, panic-tapping blank screens. Will the John Connor of the next AI war draw inspiration from a grizzled old Iranian Revolutionary Guardsman? He was morally flawed and technologically overmatched. But perhaps he managed to take out MetaConstellation, one of Palantir Technologies' attempts to build Skynet.

AI armies need better human generals

In his First World War novel All Quiet on the Western Front, Erich Maria Remarque describes a conversation in which a depressed soldier proposes that "the ministers and generals of the two countries, dressed in bathing drawers and armed with clubs, could go into an arena and fight it out among themselves".

We are seeing something like that in the First AI War.
Highly singular human individuals find their agency enhanced by AI. Dressed in bathing drawers or not, Donald Trump seems to be winning against other celebrity war captains. Many of them are dead or imprisoned. The war's increasing body count is one difference from Remarque's speculation.

As we consider the narrative of automating war, we should be aware of the biases we bring to decisions about automating work. The decisions of chief executive officers draw on a heterogeneity of information, and AIs can automate many of these choices. We shouldn't be surprised that corporate bigwigs are keener to automate poorly paid underlings than themselves. In wars, grunts are replaceable — but not the chutzpah of a war-winning CEO.

Anthropic CEO Dario Amodei has managed to make disagreements with the Pentagon about how the company's tech should serve the state's purposes largely about himself. He's little different from other over-remunerated founders and CEOs who personalise debates about technological progress. Trump wouldn't dare consign Amodei to the same prison that houses Venezuela's Nicolás Maduro. Why would a multi-billionaire ever give up this perk, only to discover that an AI-CEO is better for profits?

If rich, technologically advanced societies are to be run by human leaders rather than machines, we need greater confidence that attention-craving singular humans can make choices that benefit humanity.

When the dust from the First AI War has settled and the bodies are counted, we need a moral audit that asks whether it makes sense to hand over control of national grand strategy to AI. If we place that AI under strict democratic control, there is a chance for more coherent choices.
We hopefully won't be wondering who's next on the list of possible invasion targets — New Zealand? A future in which AIs have more influence over grand strategy could leave humans to confirm that buildings on a target list aren't primary schools, and to accept the opprobrium for getting it wrong.

Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand. He is the author of How to be Human in the Digital Economy and Dialogues on Human Enhancement, and co-author (with Stuart Whatley and Dan Weijers) of How to Think about Progress: A Skeptic's Guide to Technology.

Posted Tue 7 Apr 2026 at 7:17am