Artificial Intelligence at War

The Strategist


Details

Date Published
20 Aug 2024


Description

There’s a global arms race under way to work out how best to use artificial intelligence for military purposes. The Gaza and Ukraine wars are now accelerating this. These conflicts might inform Australia and others ...

Summary

This article examines the accelerating arms race to leverage artificial intelligence in military operations, with a focus on the recent conflicts in Gaza and Ukraine. It highlights the AI-driven transformation of military strategy, such as the Israeli AI system 'Lavender', which illustrates the risks of automation bias and action bias in tactical decisions. The discussion also touches on implications for Australia as AI-fuelled warfare might extend to areas closer to home, particularly in the context of Chinese 'intelligentization' strategies. The article underscores AI's potential to change the operational balance between offence and defence, affecting both global and regional military strategies.

Body

There’s a global arms race under way to work out how best to use artificial intelligence for military purposes. The Gaza and Ukraine wars are now accelerating this. These conflicts might inform Australia and others in the region as they prepare for a possible AI-fuelled ‘hyperwar’ closer to home, given that China envisages fighting wars using automated decision-making under the rubric of what it calls ‘intelligentization’.

The Gaza war has shown that the use of AI in tactical targeting can drive military strategy by encouraging decision-making bias. At the start of the conflict, an Israel Defense Forces AI system called Lavender apparently identified 37,000 people linked to Hamas. Its function quickly shifted from gathering long-term intelligence to rapidly identifying individual operatives to target. Foot soldiers were easier to swiftly locate and attack than senior commanders, so they dominated the attack schedule.

Lavender created a simplified digital model of the battlefield, allowing dramatically faster targeting and much higher rates of attack than in earlier conflicts. Human analysts did review Lavender’s recommendations before authorising attacks, but they quickly grew to trust it, considering it more reliable. Humans often spent only 20 seconds considering Lavender’s target recommendations before approving them.

These human analysts displayed automation bias and action bias. Indeed, it could be said that Lavender was encouraging and amplifying these biases. In a way, the humans offloaded their thinking to the machine.

Human-machine teams are considered by many, including the Australian Defence Force, to be central to future warfighting.
The way Lavender’s tactical targeting drove military strategy suggests that the AI machine part should be designed to work with humans on the task they are undertaking, not be treated as a part that can be quickly switched between different functions. Otherwise, humans might lose sight of the strategic or operational context and instead focus on the machine-generated answers.

For example, the purpose-designed Ukrainian GIS Arta system takes a bottom-up approach to target selection by giving people a well-fused picture of the battlespace, not an opaquely derived recommendation of what to attack. It’s described as ‘Uber for artillery’. Human users apply the context as they understand it to decide what is to be targeted.

Ukraine offers further insights into the application of AI for knowing what is happening on the battlefield. Advanced digital technology has made the close and deep battlespace almost transparent. Strategy is now formed around finding enemy forces while fooling their surveillance systems to avoid being targeted. The result is that the frontline between the two forces, extending out to about 40km on either side, is now a very deadly zone that neither side can break through to win.

This tactical crisis appears likely to deepen as present semi-autonomous air, land and sea systems are progressively updated by Ukraine and Russia with AI. This will make these robots much less vulnerable to electronic-warfare jamming and allow them to autonomously recognise a hostile target and attack it. Sensing the significant battlefield advantages, the US has launched the large-scale Replicator program, aiming to field autonomous ‘systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months’.

Given AI’s use in Gaza and Ukraine, it appears likely that in a potential war with China the principal utility of AI will similarly be find-and-fool. Consider clashes over the first island chain, which runs from Indonesia to Taiwan and through Okinawa to mainland Japan.
With China to the west and the United States to the east, military forces would use AI’s ability to quickly find items within a background full of clutter while attempting to fool the enemy’s AI systems. Helped by AI, US-led coalition kill webs and Chinese kill webs will readily find and target hostile air and naval forces on their respective sides of the island chain. The first island chain might then become a stabilised but very dangerous land, sea and air battlespace, with US and allied forces dominating on the eastern side and Chinese forces dominating on the western side. The island chain would become a no man’s land that neither side could pass through without suffering prohibitive losses.

How to win in a war so driven and influenced by AI may be the major question facing defence forces today. The Ukraine war suggests some strategies: wearing the other side down in a protracted attrition battle; using mass frontal attacks to overwhelm the adversary in a weakly defended area; infiltrating using small assault groups with heavy firepower support; or quickly exploiting some fleeting technological advantage to break through. Such options may become practicable as more AI-enabled weapon systems enter service.

The operational balance seems to have swung to favour defence over offence, to the advantage of status quo powers such as India, Japan, South Korea, Taiwan, Singapore and Australia. But this may prompt a revisionist power like China to seize territory before others can respond, making it difficult to push back. As Japanese Prime Minister Fumio Kishida warned, ‘Ukraine of today may be East Asia of tomorrow.’