Economic espionage in the AI age demands new responses | The Strategist

Date Published: 10 Mar 2026

Artificial intelligence is transforming economic cyber-espionage, but the protection of commercially valuable assets has not kept up. Governments and industry cannot rely on pre-AI defences to confront a more scalable, covert and structurally different threat.

Two shifts are urgently needed. First, AI-enabled economic espionage should be treated as a distinct national-security priority, not folded into generic cybercrime or AI ethics debates. Second, AI supply chains—including training data, model architectures and cloud dependencies—should be integrated explicitly into critical infrastructure and economic-security frameworks. Without these adjustments, advanced economies risk the quiet transfer of strategic innovation to competitors prepared to extract it.

Economic cyber-espionage is not new. Companies and universities lose billions each year through the theft of advanced manufacturing processes, pharmaceutical research, defence technologies and proprietary software. G20 governments have committed to a norm of refraining from intellectual property (IP) theft via cyber means. G7 members have condemned the practice. The Australian Security Intelligence Organisation has described foreign interference and technology theft as among the country’s most significant long-term security threats; the Australian Institute of Criminology estimated that espionage cost the economy A$12.5 billion in 2023–24.

But AI is reshaping both the targets and the methods of theft.

As organisations embed large language models and other AI systems into core operations, the proprietary logic, fine-tuning and training data behind them have become high-value assets. Traditionally, stealing advanced technology required breaching corporate networks to exfiltrate source code or trade secrets. That risk remains. But AI-as-a-service alters the equation.
When models are deployed through cloud interfaces, actors may not need to penetrate networks at all. Through repeated, structured querying, they can approximate a model’s behaviour—a technique known as model extraction or distillation—effectively replicating key capabilities without accessing underlying code or weights. This risk was raised in a February report by the Google Threat Intelligence Group, and both OpenAI and Anthropic have recently accused Chinese rivals of ‘distilling knowledge’ from their models. Meanwhile, AI firms themselves have been accused of copyright infringement.

AI also industrialises the espionage process itself. Machine-learning systems can automate reconnaissance, identify high-value personnel, generate convincing spear-phishing campaigns and analyse stolen data at scale. What once required teams of human analysts can now be partially automated. Espionage becomes faster, cheaper and more persistent.

At the same time, AI systems create new vulnerabilities. Prompts, training data, model weights and cloud interfaces can all become attack surfaces. Adversaries can poison datasets, manipulate outputs or induce models to leak sensitive information. Autonomous agents integrated into enterprise systems may chain together reconnaissance and lateral movement with minimal human oversight. AI is both a target and a vector.

Despite this structural shift, policy responses remain largely incremental. Governments run awareness campaigns and share threat indicators. They occasionally attribute malicious activity or issue indictments. These measures matter. But if AI-enabled economic espionage is now more scalable and less dependent on overt network intrusion, defensive frameworks should evolve accordingly.

The first step is conceptual. AI-facilitated IP theft should be recognised as a distinct economic-security problem.
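The model-extraction risk described earlier can be made concrete with a toy sketch. The 'teacher' below is a hypothetical one-parameter model behind a query API; the secret threshold, function names and query counts are all invented for illustration. The point is that the attacker recovers a functionally equivalent 'student' purely from input-output pairs, never touching the underlying code.

```python
import random

# Hypothetical "teacher" model, exposed only through a query API.
# Its internal rule (a secret decision threshold) is what the
# attacker wants to replicate without ever seeing the code.
SECRET_THRESHOLD = 0.37

def query_api(x: float) -> int:
    """Cloud-hosted model: returns only a label, never internals."""
    return 1 if x >= SECRET_THRESHOLD else 0

# --- Extraction: repeated, structured querying ---
random.seed(0)
queries = sorted(random.random() for _ in range(2000))
labels = [query_api(x) for x in queries]

# "Student" model: recover the decision boundary from responses alone.
# It must lie between the largest 0-labelled and the smallest
# 1-labelled query, so take the midpoint of that gap.
lo = max(x for x, y in zip(queries, labels) if y == 0)
hi = min(x for x, y in zip(queries, labels) if y == 1)
student_threshold = (lo + hi) / 2

def student(x: float) -> int:
    return 1 if x >= student_threshold else 0

# The replica agrees with the teacher on fresh inputs it never saw.
test_points = [i / 1000 for i in range(1000)]
agreement = sum(student(x) == query_api(x) for x in test_points) / len(test_points)
print(f"recovered threshold ~ {student_threshold:.4f}, agreement = {agreement:.3f}")
```

Real models are vastly higher-dimensional, but the economics are the same: each query leaks a little information about the decision surface, and enough queries reconstruct it.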
Debate around AI governance often focuses on safety, ethics or misinformation; far less attention is paid to model extraction, dataset theft and AI-assisted reconnaissance. As Australia invests heavily in AI research, data centres and digital infrastructure, its innovation ecosystem becomes more attractive to adversaries. AI capability should be treated not only as a productivity tool but as strategic intellectual capital requiring protection.

Existing IP frameworks were not designed for AI-specific issues such as model distillation, text-and-data mining or the replication of functionality without copying code. Distillation reproduces behaviour rather than source material, complicating traditional infringement claims. This legal ambiguity reinforces the need for security and regulatory responses, not reliance on after-the-fact litigation.

The second step is structural: Australia’s critical infrastructure framework should explicitly account for AI supply chains. The Security of Critical Infrastructure Act regulates sectors and asset classes but is largely silent on vulnerabilities unique to AI systems, including training data provenance, model dependencies and third-party software components. As AI becomes embedded across finance, defence, energy and healthcare, this silence creates uncertainty over responsibility for managing upstream exposures, particularly given Australia’s reliance on a small pool of global cloud providers and open-source components. Addressing this gap requires clarifying whether existing obligations apply to high-impact AI systems, introducing targeted transparency requirements and implementing risk-tiered auditing proportionate to systemic importance.

Finally, operational defences need to adapt. Organisations deploying advanced AI should monitor for anomalous querying indicative of model extraction. Intelligence sharing should expand beyond malware signatures to include emerging AI exploitation techniques.
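Monitoring for extraction-style querying can start with simple heuristics. A minimal sketch, in which the function name, thresholds and bucketing scheme are all illustrative assumptions: given a log of (client, input-bucket) pairs, flag clients that combine high query volume with unusually systematic coverage of the input space, since organic users tend to cluster in a narrow region while extraction campaigns sweep broadly.

```python
from collections import defaultdict

def flag_extraction_suspects(log, volume_threshold=500,
                             coverage_threshold=0.8, n_buckets=100):
    """Flag clients whose querying looks like a model-extraction sweep.

    log: iterable of (client_id, input_bucket) pairs, where the bucket
    id stands in for a coarse hash of the query content.
    """
    counts = defaultdict(int)    # queries per client
    buckets = defaultdict(set)   # distinct input regions touched
    for client, bucket in log:
        counts[client] += 1
        buckets[client].add(bucket)

    suspects = []
    for client, n in counts.items():
        coverage = len(buckets[client]) / n_buckets
        # Extraction campaigns tend to pair high volume with
        # systematic coverage; organic use is narrow and bursty.
        if n >= volume_threshold and coverage >= coverage_threshold:
            suspects.append(client)
    return suspects

# Simulated traffic: one ordinary user, one sweeping extraction client.
log = [("user-a", i % 7) for i in range(120)]          # narrow, organic use
log += [("scraper-x", i % 100) for i in range(2000)]   # broad, systematic sweep
print(flag_extraction_suspects(log))  # → ['scraper-x']
```

Production systems would layer this with rate limiting, per-account query budgets and similarity analysis of query sequences, but even a coarse volume-plus-coverage signal separates sweeps from normal use.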
And when state-sponsored economic espionage is identified, coordinated diplomatic and economic responses should reinforce commitments such as the G20 norm against cyber-enabled IP theft, making clear that AI does not create a grey-zone exemption.

Economic cyber-espionage has always been about securing advantage. In the age of AI, that advantage rests increasingly in models, data and algorithms. If governments fail to update their frameworks, the loss will not always appear as dramatic breaches. It will appear as the steady replication of innovation by those prepared to extract it at scale.