Neural Notes: Inside Anthropic’s AI deal with the Australian government

SmartCompany


Details

Date Published
31 Mar 2026

Description

Anthropic has inked an AI deal with the Australian government that offers something a bit different to OpenAI and Microsoft before it.

Summary

Anthropic has formalised a memorandum of understanding with the Australian Government, establishing a partnership focused on AI safety research, model evaluations, and risk testing through the fledgling Australian AI Safety Institute. The agreement includes technical exchanges to build shared understanding of emerging frontier capabilities and $3 million in research support for Australian institutions. Uniquely, the deal grants policymakers access to Anthropic’s Economic Index data, providing direct visibility into how Claude is being utilised across key sectors to monitor workforce impacts and autonomous task execution. This partnership represents a significant step in aligning international frontier AI labs with Australia's national safety and governance frameworks.

Body

Welcome back to Neural Notes, a weekly column where I look at how AI is influencing Australia. In this edition: Anthropic inks an AI deal with the Australian government that offers something a bit different to OpenAI and Microsoft before it.

The top-line news is that Anthropic has signed a memorandum of understanding (MOU) with the Albanese government, formalising a partnership on AI safety, research and workforce impacts as it expands its presence in Australia.

The agreement was signed during a visit to Canberra by Anthropic CEO Dario Amodei and includes collaboration with the AI Safety Institute on model evaluations, risk testing and broader safety work. It also comes with $3 million in research support for Australian institutions.

The MOU is framed as part of Australia’s new National AI Plan and explicitly positions AI as a driver of economic growth, scientific progress and better public services. It also includes a commitment from Anthropic to uphold Australian laws and values and to maintain a social licence for its investments.

It plugs Anthropic into the fledgling AI Safety Institute through ongoing “technical exchanges and collaborations” to build a shared understanding of emerging capabilities, opportunities and risks, in line with similar models in the US and UK.

Alongside those headline elements, the deal also commits Anthropic to share its Economic Index data with the government, tracking how tools like Claude are being used across the economy and what that may mean for jobs, productivity and skills.
The MOU singles out natural resources, agriculture, healthcare and financial services as sectors of particular interest. This kind of structured usage data has not typically been a visible part of Australia’s AI partnerships to date, at least in public agreements.

From AI frameworks to actual usage data

Anthropic’s Economic Index is designed to map how AI is being adopted in practice. It draws on large volumes of model interactions, classifying activity by task, sector and occupation, and distinguishing between AI used as an assistant and AI used more autonomously.

In the Australian MOU, that work is initially focused on sectors like resources, agriculture, healthcare and financial services. As I wrote in a recent Neural Notes column, Anthropic has already used this framework to track where AI is actually showing up in workplaces and which jobs are most exposed. That research also hinted at early hiring shifts, with fewer young workers entering highly exposed jobs even as broad-based unemployment effects have yet to materialise.

That matters in an Australian context where the policy conversation has so far been dominated by frameworks. Over the past two years, the focus has been on responsible AI principles, assurance mechanisms and governance structures. Those are necessary foundations, but they have been built with relatively limited direct visibility into how AI tools are actually being used inside businesses.

A structured feed of usage data helps to close that gap. In theory, it gives policymakers a more immediate view of where AI is being applied first, which types of work are being augmented or automated, and where pressures on skills and workforce transition may emerge.
In practice, that kind of insight has been difficult to obtain in real time, with most analysis relying on surveys, lagging indicators or international research.

A different approach to Canberra

The inclusion of Economic Index data also highlights how differently major AI providers are approaching Australia.

OpenAI has largely focused on infrastructure and access. A small initial contract with Treasury last year was widely seen as a foothold for larger, whole-of-government deals, while its broader local strategy has centred on partnerships and capacity. This includes backing a multi-billion-dollar data centre development in Western Sydney as part of a $7 billion AI campus with NEXTDC.

Microsoft’s recent whole-of-government agreement follows a similar pattern, built around cloud, software and productivity tools, alongside training commitments and pricing frameworks for public sector use.

Those deals are tangible and politically straightforward, tied to infrastructure, skills and economic activity. In comparison, Anthropic places more emphasis on access to information, both through safety collaboration with the AI Safety Institute and the sharing of internal usage data. It also includes language about supporting a “vibrant domestic ecosystem”, from startups to skills and APS capability, and aligning with the government’s expectations for data centres and AI infrastructure developers.

AI visibility… but with limits

There are, however, clear limits to what this kind of arrangement provides. The Economic Index reflects Anthropic’s own systems and user base, rather than the full spectrum of AI use across the economy. It captures what Claude is being used for, how those interactions are structured, and how the company chooses to classify them. That means it offers a partial view rather than a complete one.
It may also shape how AI’s economic impact is framed, depending on which tasks and sectors are most visible within Anthropic’s data.

And while the Economic Index research so far has not found a clear, systemic spike in unemployment in highly exposed occupations, the past few years would suggest otherwise. We have seen wave after wave of AI-linked layoffs and restructures across global tech and software, most recently including Atlassian’s decision to cut around 1,600 jobs as it “self-funds” further investment in AI.

Even so, this deal represents a step change from the position Canberra has been in. Until now, much of the discussion around AI’s effect on jobs and productivity has been driven by external research or industry claims, rather than direct, ongoing data flows into government. While it won’t settle the debate over AI and jobs, it will hopefully force Canberra to argue from data rather than vibes.