Anthropic Leak Lands Hours After Australian Government’s AI Deal

SmartCompany


Details

Date Published
2 Apr 2026


Description

Anthropic is scrambling to contain a Claude Code leak hours after signing an Australian AI deal, raising fresh questions over safety and trust.

Summary

Anthropic accidentally leaked internal source code for its Claude Code tool shortly after signing a safety research memorandum of understanding with the Australian government. The leak revealed blueprints for advanced 'always-on' autonomous agent capabilities, highlighting both the rapid advance of frontier AI agency and the security risks of model-adjacent software. The incident underscores a critical tension between the safety claims of leading frontier labs and their operational security practices, one that bears directly on the credibility of international AI safety governance frameworks and public-private partnerships.

Body

Anthropic is scrambling to contain a leak of internal source code for its Claude Code tool, just hours after announcing a safety-focused partnership with the Australian government.

The company said the leak was caused by a “release packaging issue” that accidentally included internal files in a software update, rather than a security breach. No customer data or credentials were exposed, according to an Anthropic spokesperson. Even so, the incident has raised fresh questions about the company’s internal controls, particularly given its positioning as a leader in AI safety.

Related: Neural Notes: Inside Anthropic’s AI deal with the Australian government (Tegan Jones)

The leak, which spread rapidly across GitHub before takedown requests were issued, reportedly included nearly 2,000 files and more than 500,000 lines of code. Developers quickly copied and redistributed the material, and versions continue to circulate despite attempts to remove them, including rewritten copies designed to evade takedowns.

Inside the code, developers uncovered early blueprints for more advanced agent-style features, including an “always-on” coding assistant designed to operate with a higher degree of autonomy.

The timing of the leak is certainly unfortunate. On Wednesday, Anthropic signed a memorandum of understanding (MOU) with the Albanese government covering AI safety research, workforce impacts, and collaboration with Australia’s AI Safety Institute. That agreement positions Anthropic as a key partner in shaping how AI is governed and deployed locally, with a focus on transparency, risk management and real-world usage data.

The leak also lands amid a period of increased scrutiny for the company globally. In recent weeks, US authorities have flagged Anthropic as a potential supply chain risk, a designation the company is currently challenging in court.

Related: Neural Notes: Australian AI startups may face the same defence dilemma as OpenAI, Anthropic (Tegan Jones)

That dispute directly followed Anthropic’s refusal, in the lead-up to the Middle East Crisis, to relax safeguards preventing its models from being used for domestic mass surveillance or fully autonomous weapons. The stance cost the company its position as a preferred frontier-model provider for US government departments, stripped by order of President Trump. The leak also follows separate reports in recent weeks of internal Anthropic materials being stored in publicly accessible systems.

This latest incident does not appear to directly undermine the Australian partnership. But taken together, the developments highlight the tension between Anthropic’s safety-first positioning and the operational and political pressures facing AI companies as they scale.