The Shadow AI Risk Every Australian Business Should Be Watching

SmartCompany


Details

Date Published: 13 Oct 2025
Priority Score: 3
Australian: Yes
Created: 16 Oct 2025, 11:51 am

Authors (1)

Description

Shadow AI is exposing Australian businesses to rising cyber risks as staff use unapproved AI tools without oversight or training.

Summary

The article highlights the growing cybersecurity threat posed by 'shadow AI' in Australian businesses, meaning the unauthorised use of AI tools like ChatGPT without organisational oversight. The practice has reportedly contributed to significant data breaches, adding an average of $1.03 million to the cost of affected incidents as sensitive data such as passwords and financial information is compromised. The article critiques current governance approaches, arguing that rather than banning these tools, businesses should implement comprehensive oversight strategies to mitigate the risks. The issue is presented as a critical concern for Australia, revealing the challenge companies face in balancing AI innovation with data security.

Body

Welcome back to Neural Notes, a weekly column where I look at how AI is affecting Australia. In this edition: what is shadow AI and why is it causing cybersecurity havoc?

The rise of generative AI has changed how many Australians work. It has also brought a new slew of cyber threats, along with a new term: shadow AI. This is the unauthorised use of tools like ChatGPT, code copilots, or image generators without company oversight.

This quiet use of AI tools at work is nothing new. It has been widely reported on since the early ChatGPT boom. What has changed is the scale and the consequences. What began as casual experimentation has, in some cases, evolved into reliance. From there, it has reportedly blown out into a structural cybersecurity problem, with research showing shadow AI can significantly increase the cost and impact of data breaches.

Palo Alto Networks' latest Unit 42 Threat Frontier report warns the rapid spread of generative AI has created a "perfect storm" for cybersecurity. Just as unmonitored cloud assets drove 40% of cloud security incidents in 2024, unsanctioned AI is now introducing similar vulnerabilities. Workers are said to be deploying AI tools beyond the reach of IT teams, exposing sensitive company data and weakening organisational control. Some of the most common types of sensitive data include passwords, Word documents, and client and employee details, including financials.

It's also affecting the bottom line. IBM's 2025 Cost of a Data Breach Report found incidents involving shadow AI cost an average of $1.03 million (US$670,000) more than other breaches. About one in five global cyber incidents are now linked to unauthorised AI use, often exposing personal information or intellectual property across multiple cloud environments.

Australian Cybersecurity Magazine reports shadow AI is already accelerating burnout among local security teams, who are defending against both AI-powered attacks and internal misuse. Sophos' Asia-Pacific report adds that resource constraints and lagging governance have made this a permanent feature of the cybersecurity landscape, not a passing trend.

New Microsoft research in the UK shows the problem is widespread, with 71% of employees there having used unapproved consumer AI tools at work, and more than half continuing to do so weekly. While these tools have saved an estimated 12 billion work hours, worth around $426.43 billion (£207 billion), only a third of users expressed concern about data privacy or system security. Employees cite convenience and familiarity as key reasons for turning to public AI, with many saying their companies offer no approved alternatives.
It's the same paradox now facing Australian businesses: enthusiasm without oversight.

A September study by HP and Microsoft found 81% of employees who use free AI tools admit to sharing confidential company information, often through public versions of ChatGPT, Copilot or Gemini. One in three Australian businesses is using free generative AI tools, and more than half of SMEs have introduced AI into their operations. Yet only one in 10 managers believes their teams are properly trained to use it safely.

Industry surveys reflect similar risks. Research by SaaS management firm Josys found 36% of Australian workers had uploaded sensitive data such as financial reports, source code, and strategy documents to public AI platforms.

What workplace governance for AI could look like

Outright bans on AI tools rarely work. Employees simply move activity to personal devices or accounts, taking the risk underground. Security experts say the goal isn't to ban AI tools but to bring them into the light: running discovery and governance audits, creating secure sandboxes for staff experimentation, enforcing least-privilege data access, and building AI oversight into board-level risk frameworks. Some also suggest a "secure AI by design" strategy: embedding governance and access controls early rather than reacting after a breach. (A rough sketch of what a discovery audit could look like in practice appears at the end of this article.)

It's also worth not simply pointing the finger at staff, and instead considering how shadow AI became a problem in the first place. AI hype, headcount reductions, and the push for AI-driven productivity without the training, guidance, or guardrails to match are all part of the equation.

Until leaders stop blindly evangelising the brilliance of AI on LinkedIn without actually closing that gap internally, employees will keep turning to whatever is available. And the risks will keep multiplying.
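To make the "discovery audit" idea above concrete, here is a minimal, hypothetical sketch; it is not a tool referenced in the article. It assumes a web proxy log exported as proxy_log.csv with user, domain and bytes_sent columns, and the list of public AI endpoints is illustrative rather than exhaustive. The aim is visibility, so IT can follow up with approved alternatives rather than blanket bans.

```python
# Hypothetical sketch of a shadow AI discovery audit (assumptions noted above).
import csv
from collections import defaultdict

# Illustrative, non-exhaustive list of public generative AI endpoints.
PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
    "claude.ai",
}

def audit(log_path: str) -> dict[str, int]:
    """Tally bytes sent to public AI tools per user from a proxy log export."""
    usage: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in PUBLIC_AI_DOMAINS:
                usage[row["user"]] += int(row["bytes_sent"])
    return dict(usage)

if __name__ == "__main__":
    # Surface the heaviest users first so IT can offer approved alternatives,
    # rather than simply blocking the traffic and pushing it underground.
    results = audit("proxy_log.csv")
    for user, sent in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{user}: {sent} bytes sent to public AI tools")
```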