Last week, a fresh parliamentary inquiry report into the use and governance of AI systems within the public sector was launched. The 73-page report – released by the Joint Committee of Public Accounts and Audit – makes it abundantly clear that Australia’s approach to AI governance is lagging, and the risks of inaction are mounting.

The committee, chaired by Linda Burney, warns technology is outpacing regulation, leaving public sector agencies without the necessary oversight to manage AI-driven decision-making. While this might sound like a problem for government agencies alone, the consequences ripple far beyond the public sector. For businesses, particularly SMEs and startups, unclear AI governance means regulatory uncertainty, ethical dilemmas, and potential compliance issues down the line.

Chris O’Connor is the co-founder of AtomEthics, a startup that assists businesses with ethical AI and machine learning applications. O’Connor agrees regulation isn’t keeping up with AI’s rapid evolution.

“AI is outpacing regulation, and we’re already seeing the consequences in Australia. Trust and adoption are being impacted, and the lack of clarity isn’t helping,” O’Connor told SmartCompany.

“Australia has stalled on implementing mandatory guardrails, relying instead on policy settings that simply aren’t strong enough.”

This lack of oversight isn’t a new problem.
Government agencies have already been using AI and automated decision-making for years without sufficient regulation. The most infamous and far-reaching example was Robodebt, whose royal commission highlighted what O’Connor called the “complexity and incohesiveness” of Australia’s legislative approach. Just last month, lawmakers called for AI guardrails to prevent a ‘private sector Robodebt’.

O’Connor warns that despite everything learned from Robodebt, little has changed.

“If government agencies, despite clear case studies of failure, struggle to course-correct, self-regulation isn’t going to cut it,” O’Connor said.

This should be a red flag for businesses relying on AI-driven tools. The risks aren’t just inefficiencies, but the potential for large-scale failures.

“Generative AI – using Copilot to draft reports or classify documents – has its issues, but the risks are mostly contained,” O’Connor said.

“What worries me more are in-house AI models built to accelerate compliance, running on vast citizen data sets. If we don’t fix this now, we’re not just looking at inefficiencies; we’re looking at the very real possibility of another Robodebt-scale failure.”

The governance gap is more than just bureaucratic red tape

One of the most striking findings from the committee’s report is that many government agencies already use AI in decision-making but lack formal governance frameworks. This isn’t just slow-moving bureaucracy at work – it reflects deeper systemic issues.

“AI governance is much more than just a set of rules to follow. It requires robust controls and continuous oversight throughout the entire lifecycle – from inception to deployment and eventual retirement,” O’Connor said.

He highlights several core challenges the government and businesses face when implementing AI:
- Many project teams lack the cross-functional skills to properly assess AI quality and ethics.
- Relying on a traditional risk matrix assumes organisations can anticipate every possible outcome, which isn’t always the case.
- Governance frameworks are often incomplete, contradictory, or disconnected from existing regulations and industry best practices.
- Without proper records, it’s difficult to audit AI decisions, validate outcomes, or hold organisations accountable.

For businesses, these governance shortcomings should serve as a cautionary tale. AI systems require continuous assessment and recalibration – not just a one-time ethics review.

Bias in AI is a business risk, not just a government problem

One of the key risks flagged in the committee report is AI bias, particularly in hiring, compliance, and service delivery.

Businesses, especially SMEs, are not immune to these risks. While flawed datasets contribute to bias, O’Connor points out that bias can creep in at multiple points during an AI project’s lifecycle – how the system is designed, who it serves, and when it is used.

“One often-cited solution is keeping a ‘human in the loop’ to check for bias. While human oversight is crucial, we have to acknowledge that people bring their own biases too,” he said.

“If an AI system makes a recommendation to a hiring manager late at night or over the weekend, will they be in the best frame of mind to critically assess it?” O’Connor asks.

“You can outsource the risk, but you cannot outsource accountability. AI bias isn’t just a technical problem; it’s an organisational responsibility.”

Should the private sector be more involved with AI governance?

The committee recommends setting up an AI governance working group within 12 months. But O’Connor argues time is of the essence.

“A year seems like a very long time, noting the committee’s grave concerns.
I’d get practical and start by creating an inventory of AI models in use across the government. We have no idea how many models are in use right now.”

Beyond this, the private sector is already developing robust AI compliance frameworks, raising the question: should the government be looking to industry for guidance?

“There are experts among us who devote our time to researching and interpreting leading practices and evolving regulations,” O’Connor said.

AI transparency and vendor lock-in

Another major concern raised in the report is the ‘black box’ nature of AI decision-making. Can AI systems ever be fully transparent? O’Connor believes transparency is both possible and necessary.

“This is not wishful thinking – it is a matter of commitment to ethical principles, robust governance frameworks, and technological innovation,” he said.

He outlines steps businesses and government should take to improve transparency, including clear documentation of the data sources used in AI models, the parameters informing decisions, how those parameters are combined and weighted, and an accessible appeals process for those affected by AI decisions.

For SMEs that rely on AI tools from major tech providers, the report’s warning about ‘vendor lock-in’ is also pertinent. O’Connor points to the government’s recent adoption of Microsoft Copilot as an example of how government often defaults to familiar vendors, sidelining potential local providers.

“That said, choosing a vendor you already deal with comes with advantages. Single billing, admin rights, consistency in the UI reducing the need for training – these are all benefits,” O’Connor said.

“But SMEs need to remember the AI tool is an extension of them and their business.
They cannot outsource accountability for how data is managed and how decisions are made.”

The need for AI regulation that works for business

Asked about the biggest AI governance challenge of the next five years, O’Connor points to the complexity of navigating different regulatory environments.

“Any organisation that works across borders will have very different regulatory frameworks to deal with. Continuously monitoring and interpreting these changing rules is a challenge,” O’Connor said.

Ultimately, AI governance isn’t just a government problem – it’s a business imperative. While regulation may seem like a hurdle to innovation – a common argument worldwide – O’Connor argues the opposite.

“Regulation, when thoughtfully designed, does not hinder innovation. It guides it toward responsible and ethical outcomes.”