Government Agencies Fail AI Transparency Test
Information Age
Details
- Date Published
- 10 Mar 2025
Author
- Jeremy Nadel
Description
Developers, auditors, human rights advocates want more details.
Summary
The article critiques the insufficient transparency of AI use disclosures by 55 Australian government agencies, prompting concern among developers, auditors, and advocates. The agencies' vague "transparency statements" fail to provide meaningful insight into AI's role in critical areas such as visa processing, tax compliance, and disability support. This lack of detail raises substantial concerns about privacy, ethical bias, and potential misuse of AI in government decision-making. Calls for more comprehensive disclosure highlight the need for stronger AI safety and governance frameworks in Australia.
Body
By Jeremy Nadel on Mar 11 2025 10:28 AM

Transparency statements of 55 agencies were required to provide an "overview of their intended AI use". Photo: Shutterstock

Government agencies' high-level descriptions of their artificial intelligence (AI) use cases in new mandatory registers have failed to quell developers', auditors' and disability and privacy advocates' concerns about the inability to assess their impacts.

The "transparency statements" of the 55 non-corporate Commonwealth entities that met the 28 February deadline were only required to provide "an overview of their intended AI use", Lucy Poole, the Digital Transformation Agency (DTA) officer overseeing the policy, told Information Age.

Poole said agencies' responses would help "build public trust…and confidence", but the broad summaries raised more questions than answers about whether and how AI affects high-stakes systems used in disability support decisions, tax compliance, visa programs and other government functions.

Founder and chair of Digital Rights Watch Lizzie O'Shea noted the lack of detail about models' training data and said that the transparency standards should "address the data inputs" and not just "outputs".

"Such models are fundamentally built upon mass invasion of privacy and any policy intervention must reckon with this."

Technology consultant Justin Warren, a critic of the discontinued Robodebt scheme and Centrelink's more recent AI trials, added that agencies' responses exemplified their failure to realise "trust is earned…too few agencies want to do the work to earn that trust."

AI in visa programs

The AI "transparency statement" from the Department of Home Affairs (DHA), an agency using AI for "predicting risks in visa program[s]", did not clarify whether AI is incorporated into the "prioritisation and allocation tools" used to more efficiently process its backlog of character-based migration decisions, or into "the use of computerised decision-making" that bans certain "visa holder[s]" from "undertaking critical technology-related study" posing a national security risk.

However, Freedom of Information (FOI) documents have previously confirmed DHA uses AI for "identifying indicators of fraud in documentation supplied to support visa applications".

Parliament's powerful audit committee recommended on Tuesday that more "detailed questions on the use and understanding of AI systems" be incorporated into "[the] annual APS census", noting "AI systems may not fully grasp…decisions directly impact[ing] individuals' lives – for example, in areas like welfare benefits, criminal justice, or immigration."

Concern about how AI could impact disability support

The agency administering the National Disability Insurance Scheme (NDIS) was one of several that were "encouraged", but not mandated, to "apply this policy".

The National Disability Insurance Agency (NDIA) is yet to follow the lead of other exempt agencies, such as the Australian Federal Police, in voluntarily publishing an AI transparency statement.

NDIA posted a tender last month for technology to "accurately predict support needs", specifying that "data derived from the assessment tool(s)" enables "scores for each domain and/or support need area", and clarified to software providers that they could propose "artificial intelligence" and/or an "AI capability" in their responses.

Cat Walker, a disability advocate in her third year of an FOI case over secret systems used in calculating NDIS plans for autistic people, said she was disappointed NDIA was "only 'encouraged' to apply the policy."

Ministers responsible for the NDIS have long said that emerging technology only informs NDIS decisions without making them, but Walker told Information Age that "discretion isn't enough if we can't interrogate business rules for ethical issues, bias or other faulty assumptions."

Australian Taxation Office's 43 AI models

The 43 AI systems that the Australian Taxation Office (ATO) operates, or is developing, have been under scrutiny since a February audit was more critical of the ATO's AI safeguards and governance than of the other 19 agencies assessed.

The ATO's transparency statement provided only broad descriptions of AI's role in "tax time nudging", "fraud detection" and "communications content".

Founder of accounting and finance software platform TaxTank Nicole Kelly told Information Age that the DTA should have required "more comprehensive details from agencies".

The ATO has "extensive data matching programs" with both agencies and the private sector, "but there is no transparency" over whether or "how AI is being applied across this data or the security risks involved," Kelly said.

"Without clear disclosures on the ATO's AI datasets, taxpayers cannot know if the data is fair or accurate."

Jeremy Nadel is a Melbourne-based freelancer whose work has been featured in The Guardian, The Saturday Paper and Crikey. He was most recently a staff reporter for ITnews and CRN, reporting on the tech channel and enterprise IT. In 2023, he was awarded Best New Journalist at the Australian IT Journalism Awards. For story tips, you can email Jeremy at [email protected].

Tags: ai transparency statements, afp, ndis, dta, government, australia