Commonwealth Procurement Changes Praised by Local Tech Companies
ARN
Details
- Date Published
- 21 Oct 2024
- Priority Score
- 4
- Australian
- Yes
- Created
- 8 Mar 2025, 01:04 pm
Description
The Digital Transformation Agency (DTA) is piloting an artificial intelligence (AI) assurance framework as part of its exploration into AI technologies used by government agencies, ensuring they meet standards designed to promote safe and responsible use. Under DTA’s draft assurance framework, agencies will complete an initial threshold assessment covering the basic information of the use […]
Summary
The Digital Transformation Agency (DTA) in Australia is testing an AI assurance framework aimed at ensuring government agencies employ AI technologies responsibly and safely. This framework mandates that agencies conduct an initial risk assessment and consider non-AI alternatives for proposed use cases to ensure they choose the most effective and secure solutions. Notably, if any risks are identified as moderate or higher, a comprehensive evaluation following Australia's AI Ethics Principles is required. The framework focuses on human oversight, ethical governance, and community welfare, reflecting broader trends in AI policy intended to mitigate potential AI risks while enhancing governance and transparency.
Body
The Digital Transformation Agency (DTA) is piloting an artificial intelligence (AI) assurance framework as part of its exploration into AI technologies used by government agencies, ensuring they meet standards designed to promote safe and responsible use.
Under DTA’s draft assurance framework, agencies will complete an initial threshold assessment covering the basic information of the use case.
The assessment will also cover the challenges the use case aims to solve and the expected benefits the AI solution will provide. Agencies will also need to identify a potential non-AI alternative that could deliver similar outcomes and benefits.
“We want agencies to carefully consider viable alternatives,” said DTA’s general manager of strategy and planning, Lucy Poole. “For instance, non-AI services could be more cost-effective, secure, or dependable.”
According to Poole, the DTA believes that evaluating these options will help agencies understand the advantages and limitations of implementing AI.
“This enables them to make a better-informed decision on whether to move forward with their planned use case,” she said.
If all risks in the initial assessment are rated low, and the assessment contact officer and executive sponsor are satisfied, a full assessment will not be required.
However, if one or more risks are rated as medium or above, they will need to proceed to a full assessment.
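The escalation rule described above can be sketched as a simple decision function. This is an illustrative sketch only; the names, risk levels, and structure are assumptions for clarity and do not come from the DTA framework itself:

```python
from enum import IntEnum

class Risk(IntEnum):
    # Hypothetical risk ratings; the DTA framework's actual rating
    # scale and labels may differ.
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def full_assessment_required(risks: list[Risk], officers_satisfied: bool) -> bool:
    """Return True when the use case must proceed to a full assessment.

    Per the draft framework: any risk rated medium or above triggers a
    full assessment; otherwise one is skipped only if the assessment
    contact officer and executive sponsor are both satisfied.
    """
    if any(risk >= Risk.MEDIUM for risk in risks):
        return True
    return not officers_satisfied

# Example: all risks low and officers satisfied -> no full assessment.
print(full_assessment_required([Risk.LOW, Risk.LOW], officers_satisfied=True))
```

Here, escalation is deliberately one-way: a single medium-or-above risk forces the full assessment regardless of officer sign-off.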
According to the DTA, the full assessment will require agencies to document how the use case measures up against Australia’s AI Ethics Principles.
These include, but are not limited to, fairness, reliability and safety.
The DTA has also provided suggestions for how agencies should consider ensuring the reliable and safe delivery and use of AI systems.
It particularly focuses on data suitability, Indigenous data governance, AI model procurement, testing, monitoring, and preparedness to intervene or disengage, as well as privacy protection and security.
“Our approach to AI assurance prioritises human oversight and the rights, wellbeing, and interests of people and communities,” stated the DTA.
From November 2024, the DTA will also hold participant feedback sessions, conduct interviews, and analyse survey responses to inform updates to the framework and guidance.
“Our goal is to provide a unified approach for government agencies to engage with AI confidently,” said Poole. “It establishes baseline requirements for governance, assurance, and transparency, removing barriers to adoption and encouraging safe use for public benefit.”