Risky AI Tools to Operate Under Mandatory Safeguards as Government Responds to Rapid AI Rise
ABC News
Details
- Date Published
- 16 Jan 2024
- Priority Score
- 5
- Australian
- Yes
- Created
- 8 Mar 2025, 12:05 pm
Description
The federal government will follow the European Union and several other nations to develop a risk-based response to the rapid rise in use of artificial intelligence, setting down stricter rules for risky AI, while staying out of the way of low-risk tools.
Summary
Australia's federal government is adopting a risk-based approach to AI regulation, imposing mandatory guidelines on high-risk AI technologies, such as self-driving vehicles and predictive policing tools. This initiative is set against a backdrop of public concern over AI's impact on jobs, discrimination, and social harms. Industry Minister Ed Husic emphasized the need for balance, encouraging innovation while implementing safety measures like independent assessments, audits, and mandatory AI content labeling. This response aligns with global movements, notably the EU's AI Act, highlighting Australia's proactive stance towards safeguarding against potential AI-induced risks. The plan is vital in building public trust and ensuring AI advances contribute positively to the economy and society.
Body
By political reporter Jake Evans
Topic: Artificial Intelligence
Tue 16 Jan 2024 at 6:51pm

The government will legislate mandatory rules for the riskiest AI technologies. (Supplied: Pixabay)

The federal government has introduced its plan to respond to the rapid rise in use of artificial intelligence (AI) technologies, which will impose hard rules on the highest-risk technologies, while minimising interventions in low-risk AI to allow its growth to continue.

Key points:
- The government will introduce a risk-based system to protect against the worst potential harms of AI
- Risky technologies will have mandatory rules applied to them, including possible independent assessments and audits
- The government will avoid impeding the growth of low-risk AI, largely focusing on voluntary standards

The industry minister has also flagged plans for AI-generated content to be labelled so it can't be mistaken as genuine.

AI has the potential to add hundreds of billions of dollars to the Australian economy and improve pay packets and worker wellbeing, but there is low public trust in the AI technologies being designed, and the government received widespread concern in its consultations about risks to jobs, discrimination and other social harms.

An International Monetary Fund study released this week found AI was poised to affect about 60 per cent of all jobs in advanced economies, with about half of those likely to benefit from AI boosting productivity, while the other half would be negatively affected.

Industry Minister Ed Husic on Wednesday laid out the government's initial response, committing to a "risk-based" approach that would be able to respond to AI technologies even as the landscape continues to shift.

Mandatory rules for risky tech

Under the government's proposal, mandatory "safeguards" would be applied to high-risk AI, such as self-driving vehicle software, tools that predict the likelihood of someone reoffending, or tools that sift through job applications for an ideal candidate.

High-risk AI could require independent testing before and after release, ongoing audits and mandatory labelling where AI has been used.

Dedicated roles within organisations using high-risk AI could also be mandated, to ensure someone is made responsible for ensuring AI is used safely.

The government will also begin work with industry on a possible voluntary AI content label, including introducing "watermarks" to help AI content be identified by other software, such as anti-cheating tools used by universities.

Mr Husic said he was prepared to make AI content labels and watermarks mandatory if necessary.

"The technology will evolve, we understand that, and while a lot of people will want to use the technology for good, there is always going to be someone motivated with ill-will, bad intent, and we're going to have to shape our laws accordingly," Mr Husic said.

"So if it does require a more mandatory response we will do so."

The risk-based approach will also allow government to stay out of the way of innovation in the sector, so that Australia can make the most of new technologies.

Ed Husic says Australians have expressed a clear desire for guardrails on AI technologies. (Supplied)

AI is already covered under privacy, copyright, competition and other laws, but the government said it was clear existing laws did not adequately prevent harms from AI before they occur.

Mr Husic said the government was listening to the concerns of Australians.

"We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI," Mr Husic said.

"These immediate steps will start building the trust and transparency in AI that Australians expect."

Tech Council of Australia CEO Kate Pounder said the government's proposal struck a good balance between enabling innovation and ensuring AI was developed safely.

Ms Pounder said Australia must also look beyond regulation, towards ensuring the workforce is skilled for AI, research is funded and digital literacy in the community is improved.

An expert advisory committee will be established to guide the development of mandatory rules for high-risk AI, as the government consults on details to prepare legislation.

The government remains open to whether to amend existing laws or introduce an EU-style "AI Act".

The government's response noted other jurisdictions were moving to ban some of the highest-risk technologies, such as real-time facial recognition used in law enforcement, but did not comment on whether Australia would ultimately follow that path.

It also identified that "frontier" AI models such as ChatGPT, which are vastly more powerful than previous generations of AI, may require targeted attention, since they are developing at a speed and scale that could outpace existing legislative frameworks.