Government Should Regulate 'High-Risk' Uses of AI, Inquiry Finds
iTnews
Details
- Date Published
- 3 Dec 2024
- Priority Score
- 4
- Australian
- Yes
- Created
- 10 Mar 2025, 10:27 pm
Description
AI developers should ‘appropriately licence’ and pay for copyrighted datasets, committee finds.
Summary
A Senate inquiry in Australia has recommended extensive regulation for 'high-risk' uses of artificial intelligence, suggesting new legislation to address these risks. The committee emphasized the need for transparency in AI models using copyrighted data and highlighted specific oversight for advanced models such as large language models, including ChatGPT. This initiative underscores a broader legislative effort to manage AI's rapid expansion and potential risks, particularly focusing on privacy rights and automated decision-making in government services. The recommendations reflect significant developments in Australian AI governance aimed at mitigating catastrophic risks associated with AI technology deployment.
Body
The federal government has been asked to regulate “high-risk” uses of artificial intelligence as part of a raft of measures recommended by a senate inquiry.
The report from the Select Committee on Adopting Artificial Intelligence (AI) follows a consultation into the “opportunities and impacts for Australia arising out of the uptake of AI technologies”.
The committee primarily called for a “new whole-of-economy, dedicated legislation to regulate high-risk uses of AI” that is “supplemented by a non-exhaustive list of explicitly defined high-risk AI uses”.
These uses must include so-called “general-purpose” AI models, such as large language models (LLMs) like ChatGPT.
Among the committee’s 13 recommendations was a requirement for AI developers to be “transparent” about the use of copyrighted works in their training datasets, and to ensure that the use of such “works is appropriately licensed and paid for”.
The committee also called for the government to implement recommendations made last year in a review of the Privacy Act, in particular “an individual’s right...to request meaningful information about how automated decisions are made.”
Meanwhile, as states such as Queensland look to implement their own automated decision-making (ADM) guardrails, the committee called for a federal “legal framework covering ADM in government services”.
This, the report said, should be informed by the Attorney-General's Department's ongoing consultation into the use of ADM, which follows the government agreeing to 38 recommendations from a previous consultation last year.
Lastly, the report said the government should take a “coordinated, holistic approach to managing the growth of AI infrastructure in Australia”.
First launched in March 2024, the inquiry began consulting with members of the public and industry in May, with an original reporting deadline of September 19.
However, the committee’s reporting date was pushed to November 26 to allow it to “consider” the impact of generative AI on the federal election in the United States, held on November 5.