Time-Consuming and Costly: Divisions Emerge as Husic Weighs AI Laws
The Age
Details
- Date Published
- 16 Oct 2024
- Priority Score
- 4
- Australian
- Yes
- Created
- 8 Mar 2025, 12:05 pm
Description
There’s overwhelming support for regulatory action on AI in high-risk settings, but it’s less clear which path the government should take to get there.
Summary
The article highlights the Australian government's push to regulate AI, following overwhelming support for such measures in high-risk settings. However, divisions exist on how to implement these regulations, as some stakeholders, like human rights organizations, advocate for an EU-style AI Act to mitigate human rights risks, while tech companies suggest a more integrated approach. The discussion underscores the urgent need for AI governance to balance innovation with safety, reflective of global challenges in AI policy. Notably, the deliberations could signal significant shifts in Australia's approach to AI regulation, contributing to broader global dialogues on AI safety and governance.
Body
By David Swan
October 16, 2024, 5.00am

The federal government has received overwhelming support to rein in the use of AI in high-risk settings, but divisions have emerged on the approach it should take, according to Industry and Science Minister Ed Husic.

The government has received 300 submissions to its proposals paper on introducing mandatory AI guardrails, according to Husic, who on Wednesday will make 256 of those submissions public.

The majority of submissions reflect a sense of urgency to regulate AI, including from the arts and journalists' union, the MEAA, and the Australian Human Rights Commission, which said mandatory guardrails are urgently required to mitigate the human rights risks associated with AI.

Human rights organisations, media companies, peak groups and many academic institutions support the establishment of a standalone EU-style AI Act. However, tech companies like Atlassian and Australia's big four banks are not in favour of such a mechanism.

The MEAA said in its submission that generative AI is already being used to steal the work of Australia's artists, journalists and creative workers. "The regulatory response must ensure the strongest possible protections for workers, citizens and consumers, must be clear and simple and enforceable," it said.

"The regulatory response must be comprehensive and work seamlessly with other economy-wide regimes such as workplace, copyright and privacy laws.

"On this basis, we endorse an economy-wide AI Act. This option is the only one that will deliver on the ambition to deliver safe and responsible AI in Australia in line with the criteria mentioned above."

On the other hand, the Technology Council of Australia, which represents the likes of Atlassian, Canva, Culture Amp and Telstra, said it does not support the creation of a new overarching AI Act or a single regulator for AI.

"[A new regulator] would not only be time-consuming and costly, it is also likely to discourage the important but necessary broader uplift required in capability-building for addressing digital regulation issues, including on matters beyond AI," the TCA said in its submission.

"It is also likely to result in siloed expertise and capability across government entities, and limit our overall capacity to adapt and consider innovative ways to evolve our regulatory architecture and coordination mechanisms."

[Photo: Damian Kassabgi is the new CEO of the Tech Council of Australia. Credit: Danielle Smith]

The EU's AI Act comes into effect in stages over the next two years and prohibits some types of AI considered to present an 'unacceptable risk', such as emotion-recognition systems in the workplace or in educational settings. AI systems designed to purposefully manipulate or mislead will also be prohibited, as will AI systems that exploit vulnerabilities such as age, disability or socioeconomic situation.

The Australian government is yet to settle on its approach. Husic is appearing on stage alongside flying cars and robots at SXSW in Sydney on Wednesday, kicking off 'Aussie AI month', where he will encourage executives to get hands-on with implementing AI in their businesses, regardless of the government's regulatory approach.

[Photo: Canva CEO Melanie Perkins on stage at SXSW this week. Credit: Oscar Colman]

"We know AI can play a huge role in making the world a better place, but it's crucial that Australian businesses are equipped to develop and use the technology safely and responsibly," he said.

"We have 650 Australian AI companies and we're in the top four countries in the OECD for our capacity to attract and retain highly skilled and educated workers."

Visitors at the SXSW event can receive free specialist AI training, get an AI health check, use AI to write a symphony or create their own dream city.

"SXSW Sydney is one of my favourite events of the year," Husic added.

David Swan is the technology editor for The Age and The Sydney Morning Herald. He was previously technology editor for The Australian newspaper.