Study Reveals Record-Low Public Trust in Artificial Intelligence as New Scorecard Puts AI on the Ballot
ABN Newswire
Details
- Date Published
- 29 Apr 2025
- Priority Score
- 4
- Australian
- Yes
- Created
- 22 June 2025, 07:12 pm
Description
Australians' trust in AI has hit a record low, with widespread fear about its misuse fueling urgent calls for government action as the federal election approaches, according to new research and a political scorecard released today.
Summary
The article highlights a significant decline in public trust in artificial intelligence among Australians, as reported by a new survey conducted by the University of Melbourne and KPMG. The research shows that only a third of Australians trust AI systems, with fear about AI misuse and election manipulation prevalent. In response, the 'Australians for AI Safety Scorecard' analyzes political parties' support for an AI Safety Institute and an AI Act as the federal election nears, identifying party stances on developing robust regulatory frameworks. The discussion underscores the urgent need for government action to establish independent testing and regulation to enhance trust and manage AI's potential risks, positioning Australian AI safety policy within a global context of comparable measures in countries like Japan, Korea, and the UK.
Body
Australians for AI Safety Scorecard: See who supports expert-recommended AI policies.

Survey shows Australians' trust in AI at an all-time low; election scorecard compares parties on backing an AI Safety Institute and an AI Act.

CANBERRA, AUSTRALIA, April 30, 2025 /EINPresswire.com/ -- Australians' trust in AI has hit a record low, with widespread fear about its misuse fueling urgent calls for government action as the federal election approaches, according to new research and a political scorecard released today.

A new global survey by the University of Melbourne and KPMG confirms Australians' confidence in artificial intelligence has sunk to its lowest level on record:
- Only a third of Australians trust AI systems.
- Half of Australians have personally experienced or seen harm from AI.
- Nearly 3 in 5 fear that elections are being manipulated by AI-generated content or bots.
- More than three-quarters want stronger regulation, and fewer than a third think existing safeguards are adequate.
- Nine in ten support specific laws to curb AI-driven misinformation.

Australians for AI Safety has released its 2025 Federal Election AI Safety Scorecard, comparing every major party's stance on two expert-endorsed AI policies:
- An Australian AI Safety Institute: a well-funded, independent body that can test frontier models, research risks and advise government.
- An Australian AI Act: legislation that places mandatory guardrails and clear liability on developers and deployers of high-risk and general-purpose AI.

The scorecard shows that only the Australian Greens, Animal Justice Party, Indigenous-Aboriginal Party of Australia and Trumpet of Patriots fully back both expert-endorsed policies. Senator David Pocock and other independents have also endorsed them.
The Libertarian Party generally opposed the policies, referring to them as "government schemes".

"The scorecard shows who is prepared to match rapid AI progress with equally rapid safeguards," said Greg Sadler, Australians for AI Safety spokesperson. "Australians tell us they want leaders with a real vision for the future. If we expect that kind of vision, we need to vote with future-focused issues like AI in mind."

The Coalition's response to the scorecard highlighted perceived government inaction: "We need to be alive to the risks associated with this technology... [T]he Albanese Labor Government has completely failed to take decisive action or provide clarity and policy direction. Holding roundtables, commissioning reports and announcing advisory bodies is not the dynamic action that is required on such a critical issue."

However, the Coalition's response did not outline a clear position on the expert-recommended policies for an AI Safety Institute or an AI Act.

"This is exactly what policrastination looks like: one party accusing the other of inaction while not proposing action of its own," said Taylor Hawkins from Foundations for Tomorrow. "Australians are tired of politicians delaying and dodging hard decisions and putting important policies in the too-hard basket."

"As an AI governance researcher, I know it's critical that government gets these AI policies right to safeguard the extraordinary benefits of highly capable, general-purpose AI," said Alexander Saeri, AI governance researcher at The University of Queensland and director of the MIT AI Risk Index. "This new research is consistent with what we've known since at least 2020: Australians want stronger safeguards now."

"Seeing advanced AI models demonstrate deceptive capabilities shortly after my child was born was a wake-up call. It's clear government isn't taking this seriously," said consultant-turned-AI-researcher Michael Kerrison.
"I left my job to focus on AI governance because this is serious, and it's happening now. As a new parent, I find the lack of safeguards unacceptable. The major parties need to step up; government must take swift action to give Australian families a fighting chance."

Australians for AI Safety argues that robust safety regulation allowed the aviation sector to flourish. Similarly, AI innovation will only thrive once independent testing and clear statutory duties build public trust. Comparable institutes are already operating in Japan, Korea, the United Kingdom and Canada. Alongside the EU AI Act, they set a clear benchmark for Australia. Australia committed to creating an AI Safety Institute but has yet to do so.

The full party and candidate results are available at www.australiansforaisafety.com.au/scorecard. The scorecard allows voters to dive into the particular responses of major parties and expert evaluations of how they stack up.

About Australians for AI Safety
Australians for AI Safety is a civil-society initiative coordinated by the charity Good Ancestors. It promotes evidence-based policy that allows Australians to realise AI's benefits while guarding against catastrophic risks.

Mr Gregory Sadler
Good Ancestors Policy
+61 401 534 879
email us here

Visit us on social media: LinkedIn

Legal Disclaimer: EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.