Robodebt Dampening Public Servants' AI Enthusiasm

The Mandarin | Public sector news & government learning

Details

Date Published
3 June 2025

Description

Public servants are reflecting on past government tech failures when considering the risks and benefits of using AI for policy development.

Summary

The article highlights a report from UNSW Canberra's Public Service Research Group examining the cautious stance of Australian public servants towards AI following the Robodebt scandal. It emphasises both the opportunities and risks of generative AI in policy development, noting concerns about trust and bias. Though public servants recognise AI's efficiency benefits, they are wary of its application in sensitive areas without human oversight. This marks a significant moment in Australian AI governance, where historical tech failures are shaping how AI is integrated into the public sector and underscoring the importance of balancing innovation with public trust and oversight.

Body

Public servants are weighing the costs of using AI to speed through their work, according to a report by UNSW Canberra's Public Service Research Group.

Released yesterday, the report examines the current use of generative AI (genAI) in policy development. The authors interviewed senior public servants across 22 state, territory, and federal government agencies to gather perspectives across the public sector.

While some saw genAI as a valuable tool to increase efficiency, others thought the risks outweighed the potential benefits. Among the concerns expressed were a loss of trust in government, a lack of human touch in policy development, the environmental cost of running large language models, and potential biases in the datasets on which models are trained.

The report notes two things almost all agreed on: public servants won't be replaced by AI, and current uses are largely focused on administrative tasks that don't involve sensitive data or public interactions.

Co-author Helen Dickinson said the public service's historical challenges with technology were clearly at the forefront of many public servants' minds.

"Learnings from the massive failures identified in the robodebt scheme are influencing how senior public servants perceive advancements in the application of genAI," she said.

"Public servants are wary of anything that might compromise citizen trust and confidence in government, and so there is hesitance in allowing genAI to be used in areas like external-facing service delivery.

"As such, there is widespread agreement that the use of AI in policy work requires adequate human oversight."

This concurs with comments made earlier this week by Tim Ayres, Minister for Industry and Innovation and Minister for Science.

The risk for policy development

The report highlights that some public servants are already using AI in complex policy work. Applications including idea generation, information discovery, document summarisation, and the creation of agency-specific knowledge bases are already in use.

The report suggests agencies are relying on tools they perceive as "safer", like Copilot, and eschewing those they see as risky, like Claude and ChatGPT. But Copilot is not without its risks.

One participant spoke of the dangers of anthropomorphising genAI.

"The risk that… ChatGPT is kind of like turned into a buddy that you can have a conversation with… it sounds like a human, so maybe it behaves and thinks like a human. It's not a human, it's a machine."

The risk, according to the report, is that too much trust could lead to under-examination of the validity, legality, or reality of the machine's outputs.

Professor Dickinson said it was imperative that governments formalise the intended role of genAI in policy work.

"A statement regarding why governments are investing in genAI is critical to building understanding of its intended contribution to high-quality policy work," she said.

"Agencies must also ensure that human policy-crafting skills are maintained, as current AI tools can only provide content based on historical data, and some complex policy issues cannot be solved by what has been done before."

READ MORE: Ayres signals AI must drive national capability, not just profit through job cuts and offshoring