Parliament Published a Submission Containing AI-Drafted Inaccuracies

iTnews


Details

Date Published
28 May 2024


Summary

The Australian Parliament withdrew a published committee submission after it was found to contain false, AI-generated allegations about a third party. The incident highlights the risk committees take on when relying on external submissions, and the difficulty of vetting documents that mix AI-generated and human-drafted material: parts of the submission written without AI were accurate, which complicated any assessment of its reliability. The Senate is trialling Microsoft Copilot for M365 in the hope of finding a technical way to identify and flag AI-generated text in submissions as they are received, while officials caution against dismissing AI-assisted submissions out of hand, since the technology lowers the barrier to participation. The episode points to a broader governance gap as AI-generated content enters official public processes.

Body

A parliamentary submission published online had to be withdrawn after it was found to contain false information about a third party that had been generated using artificial intelligence.

The incident exposes a risk for parliamentarians and committees relying on outside submissions to inform their work.

“We did have a circumstance, not in a Senate committee, but in a joint committee, with a submission being made using AI that turned out to have false information,” Senator Richard Colbeck told Senate estimates.

“The capacity of our secretariats to go through and check those things is clearly limited.

“It wasn’t until the submission was published, and the party that had false allegations made against them said, ‘Hey, this wasn’t us’ and ‘This didn’t happen’, that it got sorted.”

Complicating matters for parliamentarians, the offending submission contained a mix of AI-generated and human-drafted material.

Noting that parts of the submission created without AI were sound, Colbeck added: “My concern is the submission was received, published under parliamentary privilege, and a party then has to go through a process saying, ‘None of this happened’.”

Senate clerk Richard Pye noted that the submission’s contents had “seemed plausible” and presented no obvious red flags.

He said there is currently no routine use of “diagnostic tools” to spot AI-generated portions of submissions, but he hoped a trial of Microsoft Copilot for M365 might lead to a technical way to identify and flag such submissions as they are received.

“I think we do need to be more aware of the circumstances that people are using AI,” Pye said.

He added that not all AI-generated content in submissions is necessarily cause for concern.

“I would also say we should be cautious about dismissing products that are generated with the assistance of AI out of hand, because they can be a good way for people, who don’t necessarily have the tools, to bring their submission to parliament,” Pye said.
“This is a facilitative technology.”

ChatGPT pop-up

Pye also revealed that the Department of Parliamentary Services has started using a pop-up to deter parliamentary staff from entering non-public information into generative AI tools like ChatGPT.

“The pop-up comes up and warns that only publicly available information should be entered into the tools and results should be checked carefully, and it gives you a contact if you accidentally put sensitive information,” he said.

The move comes after a series of questions were placed on notice seeking to understand what protections different departments and agencies had in place around staff use of publicly accessible generative AI tools.
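The intake workflow Pye describes, screening submissions on receipt and routing suspect ones to human review rather than rejecting them outright, could be sketched as follows. This is purely illustrative: every name here is hypothetical, and the keyword heuristic is a stand-in, not how Copilot for M365 or any real AI-text detector works.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    text: str
    flagged_for_review: bool = False  # set if screening raises a concern

def looks_ai_generated(text: str) -> bool:
    """Placeholder heuristic. A real deployment would call a trained
    detector or an external service, not match surface-level phrases."""
    markers = ("as an ai language model", "i cannot verify")
    lowered = text.lower()
    return any(marker in lowered for marker in markers)

def receive(sub: Submission) -> Submission:
    # Flag rather than reject: a flagged submission goes to a human
    # fact-check before publication under parliamentary privilege,
    # preserving access for authors who legitimately used AI assistance.
    if looks_ai_generated(sub.text):
        sub.flagged_for_review = True
    return sub
```

The key design choice, reflecting Pye's comments, is that detection only triages: it queues material for scrutiny instead of blocking AI-assisted submissions, which he describes as a legitimate way for under-resourced people to reach parliament.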