Australian Federal Police Experiment with AI Chatbots to Navigate Complex Rules
The West Australian
Details
- Date Published: 24 Feb 2024
- Priority Score: 3
- Australian: Yes
- Created: 8 Mar 2025, 01:04 pm
Description
The Australian Federal Police is experimenting with using artificial intelligence chatbots to help officers navigate its often complex governance rules.
Summary
The Australian Federal Police (AFP) is running an experimental trial of artificial intelligence (AI) chatbots to assist officers in navigating complex governance rules. The initiative explores the potential of AI to streamline and clarify the decision-making processes surrounding sensitive tasks such as executing search warrants and conducting politically sensitive investigations. The project is part of a broader response to the rapid evolution of AI technologies and aims to build confidence in using new tools within non-operational environments. While the trial's outcome could guide future investments and strategies, the use of large language models in this context underscores the challenges and potential risks associated with integrating AI into sensitive policing frameworks.
Body
The Australian Federal Police is experimenting with using artificial intelligence chatbots to help officers navigate its often complex governance rules. This may include offering answers on how to make decisions around the execution of search warrants or what constitutes a politically sensitive investigation.

A small AFP artificial intelligence team is halfway through a year-long trial of the emerging technology, The West Australian can reveal. But it plans to trash the “large language model” at the end of the experiment.

Government departments have been grappling with the rapid rise of AI and tools that use large language models, such as ChatGPT. Many have banned them outright, while others allow staff access for uses such as developing graphics for internal newsletters, writing social media posts or generating ideas for networking events, as The West revealed in January.

The AFP is actively testing how it can add a chatbot-style tool to the systems used to help its staff. The experiment began in July and runs for a year under the auspices of the AFP’s innovation fund.

“The AFP is using a commercially available LLM (large language model) for the experiment, which will occur in an isolated environment,” a spokesperson told The West. “Conducting these types of safe experiments on non-operational data is important in building our members’ confidence in the use of new technology.”

It is using only publicly available information, such as the myriad of national guidelines, with the aim of helping AFP staff more easily find the information they need in particular situations and increasing the usability of those governance instruments. The force has found it increasingly challenging to navigate the huge volume of rules and guidelines for staff, as well as the complexity of how they interact with each other and with different pieces of legislation.
For example, the AFP has guidelines on topics ranging from the purchase and use of police vehicles and the handling of evidence, to integrity reporting and conflicts of interest, to how to decide what is a “sensitive investigation” requiring additional oversight.

The rules around sensitive investigations — those that involve or affect politicians, media, prominent Australians or parliaments — have come under public scrutiny in recent years after raids on journalists and allegations of sexual assault involving political staffers. Sensitive investigations require briefings to senior AFP commanders and often the relevant minister.

The AI team is testing whether a chatbot could be given all these guidelines and then asked questions of the type that police might run into during their ordinary work. During the testing, the team is examining how accurate the returned answers are, to see whether it is a viable way of using artificial intelligence.

The plan is to decommission the large language model when the experiment comes to an end in June, but to use the outcomes and insights from the tests to assess whether the technology would be useful for the AFP in the longer term.

“The objective of this LLM experiment is to explore and validate the feasibility of our proposed ideas and solutions,” the spokesperson said. “This will inform our future capability planning and shape our future investment strategies and decisions.”