NSW Police to Establish AI Centre

iTnews

Details

Date Published
22 Feb 2026

Description

Will lead risk reviews in the first instance.

Summary

The New South Wales Police Force is in the early stages of establishing an AI centre to oversee the adoption and governance of AI technologies. The effort ties into the NSW government's revised artificial intelligence assessment framework, which is aimed at ensuring AI systems are developed and used safely and ethically. By focusing on alignment with this framework, the centre intends to manage AI risk and ensure responsible AI integration within the police force. The move reflects the increasing role of AI in public safety, highlighting both opportunities for improved operations and concerns about biases in AI policing tools.

Body

NSW Police Force is in the early stages of setting up an artificial intelligence centre that will oversee all aspects of the technology's adoption. The centre, to be based in Parramatta within the Force's technology and communication services command, is recruiting its inaugural manager. The manager has a broad remit, from defining and enforcing governance and risk thresholds internally to vendor oversight, according to the position description.

Governance, in particular, is emphasised, with the centre's initial focus trained on alignment with the NSW government's revised artificial intelligence assessment framework, or AIAF. The AIAF is intended to ensure AI systems used by state agencies are "designed, developed, procured, and used in a safe, ethical, and responsible manner", according to the state's Office of AI. The framework was recently updated, shifting away from having agencies self-assess the risk levels associated with their AI use to having risk automatically designated as low, medium or high, based on responses to a set of questions.

The position description notes the manager will "lead the review of AIAF risk assessments, collaborating with privacy, security, ethics, legal, records, data and AI experts, to drive future ready, safe and responsible AI usage", and "oversee record keeping for decisions related to managing the risk of AI solutions to ensure the results of applying the AIAF support and assure risk mitigation."

More broadly, the centre is intended to produce a capability uplift across the Force in AI risk management. It also has a stated responsibility to "oversee AI system development" and to "oversee vendors and third-party AI solutions" that NSW Police adopts.
NSW Police CTO Suzy Mann wrote in a LinkedIn post that the manager role, and the establishment of the centre more broadly, "reinforces the importance of disciplined, transparent and accountable approaches to AI in a mission‑critical public‑sector environment."

Given the centre's early-stage status, iTnews was unable to get an accurate picture of how it is to be resourced, in terms of intended headcount or whether it has a specific budget allocation. A spokesperson told iTnews that the manager will "lead the development of NSW Police Force policy and strategy with regard to AI", including "the identification of positions and structures required to allow the [Force] to adapt to the rapidly evolving AI environment." The spokesperson added that AI governance and management responsibilities "currently sit with NSW Police Force executive leadership positions", which could indicate these are in line to be transferred to the centre.

On its website, NSW Police discloses a broad-spectrum interest in how AI might assist with its operations. This includes "ways of integrating generative artificial intelligence technologies into policing", such as for "suspect sketching, the automation of documentation and paperwork, the automatic processing of high volumes of data for the identification of relevant laws, procedures, and precedents etc." It also includes "novel ways in which artificial intelligence can be used to prevent/reduce crime."

It is the latter, in particular, that has drawn scrutiny from digital rights groups and government committees in recent years, as it is often unclear how AI tools work, or the extent to which in-built biases could affect policing work.