OAIC Creates Guides to Ensure Privacy Laws for AI Projects
ARN
Details
- Date Published
- 21 Oct 2024
- Priority Score
- 4
- Australian
- Yes
- Created
- 8 Mar 2025, 01:04 pm
Summary
The Office of the Australian Information Commissioner (OAIC) has released two guides to assist businesses in complying with Australian privacy laws as they pertain to artificial intelligence. These guides emphasize the importance of 'privacy by design,' encouraging businesses to conduct privacy impact assessments and ensure transparent usage notifications. Furthermore, the guides provide developers with principles for managing personal information used in AI models, advocating for robust governance to build community trust. This initiative is significant within the context of Australia's efforts to align AI development with strong privacy safeguards, addressing global concerns about the ethical use of AI with personal data.
Body
The Office of the Australian Information Commissioner (OAIC) has created two guides to help businesses navigate how Australian privacy law applies to artificial intelligence (AI), setting out the regulator’s expectations.
“AI products should not be used simply because they are available,” said OAIC Commissioner Carly Kind.
The first guide outlines how businesses can comply with their privacy obligations when using commercially available AI products and helps them select an appropriate product.
According to the guide, organisations considering the use of AI products should take a "privacy by design" approach, which includes conducting a privacy impact assessment.
Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
“They should establish policies and procedures for the use of AI systems to facilitate transparency and ensure good privacy governance,” the guide stated.
The second guide, meanwhile, sets out privacy procedures to help developers who use personal information to train generative AI models.
“How businesses should be approaching AI and what good AI governance looks like is one of the top issues of interest and challenge for industry right now,” said Kind.
According to the guide, developers using large volumes of information to train generative AI models should consider whether that information includes personal information (including inferred, incorrect or artificially generated information produced by AI models), particularly where the information comes from an unclear source and where it is about an identified or reasonably identifiable individual.
“Robust privacy governance and safeguards are essential for businesses to gain advantage from AI and build trust and confidence in the community,” Kind said.
These guides align with the OAIC’s focus on promoting privacy when it comes to emerging technologies and digital initiatives.
“Australians are increasingly concerned about the use of their personal information by AI, particularly to train generative AI products,” stated Kind.
While the guides address the current situation, an important focus remains on how AI privacy protections can be strengthened.
“With developments in technology continuing to evolve and challenge our right to control our personal information, the time for privacy reform is now,” said Kind. “In particular, the introduction of a positive obligation on businesses to ensure personal information handling is fair and reasonable would help to ensure uses of AI pass the pub test.”