Google Reports Scale of Complaints About AI Deepfake Terrorism Content to Australian Regulator

iTnews


Details

Date Published
5 Mar 2025
Priority Score
4
Australian
Yes
Created
11 Mar 2025, 04:01 pm

Authors (1)

Description

Google's Gemini AI used to create deepfake material.

Summary

Google has reported to Australian regulators that its AI software was used to create deepfake content, including terrorism-related material, receiving over 250 complaints globally. The disclosure highlights the potential misuse of AI tools such as Google's Gemini and raises concerns about the regulatory frameworks needed to manage and mitigate such risks. While the article underscores the role of AI regulation in preventing harmful uses, it also points to gaps in current governance structures that must be addressed to curtail potential security threats. The information is particularly relevant to Australia's policy development in AI safety and governance.

Body

Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material. The tech giant also said it had received dozens of user reports warning that its AI program, Gemini, ...