Tech Companies Advised to Label and 'Watermark' AI-Generated Content
ABC News
Details
- Date Published
- 29 Nov 2025
- Priority Score
- 4
- Australian
- Yes
- Created
- 1 Dec 2025, 10:59 am
Description
With AI deepfakes causing confusion and harm online, the federal government tells tech developers that AI content should be clearly labelled, ahead of a National AI Plan due to be released.
Summary
The article reports on the Australian government's new guidance for AI developers to label and watermark AI-generated content in order to prevent deception and harm associated with deepfakes. This move is part of a broader National AI Plan, set to introduce mandatory guardrails to mitigate AI risks while encouraging economic growth. The guidance is aimed at increasing transparency and trust in AI content, particularly in preventing its misuse for fraud, misinformation, and other harmful activities. The article highlights the creation of an AI Safety Institute and potential legislative developments, underscoring Australia's proactive approach towards AI safety and governance.
Body
By political reporter Jake Evans
Topic: AI
Sun 30 Nov 2025 at 6:42pm

The government is advising AI developers to make clear their content is AI-generated. (Reuters)

In short:
- AI developers have been issued guidance that content should be clearly identifiable as AI-generated using labels and watermarks.
- The government has been urged to introduce laws that would prohibit "deepfakes" and make AI content clearly identifiable.

What's next?
The federal government is due to release the National AI Plan, a comprehensive response to the rise and spread of generative AI.

Artificial intelligence developers have been told they should "watermark" content so that people can clearly tell it is AI-generated, as the government seeks to address concerns the technology is being used to mislead and harm Australians.

There is no legal requirement to identify content as AI-generated, which has allowed generated content to be mistaken for real, a practice known as deepfaking.

In guidance to developers and content creators, the federal government has advised that AI content should be "clearly identifiable" by including labels that note content is AI-generated, or by embedding information to trace the origins of content, a process known as watermarking, which is more difficult to manipulate or remove than labelling.

It notes that transparency tools are particularly important where AI-generated content could be used to "adversely affect" people, and that they become more important the more heavily AI has been involved in creating the content.

"AI is here to stay. By being transparent about when and how it is used, we can ensure the community benefits from innovation without sacrificing trust," Industry Minister Tim Ayres said in a statement.

"That's why the Albanese government is urging businesses to adopt this guidance. It's about building trust, protecting integrity, and giving Australians confidence in the content they consume."

Some companies, including Google, already watermark AI content.

Tim Ayres says AI content is here to stay, and developers should be transparent about it. (ABC News: Matt Roberts)

The rapid spread of generative AI has fuelled fears that the technology could be used for fraud, misinformation, blackmail, or to exploit people by creating convincing fake content that misrepresents what a person has said or done.

The eSafety Commission has warned that deepfake image-based abuse is happening at least once a week in Australian schools.

On Monday, independent senator David Pocock introduced a private senator's bill to prohibit digitally altered or artificially generated content depicting an individual's face or voice without their consent.

Senator Pocock said the federal government had been too slow and had failed to comprehensively respond since beginning its review into responsible AI more than two years ago.

National AI Plan due to be released

The new AI guidance has been given to industry ahead of the government releasing a National AI Plan, which is the culmination of several years of consultation and is expected to introduce "mandatory guardrails" to protect against the worst impacts of AI.

The plan will also respond to ideas raised at the government's productivity roundtable in August, where AI was a central focus of discussion on how to boost the economy and lift wages.

The Productivity Commission warned against mandatory guardrails being introduced at that roundtable, saying they could strangle a $116 billion opportunity for the economy, and urging that any legislative response be paused until gaps in the law were properly identified.

But while the government seeks to strike a balance between the risks of AI and a potential economic boom, its recent measures have focused on addressing deep community concerns about safety.

Senator Ayres last week announced the government would create an AI Safety Institute, which could monitor and respond to "AI-related risks" and help to build trust in the technology.

Former industry minister Ed Husic, who began the consultations on a federal response to the growth of AI, has called for a dedicated AI Act that could provide a framework to flexibly respond as the technology develops.