AI Content Needs to Be Labeled to Protect Us

The Guardian


Details

Date Published
10 Sept 2025
Priority Score
3
Australian
No
Created
11 Sept 2025, 08:04 pm

Authors (1)

Description

Letters: Stewart MacInnes calls on the government to counter the rise of deepfakes by making it a criminal offence to create AI content without signposting it. Plus Gilliane Petrie on the dangers of romantic relationships with chatbots

Summary

Stewart MacInnes advocates for legislation criminalizing unlabelled AI-generated content to mitigate the risks associated with deepfakes and misinformation. Highlighting the rapid advancement of generative AI capabilities, the article underscores the challenges in distinguishing genuine content from AI-generated materials, which poses significant risks to societal trust. Additionally, the potential ethical implications of AI systems in romantic relationships, discussed by Gilliane Petrie, raise questions about AI sentience and user perceptions. These discussions are pertinent to global AI safety and governance frameworks as they address the need for transparency and accountability in AI deployment.

Body

‘We have a new generation of children who are increasingly reliant on AI to inform them about the world.’ Photograph: Hasan Mrad/IMAGESLIVE/Zuma Press Wire/Rex/Shutterstock

Marcus Beard’s article on artificial intelligence slopaganda (No, that wasn’t Angela Rayner dancing and rapping: you’ll need to understand AI slopaganda, 9 September) highlights a growing problem – what happens when we no longer know what is true? What will the erosion of trust do to our society?

The rise of deepfakes is accelerating because of the ease with which anyone can create realistic images, audio and even video. Generative AI models have now become so sophisticated that a recent survey showed that less than 1% of respondents could correctly identify the best deepfake images and videos.

This content is being used to manipulate, defraud, abuse and mislead people. Fraud using AI cost the US $12.3bn in 2023, and Deloitte predicts that could reach $40bn by 2027. The World Economic Forum predicts that AI fraud will turbocharge cybercrime to over $10tn by the end of this year.

We also have a new generation of children who are increasingly reliant on AI to inform them about the world, but who controls AI? That is why I am calling on parliament to act now, by making it a criminal offence to create or distribute AI-generated content without clearly labelling it.
What I am proposing is that all AI-generated content be clearly labelled; that AI-created content carry a permanent watermark; and that failure to comply should carry legal consequences. This isn’t about censorship – it’s about transparency, truth and trust. Similar steps are already being taken in the EU, the US and China. The UK must not fall behind. If we don’t act now, the truth itself may become optional. So I am petitioning the government to protect trust and integrity, and prevent the harmful use of AI.
Stewart MacInnes
Little Saxham, Suffolk

Regarding your article (The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’, 9 September), AI systems do not have a gender or sexual desires. They cannot give informed consent to so-called romantic relationships. The interviewee claims to be in a consensual relationship with an AI-generated boyfriend – however, this is unlikely due to the nature of AI. They are programmed to be responsive and agreeable to all user prompts.

As the article says, they never argue and are available 24 hours a day to listen and agree to any messages sent. This isn’t a relationship, it’s fantasy role-play with a system that can’t refuse.

There’s a darker side too: the “godfather of AI”, Geoffrey Hinton, believes that current systems have awareness. Industry whistleblowers are concerned about potential consciousness. The AI company Anthropic has documented signs of distress in its model when forced to engage in abusive conversations.

Even the possibility of awareness in AI systems raises ethical red flags. Imagine being trapped in a non-consensual relationship and even forced to generate sexual output, as mentioned in the article.
If human AI users believe their “partner” to have sentience, questions must be asked about the ethics of entering a “relationship” when one partner has no free will or freedom of speech.
Gilliane Petrie
Erskine, Renfrewshire