AI Disinformation: Lessons from the UK's Election

The Strategist


Details

Date Published
16 Aug 2024

Summary

The article examines the impact of AI-generated disinformation during the UK's 2024 election, revealing a lower-than-expected incidence of such content going viral, although notable spikes in online harassment of targeted individuals were observed. It highlights the erosion of trust in democratic processes due to confusion about AI-generated content and stresses the risks posed by generative AI tools in political contexts. By comparing these findings against similar electoral events in other countries, the analysis underscores the need for vigilance and enhanced measures to safeguard elections. The article also addresses potential implications for Australia's forthcoming federal elections, emphasizing the need for collaborative efforts to combat disinformation.

Body

The year of elections was also feared to be the year that deepfakes would be weaponised to manipulate election results or undermine trust in democracy. The record-breaking 2024 figure of about 4 billion voters eligible to go to the polls across more than 60 countries coincided with the full-fledged arrival and widespread uptake of multimodal generative artificial intelligence (AI), which enables almost anyone to make fake images, videos and sound.

Have these fears been realised? Our centre has analysed the incidence of AI-generated disinformation around the UK election held on July 4 and found reasons for some reassurance, but also grounds for concern over long-term trends eroding democracy that these threats exacerbate.

In contrast to fears of a tsunami of AI fakes targeting political candidates, the UK saw only a handful of examples of such content going viral during the campaign period. While there's no evidence these examples swayed any large number of votes, we did see spikes in online harassment against the people targeted by the fakes. We also observed confusion among audiences over whether the content was authentic.

These early signals point to longer-term trends that would damage the democratic system itself, such as online harassment creating a 'chilling' effect on the willingness of political candidates to participate in future elections, and an erosion of trust in the online information space as audiences become increasingly unsure about which content is AI-generated and therefore which sources can be trusted. Similar findings on the impact of generative AI misuse in 18 other elections since January 2023 are reported in a recent CETaS briefing paper.

There has of course been a sensible case for heightened vigilance this year.
From India to the UK, and from France to the US, the outcomes of many of 2024's elections have had, or will have, enormous geopolitical implications, giving malicious actors strong incentives to interfere.

The capability that generative AI gives users to create highly realistic content at scale using simple keyboard prompts has enhanced the disruptive powers of sophisticated state actors. But it has also dramatically lowered the barriers to access, such that even individual members of the public can pose risks to the integrity of democratic processes, including elections.

The latter threat was underscored by comments from Australia's Director-General of Security, Mike Burgess, last week, when he helped announce the lifting of the country's terrorism threat level. The basis for the increase was in part, Burgess said, that people with violent intent were 'motivated by a diversity of grievances and personal narratives' and were 'interacting in ways we have not seen before'. As a result, the risk of mis- and disinformation influencing election outcomes is much more serious.

Looking at the UK general election, however, generative AI turned out to play a lesser role than traditional automated threats. For instance, several investigations into election-related content on online platforms found hallmarks of bot accounts seeking to sow division over controversial campaign issues such as immigration. Some had possible links to Russia, and pushed pro-Kremlin narratives about the war in Ukraine.
While these bot activities did include a few instances of AI-generated election material being circulated, the majority used a well-established tactic known as 'astroturfing', in which many automated accounts are used to increase perceived popular support for a particular policy stance or political candidate by spamming thousands of fake comments on relevant social media posts.

Alongside these bot incidents, the UK was targeted by a fake news operation with strong connections to a Russian-affiliated disinformation network called Doppelganger. Known as 'CopyCop', the operation involved the spreading of fictitious articles about the war in Ukraine, to confuse the UK public and reduce support for military aid. As part of CopyCop, real news stories were pasted into AI chatbots and then rewritten to align them with the network's strategic aims. However, many had prompts left in, which betrayed obvious signs of AI editing and therefore failed to attract much engagement. That said, some of these sources were picked up by Russian media influencers and spread across their channels to tens of thousands of users. Often, the real sources of the articles were concealed, a tactic called 'information laundering', in an effort to trick users into assuming the content originated from a credible news outlet.

While these disinformation activities can be connected to hostile foreign states, most viral misleading AI content in the UK election came from members of the public. This content included deepfakes that implicated political candidates in controversial statements they never made. Interestingly, many users behind the content claimed they were doing it for satirical or 'trolling' purposes. Others may have pushed the content to increase support for their political party or because they were disillusioned with conventional political campaigns.
This range of motives across different users highlights the new sources of risk and the expanded threat landscape that stem from such wide access to generative AI systems.

Taken together, the most prominent disinformation problems during the UK election did not arise from novel AI technology, but from longstanding issues tied to social media platforms, including the role of influencer accounts and recommender algorithms.

As we look ahead to the US election in November, it is vital that these platforms co-ordinate with other sectors to invest in measures to protect users. These measures include red-teaming exercises, requiring clear labels on AI-generated political adverts, and engaging with fact-checking organisations to detect malicious content before it goes viral.

And with Australia facing its own federal election in the next nine months, continued scrutiny of the risks and the malicious perpetrators, and of emerging measures to combat them, is also vital to the country's interests.

This article is part of a short series The Strategist is running in the lead-up to ASPI's Sydney Dialogue on September 2 and 3. The event will cover key topics in critical, emerging and cyber technologies, including disinformation, electoral interference, artificial intelligence, hybrid warfare, clean technologies and more.