AI Deepfakes: A Symptom of Declining Trust in Politics
Crikey
Details
- Date Published
- 22 Feb 2024
- Priority Score
- 3
- Australian
- Yes
- Created
- 8 Mar 2025, 02:41 pm
Description
Ask not why people are inclined to believe fake content, but instead why they distrust those telling them it's real.
Summary
The article explores the intersection of AI-generated deepfake technology and declining trust in political institutions. It highlights the potential for AI-generated content to confuse and mislead, especially during elections, as seen in Slovakia and Argentina. The emergence of generative AI tools like OpenAI's Sora underscores the rapid technological advancements that outpace regulatory measures, complicating efforts to verify digital information authenticity. This phenomenon is considered symptomatic of a broader mistrust in traditional truth-bearing institutions rather than a standalone issue, suggesting an urgent need for better governance and policy responses to manage AI's impact on society and democracy.
Body
This year is a landmark one for democracy, with more than 4 billion people in more than 60 countries eligible to vote in elections. There’s a grim poetry in this historic milestone coming at a point when our information ecology is at its most fractious.

At the end of last week, American artificial intelligence company OpenAI announced Sora, its text-to-video model. Sora, when it is publicly available, will allow users to generate lifelike video from text prompts — with example outputs on the website ranging from sweeping drone shots of a patch of rocky Californian coastline to a woman walking down a densely crowded street in Tokyo. These videos may not stand up to close scrutiny, but amid the endless firehose of content on today’s internet, it’s hard to imagine them receiving that level of examination.

Sora is the latest in a legion of new generative-AI tools that have emerged from recent advances in neural network development, many spearheaded by researchers at OpenAI. These tools, now also offered by major big tech firms like Meta, Google and Microsoft, leverage vast reservoirs of computing power to enable millions of users to frictionlessly conjure text, video and audio from the digital ether. The outputs of these new platforms have so far spread far faster than our ability to develop and implement systems to verify their artificial nature.

It’s a problem of scale: by bringing the marginal cost of producing content to near zero, the volume of that content will increase exponentially.

Most generative-AI deceptions exist on a harmless continuum from amusing to annoying. In March last year, a photo went briefly viral that depicted Pope Francis decked out in an alarmingly stylish Balenciaga puffer jacket in lieu of his usual papal vestments.
It was an AI fake, generated with the latest release from generative-AI startup Midjourney.

But there are good reasons to be concerned about political impacts on states other than the Vatican. In 2023’s elections in Slovakia and Argentina, for example, deepfaked audio spread on social media depicting political candidates and government figures saying things they did not say. The actual lasting impact of these generative-AI interventions is hard to quantify, but it demonstrates an obvious point: if you make it vastly easier to fake images, audio and video, then bad actors will avail themselves of the opportunity.

Generative AI also poisons the well when it comes to things that did occur. Politicians now have a readymade excuse when confronted with video or audio evidence of misdeeds: it’s a deepfake. Last year, a Taiwanese lawmaker suggested a grainy video that purportedly depicted him engaged in an extramarital affair was AI-generated. In July, a politician from India’s ruling Bharatiya Janata Party mounted a similar defence when audio of him accusing his own political faction of corruption leaked online. Despite reporting, the actual truth of the matter in both of these examples remains unresolved.

There’s an argument to be made that generative AI is a symptom of a broader collapse in our traditionally truth-bearing institutions, rather than some new and unique problem for democracy. The past decade has seen numerous destabilising political events, with misinformation and disinformation blamed as the culprit. In 2016, Brexit and the election of Donald Trump led to an international discourse about fake news, social media “filter bubbles” and state disinformation campaigns.
Populist anger at COVID-19 lockdowns and vaccinations was similarly blamed on online misinformation, with institutions like the World Health Organization mounting public information campaigns against the pithily named “infodemic”.

It may well be the case that this is a slow death for the existing media and political establishment — or “regime”, as the new torchbearers of free speech would say — under a technological onslaught that began in earnest when Google started indexing and ranking the web for public consumption. We might ask not why people are inclined to believe fake images and videos that cross their internet feeds, but instead why they distrust anyone telling them otherwise.