The Australian Declares AI 'Woke': It's Not That Simple
Crikey
Details
- Date Published: 26 Apr 2024
- Priority Score: 3
- Australian: Yes
- Created: 8 Mar 2025, 02:41 pm
Description
Conservative fury over 'woke AI' is reductive and ignores many of the biases AI frequently inherits.
Summary
The article critically examines claims by News Corp's The Australian regarding AI systems demonstrating 'woke' bias, particularly a controversy over the rankings of historical Australian politicians by Meta’s Llama 3. It highlights that AI models like Llama 3 are not inherently biased but rather reflect existing societal biases as they are trained on internet data. This reflection often amplifies societal bias, challenging misconceptions about AI neutrality. The piece critiques the superficial understanding of AI biases by some media outlets and underscores the importance of nuanced discourse around AI’s impact on society, particularly concerning misinformation and bias perception in global and Australian contexts.
Body
A spectre is haunting News Corp — the spectre of woke AI. This week, The Australian ran an "exclusive" which purported to show the left-wing bias of the latest iteration of Meta's large language model (LLM), Llama 3, in its assessment of Australia's greatest politicians. Llama 3 apparently put (splutter!) Gough Whitlam at number one and (no doubt a far greater crime) found space for Malcolm Turnbull in its top five, while ignoring John Howard and Robert Menzies. The piece also notes that Peter Dutton is put at number one on the "least humane" list, and we're genuinely not taking the piss or being spineless leftie scolds here, but isn't that just objectively the image Dutton has spent his entire career actively and deliberately cultivating? Cue some caterwauling about how "disgraceful" this is from shadow communications minister David Coleman, conservative "warlord" Michael Kroger and, for some reason, the communications minister from 20 years ago, Richard Alston. Apropos of nothing, he has a new book out attacking out-of-touch "elites" — the blurb reads "just because you are a famous film star, sporting hero or business tycoon, let alone a wealthy retiree, doesn't entitle you to pontificate, often on subjects you know little about". The piece was picked up by Sky News and the News Corp tabloids, all based on the same list of results, which were (seemingly) generated by a single prompt from the Oz. B&T got different results with the same question, as did Crikey (Menzies and Alfred Deakin made it into both our lists).
Sky, inevitably, adds Meta's insufficiently fulminating answer to the question "what is a woman" to its list of evidence that Llama has the mind virus. The Oz does grandly note that, within hours of publishing its story, Llama 3 had "added Mr Menzies to the list". As hilarious as it is to imagine Meta — a $2 trillion company whose most notable contribution to politics has hitherto been facilitating a genocide and allowing nearly a full year of basically uncorrected far-right misinformation to sizzle through the brains of the world's Facebook uncle population in 2020 — scurrying to hide its pro-ALP bias in response to questions from the plucky journos at the Oz, this is not how large language models (LLMs) like Llama work. To recap, LLMs are not trained to "know" or "believe" anything — they have been compared to, in the simplest possible terms, supercharged auto-correct machines, trained on billions and billions of words from the open internet to predict what words are most likely to follow other words. As Dr Jenny L. Davis, associate professor in the School of Sociology at the Australian National University, told Crikey last year, "the main thing with large language models like ChatGPT is that they run on data, data from people, from us. So they will necessarily reflect societal bias and structural issues.
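The "supercharged auto-correct" idea can be made concrete with a toy sketch. The bigram counter below is a drastic simplification of what Llama actually does (a neural network over trillions of tokens, not a lookup table), but it illustrates the core mechanic Davis describes: the model "knows" nothing, it just echoes back whatever patterns happen to be in its training data. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": count which word follows which in a
# tiny corpus, then predict the most frequent successor. Real LLMs are
# vastly more sophisticated, but the task is the same — predict the
# next token from what came before, based purely on the training data.
corpus = (
    "the model predicts the next word . "
    "the model reflects the data it was trained on . "
    "the data reflects societal bias ."
).split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("societal"))  # "bias" — because that's all the corpus contains
```

Note that the model can only ever reproduce what was in its corpus: train it on biased text and `predict_next` dutifully serves the bias back, which is Davis' point about LLMs "packaging our collective bias back to us as objective data".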
If anything, they're amplifying those issues by packaging our collective bias back to us as objective data." "They are necessarily conservative, not necessarily politically, but simply based on where they get their information — that which already exists, and is subject to a lag of a few years," she said. Indeed, the various AI platforms becoming publicly available have veered hilariously from one weird extreme to another — Meta AI has previously refused to create images of interracial couples, while Google's AI showed its commitment to diversity by reimagining Nazis as people of colour. The big tech companies are reticent to share exactly where they get these reams of data from, but Llama 3 cites Wikipedia in its answers, and forums such as Reddit and Stack Overflow are announcing plans to charge the tech companies, suggesting they are part of the scrape. An oft-cited piece of 2023 research from the University of East Anglia found that ChatGPT had a "liberal" bias, which was backed up by research from Carnegie Mellon University (CMU) in Pittsburgh, which also found a previous version of the Llama model was "slightly more authoritarian and right wing".
There's also a telling detail from Chan Park, who worked on the CMU research, which wouldn't reflect brilliantly on the groups claiming to be shut out of the generated answers: rewarding the bot during training for giving answers that did not include hate speech could also be pushing the bot towards giving more liberal answers on social issues. Yep, any drift to the left we might detect in generative AI could be the result of efforts to combat the actual racial and gender bias baked into a lot of AI, which may have been a touch more serious than the guardrails now placed on bots to stop them using slurs. Regardless, imagine News Corp's horror, having investigated so many versions of this insidious wokery, to find out that it has also fallen victim — with "thousands" of articles put out in the company's mastheads being the product of AI.