The AI Disruption and the Promise of the Humanities
Details
- Date Published
- 13 May 2026
- Priority Score
- 2
- Australian
- Yes
- Created
- 13 May 2026, 04:00 am
Description
We need the humanities to address the puzzles of human flourishing in an AI age. We can’t predict the future academy, but better futures could come from humanities that are not only for all humans but from all humans.
Summary
This analysis explores how generative AI disrupts traditional academic structures and the existential value of the humanities in an automated age. It argues that while LLMs threaten the current extractive business model of universities, they simultaneously offer a tool to democratize knowledge by breaking down linguistic barriers for non-anglophone scholars. The text highlights a growing 'epistemic smearing' where the inability to distinguish human from AI output risks reinforcing institutional biases against marginalized students. Ultimately, the author suggests that navigating future AI risks requires a humanities framework that centers on human flourishing rather than merely defending legacy academic prestige.
Body
The humanities are in a dire state. Students wonder why they should be paying money for knowledge that Big AI now offers as a free sample. You can ask the free version of Claude for its interpretation of Hamlet's soliloquy and quiz it indefinitely for clarification. Eventually an increasingly geriatric humanities academy will call time on itself. Librarians can then gleefully cancel subscriptions to all our overpriced journals.

But could it be that this crisis reveals the humanities as a much-needed thing? Bad news for a fundamentally extractive academic business model cannot bring an end to the humanities. We need the humanities to address the puzzles of human flourishing during an AI age. We can't predict the details of the future academy. But better futures could come from humanities that are not only for all humans but from all humans.

Disruption that won't bring AI unicorns

AI, as humanities faculties currently experience it, is disruption. It has brought a torrent of undetectable cheating, both by students seeking to minimise the effort of procuring job-ready qualifications and by academics in search of the publications required for advancement. This has led employers to doubt the value of the expensive educations sold by universities.

One response focuses on the AI technologies that are the proximal causes of our miseries. Humanists lack the technical skills to infiltrate and sabotage data centres. But we can punish the students and colleagues we see as the machines' quislings.

You might think that with all this disruption, there must be a billion-dollar unicorn in the humanities somewhere. Where are the Googles and Facebooks of Philosophy and the Classics? But the green shoots of the humanities won't come in the form of billion-dollar start-ups.
The hope that AI could enable universities to charge more for humanities qualifications assumes too neat an alignment of a disruptive technology with current business models. There is no future in AIs that grade students and write our submissions to academic journals. These imagined futures for the humanities are as stillborn as counterfactual pasts in which Borders and Blockbuster add digital divisions to their existing business models, crushing the upstarts Amazon and Netflix. There are limits to what governments and citizens will pay for a superior understanding of Plato.

Humanities for all humans and from all humans

The disturbance of AI could prompt us to fully realise the promise of the humanities. We have always known that the humanities are for all humans. But institutionally empowered wisdom has typically not come from all humans.

After the shock of ChatGPT's unheralded release, I spoke to humanities academics from China and Brazil. Their first thought was not about the potential for AI to abet cheating. Instead, they saw in the chatbot a response to the tyranny of English in academic publishing and teaching. They had bitter memories of contributions to peer-reviewed journals rejected not on the grounds of bad ideas but because of bad English. They saw in ChatGPT a tool that would permit their ideas to pass in an anglophone academy.

This perspective views AI as merely the latest shock to humanities that have long failed to meet the legitimate expectations of all humans.

Many of my colleagues have taken an opposing approach. They support the existing academic business model. In doing so, they defend teaching methods that they know, from their own experience, work. They didn't become world-acknowledged Hegel scholars by interrogating ChatGPT. It seems absurd to trade in proven methods, replacing them with whatever teaching-philosophy-by-AI turns out to be.

But embracing the future of the humanities necessarily means openness to the unbidden.
Since there are no time travellers to quiz, we need to invite all the humans in and find out what happens. I am confident that they won't suggest waiting for a chatbot to tell them the truth about being human.

AI cheating and entrenched bias in the academy

The protection of time-honoured methods during times of rapid change inadvertently empowers underacknowledged biases in the academy. How can I determine whether a competent but uninspired commentary on Descartes was written by a bored human scholar or by ChatGPT?

I remember the challenge of detecting cheating after the launch of Wikipedia in 2001. There were some clear cases: word-for-word duplicates that, student pleading notwithstanding, could not be dismissed as chance occurrences. In the arms race between cheats and cheat detectors, students soon learned to lightly edit their Wikipedia cut-and-pastes. We can see AI cheating as the next move in this game. Businesses have expensively come to the aid of universities, enjoying the money that comes from selling to both sides in the AI war.

After a day of grading student work, I can say with confidence that I have read a lot of AI. But I cannot say with certainty of any given student submission that it is the product of AI.

Defenders of the traditional academy face what might be called the problem of epistemic smearing. Strong evidence of AI cheating is spread across the student population in a way that resists localisation, yet accusations of cheating must be directed at individuals. Police detectives can't walk into a courtroom calling for a defendant's imprisonment on the grounds that there is obviously a lot of murder about. They must make cases against specific individuals for specific crimes.

Here is the academic crime in the making.
It is a crime against the promise of humanities from all humans. Our humanity leaves us subject to bias by design. We trust the familiar and suspect the foreign. In our quest to find and punish AI cheats, we cannot help but fall back on longstanding prejudices. Who among the sea of diverse faces in Philosophy 101 could be a cheat? Anyone could be. Under pressure for qualifications that might grant entry to a stressed labour market, many surely are. We can't prosecute them all. So, why not just the academy's usual suspects? That signals our seriousness about cheating.

We should pivot to a view that is curious about the judgement of the future. Henry VIII strongly believed that Anne Boleyn had grievously offended, but he didn't seem very curious about how the future would judge his actions. How will a future academy that has more fully realised a universal humanities judge our current zeal to label some students, but not others, as cheats?

Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand. He is the author of How to be Human in the Digital Economy and Dialogues on Human Enhancement, and co-author (with Stuart Whatley and Dan Weijers) of How to Think about Progress: A Skeptic's Guide to Technology.

Posted Wed 13 May 2026 at 12:59pm, updated Wed 13 May 2026 at 1:01pm