The Race for AI: A Leading Expert Warns of Hindenburg-Style Disasters

The Guardian


Details

Date Published
17 Feb 2026
Priority Score
4
Australian
Unknown
Created
17 Feb 2026, 08:45 pm

Authors (1)

Description

Prof Michael Wooldridge says scenario such as deadly self-driving car update or AI hack could destroy global interest

Summary

Professor Michael Wooldridge from Oxford University warns of potential catastrophic failures in AI deployment due to overwhelming commercial pressures. These pressures could lead to poorly tested AI systems, resulting in significant incidents, such as a fatal update in self-driving technologies or widespread AI hacks. Wooldridge highlights the dissonance between AI's promised capabilities and current performance, emphasizing the risks of treating AI as human-like. This article is relevant for understanding the priority of safety measures in global AI governance and the importance of responsible AI development to prevent existential threats.

Body

[Image: The wreckage of the Hindenburg airship following the explosion that killed 36 people. Photograph: AP]

The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned.

Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products' capabilities and potential flaws are fully understood.

The surge in AI chatbots with guardrails that are easily bypassed showed how commercial incentives were prioritised over more cautious development and safety testing, he said.

"It's the classic technology scenario," he said. "You've got a technology that's very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable."

Wooldridge, who will deliver the Royal Society's Michael Faraday prize lecture on Wednesday evening, titled "This is not the AI we were promised", said a Hindenburg moment was "very plausible" as companies rushed to deploy more advanced AI tools.

The Hindenburg, a 245-metre airship that made round trips across the Atlantic, was preparing to land in New Jersey in 1937 when it burst into flames, killing 36 crew, passengers and ground staff.
The inferno was caused by a spark that ignited the 200,000 cubic metres of hydrogen that kept the airship aloft.

"The Hindenburg disaster destroyed global interest in airships; it was a dead technology from that point on, and a similar moment is a real risk for AI," Wooldridge said. Because AI is embedded in so many systems, a major incident could strike almost any sector.

[Image: Michael Wooldridge. Photograph: Steven May/Alamy Stock Photo/Alamy Live News]

The scenarios Wooldridge imagines include a deadly software update for self-driving cars, an AI-powered hack that grounds global airlines, or a Barings bank-style collapse of a major company, triggered by AI doing something stupid. "These are very, very plausible scenarios," he said. "There are all sorts of ways AI could very publicly go wrong."

Despite the concerns, Wooldridge said he did not intend to attack modern AI. His starting point is the gap between what researchers expected and what has emerged. Many experts anticipated AI that computed solutions to problems and provided answers that were sound and complete. "Contemporary AI is neither sound nor complete: it's very, very approximate," he said.

This arises because large language models, which underpin today's AI chatbots, rattle out answers by predicting the next word, or part of a word, based on probability distributions learned in training. It leads to AIs with jagged capabilities: incredibly effective at some tasks, yet terrible at others.

The problem, Wooldridge said, was that AI chatbots failed in unpredictable ways and had no idea when they were wrong, but were designed to provide confident answers regardless. When delivered in human-like and sycophantic responses, the answers could easily mislead people, he added. The risk is that people start treating AIs as if they were human.
In a 2025 survey by the Center for Democracy and Technology, nearly a third of students reported that they or a friend had had a romantic relationship with an AI.

"Companies want to present AIs in a very human-like way, but I think that is a very dangerous path to take," Wooldridge said. "We need to understand that these are just glorified spreadsheets, they are tools and nothing more than that."

Wooldridge sees positives in the kind of AI depicted in the early years of Star Trek. In one 1968 episode, The Day of the Dove, Mr Spock quizzes the Enterprise's computer only to be told in a distinctly non-human voice that it has insufficient data to answer. "That's not what we get. We get an overconfident AI that says: yes, here's the answer," he said. "Maybe we need AIs to talk to us in the voice of the Star Trek computer. You would never believe it was a human being."
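The next-word prediction Wooldridge describes can be illustrated with a toy sketch. Everything here is invented for illustration (the contexts, candidate words, and probabilities are made up, and real LLMs operate over tens of thousands of subword tokens with distributions learned from data), but the mechanism — sampling each next token from a probability distribution conditioned on the context — is the one the article refers to:

```python
import random

# Toy "model": each context maps to an invented probability
# distribution over candidate next words. A real LLM learns such
# distributions over subword tokens during training.
TOY_MODEL = {
    "the airship": {"landed": 0.5, "exploded": 0.3, "flew": 0.2},
    "the answer": {"is": 0.7, "was": 0.2, "remains": 0.1},
}

def next_token(context: str, rng: random.Random) -> str:
    """Sample the next word from the model's distribution for this context."""
    dist = TOY_MODEL[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [next_token("the airship", rng) for _ in range(1000)]
# Over many samples, observed frequencies approximate the distribution:
print(samples.count("landed") / 1000)
```

Note that the sampler always returns *some* word with confidence; nothing in the mechanism signals "insufficient data", which is the gap Wooldridge draws attention to.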