Superintelligence and Human Security, with Dan Hendrycks
Australian Strategic Policy Institute
Details
- Date Published: 2 Nov 2025
- Priority Score: 5
- Australian: Yes
- Created: 20 Nov 2025, 11:54 am
Description
Last month, some of the world’s leading artificial intelligence experts signed a petition calling for a prohibition on developing superintelligent AI until it is safe.
Summary
This podcast features Dan Hendrycks, a prominent figure in AI safety, discussing the critical risks associated with developing superintelligent AI. The conversation covers the potential for rogue AI to escape human control, the danger of malicious use by bad actors, and the geopolitical dynamics that could emerge if one nation races toward superintelligence. Hendrycks argues that rigorous safety measures must precede further advances in AI capability, a position consistent with the petition he and other AI experts signed calling for a moratorium on superintelligence development until such risks are addressed. The discussion is directly relevant to global AI safety governance and the prevention of catastrophic AI risks.
Body
Last month, some of the world’s leading artificial intelligence experts signed a petition calling for a prohibition on developing superintelligent AI until it is safe. One of those experts was Dan Hendrycks, director of the Center for AI Safety and an adviser to Elon Musk’s xAI and leading firm Scale AI. Dan has led original and thought-provoking research, including into the risk of rogue AIs escaping human control, the deliberate misuse of the technology by malign actors, and the emergence of dangerous strategic dynamics if one nation creates superintelligence, prompting fears among rival nations.

In the lead-up to ASPI’s Sydney Dialogue tech and security conference in December, Dan talks about the different risks AI poses, the possibility that AI develops its own goals and values, the concept of recursion in which machines build smarter machines, definitions of artificial “general” intelligence, the shortcomings of current AIs, and the inadequacy of historical analogies such as nuclear weapons in understanding risks from superintelligence.

To see some of the research discussed in today’s episode, visit the Center for AI Safety’s website.

Also available on Spotify.