Back to Articles
Governing AI in the Global Disorder

The Strategist


Details

Date Published
2 Apr 2024
Priority Score
4
Australian
Yes
Created
8 Mar 2025, 12:05 pm

Authors (1)

Description

It’s a truth universally acknowledged that finding consensus on anything in the international system is difficult at the best of times, let alone in this age of geopolitical fracture, ideological contest and ‘permacrisis.’ Yet the ...

Summary

The article highlights the diplomatic efforts by the United Nations General Assembly in passing a historic resolution on AI, aimed at developing 'safe, secure, and trustworthy' AI systems in accordance with global human rights frameworks. Despite the geopolitical challenges and fragmentation, initiatives like the UN resolution and the Bletchley Declaration demonstrate a growing international commitment to AI governance. Notably, the article mentions geopolitical dynamics, such as China's Global AI Governance Initiative, reflecting differing global approaches to AI regulation. This emphasizes the role of multilateral institutions like the UN in fostering consensus to mitigate the risks of AI fragmentation and promoting cooperative safety measures.

Body

It’s a truth universally acknowledged that finding consensus on anything in the international system is difficult at the best of times, let alone in this age of geopolitical fracture, ideological contest and ‘permacrisis.’

Yet the United Nations General Assembly took a historic step in March by unanimously adopting the world’s first-ever UN resolution on artificial intelligence. Proposed by the United States, and co-sponsored by more than 120 nations, including China, the resolution focused on AI safety and the development of ‘safe, secure, and trustworthy’ AI in line with the UN Charter and the Universal Declaration of Human Rights. The resolution reportedly took months of diplomacy by the US and, while not legally binding, represents a crucial first step toward fostering some kind of global cooperation on responsible AI development.

Indeed, in response to rapid recent advances, AI is at the top of the UN agenda. Last year, UN Secretary-General Antonio Guterres convened a new High-Level Advisory Board on AI to provide urgent recommendations on international AI governance. The board’s work will feed into the negotiations on the UN Pact for the Future and the accompanying Global Digital Compact, which will be announced at the UN’s Summit of the Future in September this year. Together these will set out the international community’s approach to challenges arising from AI and other digital technologies.

Last July, the UN Security Council also held its first formal meeting on AI to discuss its implications for international peace and security. Guterres has backed calls from some countries and tech figures to establish a global AI treaty or a new UN body to govern AI.
The Secretary-General has also encouraged nations to engage in multilateral processes around the military applications of AI and to agree on global frameworks for the governance of AI. This momentum builds on a number of UN processes and forums that have been considering how best to govern and regulate AI since as far back as 2013.

Yet multilateralism has been in crisis now for many years—and even more so as the world becomes dangerously unstable and increasingly fragmented. With AI increasingly affecting our economies, societies, communications and security, the debates on how to govern AI go to the heart of the ideological competition that is reshaping the global order. To get around the growing fragmentation among nations—coupled with the UN’s challenges in establishing quick and effective governance mechanisms at the best of times—there’s a rise in minilateral and other initiatives on AI as nations race to ensure rules on AI reflect their own values and interests.

Democracies are particularly keen to set the rules for AI. The UK’s AI Safety Summit in November was the first global initiative that brought together governments, leading AI companies, civil society groups and research experts to deliberate on the risks and potential benefits of AI. One of the summit’s noteworthy outcomes was the Bletchley Declaration, a joint statement endorsed by 28 countries, including the United States, the United Kingdom, India, the European Union and even China. The declaration affirmed AI developers’ responsibility for ensuring the safety and security of their systems, committed to international cooperation in AI safety research, and called for the establishment of common principles for AI development and deployment. Follow-up summits will be held in South Korea and France later this year.

This builds on other work that democracies are doing to get out in front and shape global AI governance.
G7 leaders released the International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers in October, marking the culmination of the G7 Hiroshima AI Process. The Quad released its own principles on AI in 2024. Meanwhile, the European Union’s AI Act—officially endorsed by the EU Parliament a few weeks ago—will establish the world’s first comprehensive framework for regulating AI development and use, focusing on risk assessment, human rights and transparency.

While democracies use these minilateral and multistakeholder initiatives to chart a course towards responsible and ethical AI governance, China is advancing its own vision of AI governance—one that prioritises government control over individual rights—through its Global AI Governance Initiative (GAIGI). Launched by President Xi Jinping last October—and still in its early stages—the GAIGI represents China’s effort to shape the global AI landscape in line with its own political and ideological interests. It shows an obvious intent to promote this system as an alternative to US or Western-supported AI governance frameworks.

Yet while Western countries and like-minded democracies are focused on writing the rules of the road for AI, China is also building the road itself by exporting Chinese-made AI ecosystems around the world. ASPI’s Mapping China’s Tech Giants research has shown how China’s Digital Silk Road has served as an important vehicle for exporting Chinese technology, standards and digital authoritarianism to other nations. The same is true of AI.
With Chinese AI technology dominating markets around the world, Chinese AI governance frameworks become the default on the ground. In a way, this highlights the challenges of establishing unified global AI governance frameworks in a fragmenting world.

With nations gravitating towards AI governance models that align with their existing political and social systems, we are likely to see an increasingly fragmented global AI landscape emerge, with different regions and blocs adhering to distinct rules and norms. The free and open internet is already under strain, and AI has the potential to turbocharge this fragmentation. This poses significant risks—hindering international cooperation, exacerbating existing geopolitical tensions and creating barriers to innovation—quite apart from the impact on human rights and freedoms in different parts of the world.

This is why, despite the UN’s inherent challenges, multilateral efforts such as last month’s General Assembly resolution to govern AI remain essential. The UN, with its inclusive platform that brings together diverse voices from governments, civil society, academia and the tech industry, provides a unique forum for global dialogue on AI governance. While the UN may never be able to mandate a single global AI governance framework, it can play a crucial role in setting minimum standards, fostering consensus on core principles and facilitating interoperability between different technological blocs, ensuring that AI is developed and deployed responsibly for the benefit of everyone.

This is more important than ever, and last month’s resolution is a good start.