Byte-Sized Diplomacy: The Search for Safe AI
Lowy Institute
Details
- Date Published
- 11 Sept 2024
- Priority Score
- 3
- Australian
- Yes
- Created
- 8 Mar 2025, 01:04 pm
Summary
The article explores Australia's challenges and opportunities in promoting AI safety amid a tense international landscape. It highlights Australia's lag in adopting AI safety standards compared to jurisdictions like the EU and California, which have implemented advanced regulatory frameworks. Despite Australia’s public concern about AI risks, such as misinformation, the country lacks a dedicated AI Safety Institute, unlike counterparts in the United States and the UK. Establishing such an institute could enhance Australia's ability to address complex AI risks and participate in global safety collaborations. The article underscores the importance of Australia's active engagement in shaping global AI safety norms to support a secure and equitable technological future.
Body
Artificial Intelligence is already here, diffused through the economy in different ways. So, Australia needs to accept a few conflicted realities.

We have a deeply worried population, with what are – let's face it – some pretty legitimate concerns about AI and big tech companies. We also have an economy that will be left behind if we don't enable Australian companies and entrepreneurs to create and adopt AI. We have people who need to be prepared with AI (and other) skills for future jobs and contributions to society. Any government investment needs to address these realities.

The local AI safety conversation is also heating up. It hasn't yet won the same attention as the government flagging limits on social media access, but AI safety will be part of a big global debate. Last week saw the release of voluntary AI safety standards and a proposal paper on mandatory guardrails for AI in high-risk settings. Despite calls for safety in the uses of AI technology, Australia is seen as lagging on the issue globally, while other jurisdictions are moving rapidly.

The European Union, for example, has adopted the AI Act 2024, a risk-based regulatory framework governing the application and development of AI systems. At a state level, Californian legislators recently passed a suite of AI bills, including a controversial AI safety bill known as SB 1047. This requires developers of advanced AI models to adopt and follow safety procedures – including shutdown protocols – to reduce the risk that their models are deployed in a way that causes "critical harm".
The bill has faced vocal opposition, and the Californian governor must decide whether to veto the legislation or sign it into law by the end of the month.

Establishing AI safety institutes is also seen as another crucial step in managing advanced AI complexity through technically informed, globally coordinated action. Australia doesn't presently have one. But the United States and United Kingdom do, and last month they announced a partnership. AI safety institutes have also been established in Japan, Canada and Singapore. Some tech companies are also on board with an agreement on AI safety research, testing and evaluation.

Australia is not alone, however. France, Germany, Italy and South Korea have likewise made a renewed commitment to safe and responsible AI and support for an international network of AI safety institutes, but are yet to set one up themselves – each country diverges in approach, with competing incentives and varying structures. The network of AI safety institutes is growing, albeit slowly. Some within India have expressed interest, too.

Public concern about AI risks isn't abating. Australians are more concerned about the future of AI than other nations, with 64 per cent of Australians saying AI makes them nervous. Eighty per cent of Australians think managing AI risk is a global priority. This concern is most clear in relation to misinformation and disinformation, but appears across the board: Australians are more uncomfortable with AI producing news than most other countries, and with its use of private information in business. Views on the topic are by no means homogeneous. Numerous voices are concerned that Australia is not investing enough in AI.
It's a big topic, and Canberra has seen a stream of global AI experts offering views. Kent Walker, Google's president of global affairs, recently spoke with Australian parliamentarians about AI. Alondra Nelson, the former Director of the White House Office of Science and Technology Policy when the 2022 Blueprint for an AI Bill of Rights was released, also visited Canberra and Sydney.

I interviewed Signal CEO Meredith Whittaker about her concern around AI power concentration, consumer rights and extractive business models. I also spoke with Connor Leahy, CEO of Conjecture – an AI safety company – about the power of technology and AI safety. Leahy said that Australia should look to take advantage of the burgeoning network of AI Safety Institutes: "Australia has a lot to offer the AI global safety discussion, with a long history of standing up to tech companies, a historical role in global diplomacy from nuclear to pandemics as well as strong public institutions and legislatures."

Professor Anton van den Hengel wrote a few days ago that the AI economy is global and we can't opt out. As he put it, it's hard to imagine a future without AI, but easy to imagine one without Australian AI. Perhaps an Australian AI Safety Institute will help identify crosscutting issues that existing regulatory agencies can't effectively respond to, help collaborate globally, work diplomatically, build research networks, and help Australians build trust – and communicate their key concerns to policymakers.

Time is not on our side. Australia needs to engage in international efforts to help shape the technology future that we want to live in. We should continue to use our diplomatic experience and technical expertise to support these efforts and inspire those in our region and around the globe to unite for an AI and technology ecosystem that makes the world safer, more secure and more equitable.