Back to Articles
First Companies Sign Up to AI Safety Standards on Eve of Seoul Summit

The Guardian

Details

Date Published
21 May 2024
Priority Score
4
Australian
No
Created
10 Mar 2025, 10:27 pm

Authors (1)

Description

Rishi Sunak says 16 international firms have committed, but standards have been criticised for lacking teeth

Summary

The article reports on the commitment of 16 international companies to voluntary AI safety standards, coinciding with the Seoul summit. Although this move is hailed as a step towards global standards for AI safety, criticisms arise over the voluntary nature of these standards, potentially leading to non-compliance. The initiative garners international participation, including Chinese and Emirati firms, highlighting global interest in AI governance. The commitments primarily emphasize transparency, cybersecurity, and societal risk research, but lack enforcement mechanisms to ensure adherence. This development is pertinent for global AI safety discussions, though its effectiveness in thwarting catastrophic risks remains under scrutiny.

Body

[Image caption: Sunak will also co-chair a virtual meeting of world leaders on ‘innovation and inclusivity’ in AI with the Korean president. Photograph: Lee Jin-man/AP]

The first 16 companies have signed up to voluntary artificial intelligence safety standards introduced at the Bletchley Park summit, Rishi Sunak has said on the eve of the follow-up event in Seoul.

The standards, however, have been criticised for lacking teeth, with signatories committing only to work toward information sharing, invest in cybersecurity and prioritise research into societal risks.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

Among the 16 are Zhipu.ai from China, and the Technology Innovation Institute from the United Arab Emirates. The presence of signatories from countries that have been less willing to bind national champions to safety regulation is a benefit of the lighter touch, the government says.

The UK’s technology secretary, Michelle Donelan, said the Seoul event “really does build on the work that we did at Bletchley and the ‘Bletchley effect’ that we created afterwards. It really had the ripple effect of moving AI and AI safety on to the agenda of many nations. We saw that with nations coming forward with plans to create their own AI safety institutes, for instance.

“And what we’ve achieved in Seoul is we’ve really broadened out the conversation. We’ve got a collection from across the globe, highlighting that this process is really galvanising companies, not just in certain countries but in all areas of the globe to really tackle this issue.”

The longer the codes remained voluntary, however, the greater the risk was that AI companies would simply ignore them, said Fran Bennett, the interim director of the Ada Lovelace Institute.

“People thinking and talking about safety and security, that’s all good stuff. So is securing commitments from companies in other nations, particularly China and the UAE. But companies determining what is safe and what is dangerous, and voluntarily choosing what to do about that, that’s problematic.

“It’s great to be thinking about safety and establishing norms, but now you need some teeth to it: you need regulation, and you need some institutions which are able to draw the line from the perspective of the people affected, not of the companies building the things.”

Bennett also criticised the lack of transparency for training data. Even under the safety standards, companies are free to keep the data they train their models on completely secret, despite the risks known to come with biased or incomplete sources.

Donelan argued that AI safety institutes such as the one in the UK have enough access to make data transparency unnecessary. “If the argument is that training data can present risks, then what the institute can do is go through the model and see if that model can itself present a risk,” she said. “That’s a million times more than what we had just over six months ago, when it was all down to the company.”

OpenAI, another of the signatories to the standards, said they represented “an important step toward promoting broader implementation of safety practices for advanced AI systems”.

Anna Makanju, the company’s vice-president for global affairs, said “the field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science”.

The presence of Chinese and Emirati signatories on the list is seen as vindicating Britain’s leadership in AI safety, the Guardian understands, because a US-led effort would have had little chance of being seen as neutral enough to attract support.

On Tuesday evening, Sunak co-chaired a closed virtual meeting of world and business leaders with the South Korean president, Yoon Suk Yeol. Attendees, including Kamala Harris, Emmanuel Macron, Meta’s Nick Clegg and Twitter owner Elon Musk, agreed on further co-operation to progress AI safety science, a Whitehall source said.

The full list of the companies to have signed up to the safety standards:

Amazon
Anthropic
Cohere
Google / Google DeepMind
G42
IBM
Inflection AI
Meta
Microsoft
Mistral AI
Naver
OpenAI
Samsung Electronics
Technology Innovation Institute
xAI
Zhipu.ai