OpenAI and Google DeepMind Employees Warn of AI Industry Risks in Open Letter

The Guardian

SKIPPED

Details

Date Published
4 June 2024
Priority Score
4
Australian
No
Created
10 Mar 2025, 10:27 pm

Authors (1)

Description

Current and former workers sign letter warning of lack of safety oversight and calling for more protections for whistleblowers

Summary

An open letter from current and former employees of OpenAI and Google DeepMind highlights the lack of safety oversight in the AI industry and calls for stronger whistleblower protections. The letter underscores the secrecy of AI companies and their weak obligations to share information regarding AI capabilities and risks with governments and civil society. This statement adds to growing concerns about AI systems exacerbating social harms and calls for transparency and accountability in AI development. It emphasizes the need for robust governance frameworks to mitigate these risks, aligning with global discourse on protecting society from potential catastrophic AI consequences.

Body

Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees. Photograph: Wachiwit/Alamy

A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a "right to warn about artificial intelligence", is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic.

"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the letter states. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

OpenAI defended its practices in a statement, saying that it had avenues such as a tipline to report issues at the company and that it did not release new technology until there were appropriate safeguards. Google did not immediately respond to a request for comment.

"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world," an OpenAI spokesperson said.

Concern over the potential harms of artificial intelligence has existed for decades, but the AI boom of recent years has intensified those fears and left regulators scrambling to catch up with technological advancements. While AI companies have publicly stated their commitment to safely developing the technology, researchers and employees have warned about a lack of oversight as AI tools exacerbate existing social harms or create entirely new ones.

The letter from current and former AI company employees, which was first reported by the New York Times, calls for increased protections for workers at advanced AI companies who decide to voice safety concerns. It asks for a commitment to four principles around transparency and accountability, including a provision that companies will not force employees to sign non-disparagement agreements that prohibit airing risk-related AI issues, and a mechanism for employees to anonymously share concerns with board members.

"So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," the letter states. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues."

Companies such as OpenAI have also pursued aggressive tactics to prevent employees from speaking freely about their work, with Vox reporting last week that OpenAI made employees who leave the company sign extremely restrictive non-disparagement and non-disclosure documents or lose all their vested equity. Sam Altman, OpenAI's CEO, apologized following the report, saying that he would change off-boarding procedures.

The letter comes after two top OpenAI employees, co-founder Ilya Sutskever and key safety researcher Jan Leike, resigned from the company last month. After his departure, Leike alleged that OpenAI had abandoned a culture of safety in favor of "shiny products".

The open letter on Tuesday echoed some of Leike's statement, saying that companies did not display any obligation to be transparent about their operations.