Google Erases Promise Not to Use AI Technology for Weapons or Surveillance
9News
Details
- Date Published
- 6 Feb 2025
- Priority Score
- 4
- Australian
- Yes
- Created
- 10 Mar 2025, 10:27 pm
Description
<p>Google&#x27;s updated, public AI ethics policy removes its promise that it won&#x27;t use the technology to pursue applications for weapons and surveillance.</p>
Summary
Google has revised its AI ethics policy, removing prior commitments to avoid developing AI for weapons or surveillance, a significant shift from its earlier stance. This change reflects the rapid evolution in AI capabilities and the pressures of global competition in AI leadership. The removal of these ethical commitments may increase the risk of AI technologies being used in ways that could exacerbate security threats and surveillance practices, raising concerns about the potential misuse of AI. The developments underline the need for robust international governance frameworks to ensure AI is aligned with values like human rights and security.
Body
Google's updated, public AI ethics policy removes its promise that it won't use the technology to pursue applications for weapons and surveillance.

In a previous version of the principles, seen by CNN on the Internet Archive's Wayback Machine, the company listed applications it would not pursue. One such category was weapons or other technology intended to injure people.

(Image caption: Signage at the Google headquarters in Mountain View, California, October 2024. David Paul Morris/Bloomberg/Getty Images via CNN)

Another was technology used to surveil beyond international norms. That language is gone from the updated principles page.

Since OpenAI launched the chatbot ChatGPT in 2022, the artificial intelligence race has advanced at a dizzying pace. While AI has boomed in use, legislation and regulation on transparency and ethics in AI have yet to catch up – and now Google appears to have loosened its self-imposed restrictions.

In a blog post on Tuesday, senior vice president of research, labs, technology & society James Manyika and Google DeepMind head Demis Hassabis said that AI frameworks published by democratic countries have deepened Google's "understanding of AI's potential and risks."

"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," the blog post said.

It continued: "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

Google first published its AI Principles in 2018, years before the technology became almost ubiquitous. The update is a sharp reversal in values from those original published principles.

In 2018, Google dropped a $16 billion ($US10 billion) bid for a cloud computing Pentagon contract, saying at the time "we couldn't be assured that it would align with our AI Principles." More than 4,000 employees had signed a petition that year demanding "a clear policy stating that neither Google nor its contractors will ever build warfare technology," and about a dozen employees resigned in protest.

CNN has reached out to Google for comment.