
Google has made a significant change to its AI Principles, removing a section that explicitly listed areas where the company would not develop or deploy artificial intelligence. The updated document, published on Tuesday, no longer includes its prior commitments to refrain from using AI for weapons, surveillance, or other applications that could violate human rights.
This revision suggests that Google may be reconsidering its stance on previously restricted areas as competition in the AI industry intensifies.
The AI Principles were first introduced in 2018, outlining Google’s approach to AI development with a focus on ethics, fairness, and accountability. Over the years, the company has updated the document, but the four core restrictions remained unchanged—until now.
A comparison with an archived version of the document on the Wayback Machine reveals that Google has removed the section titled “Applications we will not pursue.” This section had explicitly stated that Google would not:
1. Develop AI technologies that cause or are likely to cause overall harm
2. Work on weapons or technologies that directly facilitate injury
3. Build surveillance technologies that violate international norms
4. Create AI systems that contravene human rights and international law
The removal of these commitments raises concerns about whether Google is now open to exploring AI applications in the defense, security, or surveillance sectors.
Following the update, Google DeepMind CEO Demis Hassabis and James Manyika, Google’s Senior Vice President for Technology and Society, published a blog post explaining the company’s revised AI strategy.
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” the post stated. It further emphasized that companies, governments, and organizations that share these values should collaborate to develop AI responsibly while also supporting national security.
While Google did not explicitly state that it will begin working on military or surveillance AI, the removal of restrictions signals a potential policy shift amid growing global competition in artificial intelligence.
Google’s update comes at a time when AI is increasingly being integrated into national security and defense strategies worldwide. The US, China, and Europe are all actively investing in AI-driven security and military applications, and Google’s latest move could indicate that it wants to stay competitive in this evolving landscape.
The revision also aligns with recent US government initiatives to encourage public-private partnerships in AI development, particularly in areas such as cybersecurity, autonomous systems, and intelligence analysis.
However, critics argue that Google’s decision to remove these ethical commitments reduces transparency and increases the risk of AI being used in ways that compromise privacy and human rights.
With this change, Google has left the door open for broader AI applications, but it remains to be seen whether the company will actively pursue defense contracts or national security projects.
The move also comes as Google faces increasing pressure from competitors such as OpenAI, Microsoft, DeepSeek, and Anthropic, all of which are pushing advancements in generative AI, automation, and AI-driven analytics.