Alphabet (GOOGL.US) has reportedly joined a growing list of technology companies in signing an agreement with the U.S. Department of Defense that permits the use of its artificial intelligence (AI) models for classified work. According to sources familiar with the matter, the agreement allows the Defense Department to use Google's AI models for "any lawful government purpose." The move places Google alongside OpenAI and Elon Musk's xAI, both of which have also signed deals to provide AI models for classified applications. Classified networks are used for a wide range of highly sensitive tasks, including mission planning and weapons targeting.
The reported agreement requires Google to help the government adjust the security settings and filters of its AI models based on specific requirements. The contract includes clauses stating that "the parties agree that AI systems shall not be used for domestic mass surveillance or autonomous weapons (including target selection) without appropriate human supervision and control." However, the agreement also stipulates that Google does not have the authority to control or veto decisions related to lawful government operations.
A Google spokesperson stated that the company supports government agencies in both classified and unclassified projects. The spokesperson affirmed that Google adheres to the industry consensus that AI should not be used for domestic mass surveillance or autonomous weapons lacking proper human oversight. "We believe that providing API access to commercial models, including those running on Google infrastructure, with industry-standard practices and terms, is a responsible way to support national security," the spokesperson said. The U.S. Department of Defense stated it has no intention of using AI for mass surveillance of American citizens or developing weapons that operate without human involvement, but it seeks to allow for "any lawful use" of the technology.
The Defense Department has been pushing for greater access to AI capabilities from leading labs. In 2025, it signed contracts with several major AI labs, including Anthropic, OpenAI, and Google, each valued at up to $200 million. The department has been advocating for its "All Lawful Use" procurement standard, which demands that military agencies have the right to use a supplier's AI capabilities to the full extent permitted by existing U.S. laws and military policies, without being constrained by ethical restrictions imposed by tech companies.
As this standard was enforced, Google's key competitor, Anthropic, came under significant pressure from the U.S. government. Anthropic refused to compromise, insisting on retaining contractual restrictions that prohibit the use of its Claude model for domestic mass surveillance and for fully autonomous weapons without human supervision. Consequently, the U.S. government labeled Anthropic a "supply chain risk" and gave it a six-month transition period for complete removal from federal networks. President Donald Trump reportedly ordered all federal agencies to immediately cease using Claude, denouncing Anthropic as a "radical left-wing" company attempting to impose its values on the military. On March 9, Anthropic filed a lawsuit against the Department of Defense and other federal agencies over the "supply chain risk" designation. On March 26, Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted Anthropic a preliminary injunction, temporarily blocking the federal government's order banning its AI technology. Recent indications suggest a potential thaw in relations, with Trump stating in a recent interview that a deal allowing Anthropic's models to be used by the Defense Department might be possible.
If the reported agreement is confirmed, it signifies a fundamental strategic shift for Google in the long-standing balance between national security demands and ethical guidelines. Facing the entrenched dominance of Microsoft (MSFT.US) Azure and Amazon.com (AMZN.US) AWS in the government classified cloud market, Google urgently needs such defense-grade AI integration projects to achieve a strategic breakthrough.
The classified AI agreement with the Pentagon appears to be the culmination of a years-long, profound revision of Google's internal AI ethics principles. In 2018, following significant employee protest, Google withdrew from "Project Maven," which used AI to analyze drone footage, and subsequently established clear AI principles pledging not to develop technologies for weapons or for surveillance that could cause harm. As global AI competition intensified, however, this stance changed fundamentally. In February 2025, Google quietly revised its AI principles, removing a section titled "Applications We Will Not Pursue" and effectively rescinding its commitment against using AI for weapons and surveillance. During an internal all-hands meeting in March 2026, Google's Vice President of Global Affairs, Tom Lue, signaled a clear pivot toward national security work, assuring concerned employees that these contracts aligned with the updated AI principles. Despite this, internal dissent persists: over 100 Google AI employees recently sent a joint letter to Chief Scientist Jeff Dean urging the company to prohibit the military from using Gemini for domestic surveillance or autonomous weapons, highlighting the tension between corporate commercial objectives and employees' ethical convictions.
Anthropic's withdrawal from the Pentagon's initial framework created a significant strategic vacuum, which Google and OpenAI have moved to fill. Confronted with the military's firm stance, Google opted to accept the overarching "All Lawful Use" framework, while attempting to negotiate the inclusion of specific contractual clauses aimed at preventing the unsupervised misuse of AI for mass surveillance and autonomous weapons. Analysts suggest this strategy attempts to meet the Pentagon's baseline demand for technological autonomy while superficially maintaining the company's ethical image, reflecting the concession of corporate boundaries under geopolitical competitive pressures. The agreement indicates that Google is seeking to solidify its position as a core supplier within the emerging "defense-AI-industrial complex," a process where traditional corporate ethical boundaries are inevitably yielding to the absolute demands of national strategic interests.