Pentagon Imposes Critical Deadline on Anthropic, Countdown Underway

Deep News · 17:01

U.S. artificial intelligence company Anthropic must comply with the Pentagon's requirements by 5:01 PM Eastern Time or face being labeled a "supply chain risk"—a designation typically reserved for companies viewed as extensions of foreign adversaries.

The Pentagon utilizes Anthropic's Claude AI system within its classified networks, intending to deploy it for "all lawful purposes." However, Anthropic has established two clear boundaries for the Pentagon's use:

- Claude must not be used for autonomous weapons.
- Claude must not be used for mass surveillance of U.S. citizens.

On Thursday, Anthropic announced it has no intention of backing down. "The threat does not change our stance: we cannot in good conscience agree to their demands," the company's CEO said in a statement. The Pentagon, meanwhile, claims it has no interest in either of the prohibited uses but insists on the freedom to use the technology it has already authorized without restrictions. Pentagon spokesperson Sean Parnell wrote on X: "This is a simple, common-sense requirement to prevent Anthropic from jeopardizing critical military operations and potentially endangering our personnel. We will not allow any company to dictate how we make operational decisions."

The standoff escalated during a high-level meeting at the Pentagon on Tuesday, attended by U.S. Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei. According to sources familiar with the matter, while the meeting remained civil, Pentagon officials not only threatened to cancel Anthropic's $200 million contract but also raised the possibility of applying the "supply chain risk" designation, which could severely damage the company's revenue.

What is the nature of Anthropic's collaboration with the Pentagon? Anthropic's Claude is the first major AI model authorized for operation on the U.S. military's classified networks. The company signed a contract worth up to $200 million with the Pentagon last summer. Other AI giants, like OpenAI, only collaborate with the Pentagon on unclassified networks. Anthropic's "Acceptable Use Policy," part of the contract, explicitly prohibits the use of Claude for mass surveillance and autonomous weapons.

Gregory Allen, a senior adviser at the Center for Strategic and International Studies, commented on Bloomberg Radio: "The timing of this dispute is awkward. On one hand, users within the U.S. Defense Department are very fond of Anthropic and Claude, and from what I understand, their usage restrictions have never been triggered." However, the Pentagon is unwilling to be bound by a company's policy. A Pentagon official told CNN: "You cannot run tactical operations on exceptions. Legitimacy is the responsibility of the Pentagon as the end-user." From the Pentagon's perspective, it does not want to be in a position where it must seek permission from a company or have usage restrictions lifted during national security events.

Severing ties with Anthropic would be problematic for the Pentagon as well, since it would require replacing every internal system that currently runs on Claude. Although one official stated that Elon Musk's Grok AI system is "ready for use in classified environments," it is widely considered technologically inferior to Claude.

What would be the business impact on Anthropic? For Anthropic, valued at approximately $380 billion, losing the $200 million contract is not an existential threat. The greater risk lies in the "supply chain risk" label, which would require every company doing business with the U.S. military to prove that its Pentagon-facing work involves no Anthropic technology. A significant portion of Anthropic's success rests on commercial contracts with large enterprises, many of which are also military contractors.

Adam Conner, Vice President for Technology Policy at the Center for American Progress, stated: "This implies that a substantial portion of Anthropic's existing customers could be lost, either because they already hold government contracts or hope to secure them in the future."

Jensen Huang, CEO of AI chip giant NVIDIA, expressed hope that an agreement could be reached but noted that "even if no deal is made, it's not the end of the world." The Pentagon has other AI companies to choose from, and Anthropic has other clients.

Earlier this week, the Pentagon considered invoking the Defense Production Act to compel Anthropic's cooperation. This 1950 law grants the president broad powers to control domestic industries during emergencies. It remains unclear whether the Pentagon can simultaneously use this act to force cooperation and label the company a supply chain risk.

Conner noted that Anthropic is not the only company under threat; the Pentagon's actions send a signal to other AI firms trying to sell services to the government. "In a broader sense, this tells other AI companies in negotiations not to attempt to impose any form of restrictions on AI usage."

Alan Rozenshtein, a law professor at the University of Minnesota, observed: "If the Pentagon were merely dissatisfied with Anthropic's terms of use, it could simply terminate the contract and purchase a model from another provider. What the government truly wants is to continue using Anthropic's technology, and it is employing all available pressure tactics. This is a very powerful bargaining chip."

What will happen at 5:01 PM? The Pentagon has stated that if Anthropic fails to accept the terms by 5:01 PM, it will cancel the contract and designate the company a supply chain risk. It is uncertain whether a public announcement will be made. It is also unclear if Claude will be immediately disconnected from military systems, who might replace it, or how other defense contractors collaborating with Anthropic will respond. When foreign companies have been designated as risks in the past, they were typically granted a transition period to demonstrate they had severed ties. However, if the Pentagon follows through on its threat, it would represent an unprecedented and severe escalation against one of the world's leading AI models.

Allen remarked on Bloomberg: "At a time when the White House is comparing the U.S.-China AI competition to the Cold War space race, targeting a leading domestic AI company—you don't want to destroy the crown jewel of the industry over something like this. There are better solutions than the government's current absolutist position."

