Anthropic to Sue U.S. Defense Department Over "Supply Chain Risk" Designation

Stock News · 03-06

The U.S. Department of Defense has formally notified Anthropic PBC that the company and its products have been designated a risk to the U.S. supply chain, a senior defense official said Thursday, escalating an ongoing dispute between the two sides over artificial intelligence security. According to the official, the Department of War informed Anthropic's leadership that the designation takes effect immediately. Defense Secretary Pete Hegseth has recently favored the traditional "Department of War" moniker, and this marks the first public official statement issued under that name.

Facing the Defense Department's threat of a blacklisting over the AI security dispute, Anthropic expects the standoff to end in court. "We firmly believe this action lacks legal foundation and therefore have no choice but to challenge it through the judicial system," Anthropic CEO Dario Amodei said in an official blog post on Thursday. Although the defense official emphasized the decision is "effective immediately," sources familiar with the matter say U.S. military forces continue to actively use Anthropic's Claude AI tool in operations against Iran. Last Friday, Secretary Hegseth warned the company and outlined a six-month transition period for the military to move its AI workloads to other providers.

Spokespeople for Anthropic and the U.S. Department of Defense did not immediately respond to requests for comment. The defense official did not specify when or how Anthropic was notified of the risk designation. Anthropic had previously said it would pursue legal action against any supply chain risk designation made by the Defense Department.

The designation could disrupt operations between Anthropic and the military, which has long relied heavily on the company's software. Until recently, Anthropic was the sole approved supplier of an AI system running on the Defense Department's classified cloud platform, and its Claude Gov tool, praised for its ease of use, has become a core operational platform for defense personnel. "This represents a highly strategic technological capability," Lauren Kahn, a senior research analyst at Georgetown University's Center for Security and Emerging Technology, said in an interview. "Forcing the removal of this capability would cause cascading damage to the military, the company, and the broader technology ecosystem."

Amodei had spent weeks negotiating with Emil Michael, the Under Secretary of Defense for Research and Engineering, on a contract governing the Defense Department's access to Anthropic's technology. The talks collapsed last week after the startup insisted on explicit guarantees that its AI would not be used for mass surveillance of U.S. citizens or for deploying autonomous weapon systems. Secretary Hegseth then posted on the platform X last Friday that Anthropic constituted a "supply chain risk." Such designations are typically reserved for nations or entities considered adversaries of the U.S., making their application to a technology company highly unusual. The specific legal authority under which the Defense Department designated Anthropic a supply chain threat remains unclear.

In a statement last week responding to Secretary Hegseth's social media post, Anthropic said it expects the designation to ultimately be carried out under Section 3252 of the U.S. Armed Forces Management Act. "From the outset, the core principle has been clear—the military must have the autonomy to use technology for all lawful purposes," the defense official said on Thursday. "We will not allow any vendor to interfere with the chain of command by restricting the lawful use of critical capabilities, thereby endangering personnel."

The designation comes at a critical moment, with U.S. forces relying heavily on Claude for operations in Iran, where a suite of AI tools is used to process vast amounts of operational data. Sources say the Maven intelligence system, developed by Palantir Technologies Inc., is widely used in the Middle East, and Claude serves as one of the core large language models integrated into it. These sources said Claude has performed well in actual operations, becoming crucial support for U.S. operations against Iran and significantly accelerating the AI modernization of the Maven system.

Anthropic is currently valued at $380 billion and is on pace for an annualized revenue run rate approaching $20 billion, roughly double its level at the end of last year. The escalating dispute with the Defense Department, however, clouds the company's prospects. How the risk designation will affect Anthropic's sales to enterprise customers, long its primary revenue source, remains to be seen. Notably, amid the dispute, the company has been quietly expanding into the consumer market: its main application recently topped the download charts on the Apple App Store.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is provided for general information purposes only and does not take into account your own investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
