Analysts note that despite the recent ethical dispute between the U.S. Department of Defense and AI startup Anthropic, the core position of Palantir Technologies Inc. (PLTR.US), led by Alex Karp, within the U.S. military is unlikely to be shaken; Palantir and Anthropic have maintained a cooperative relationship since 2024.

Following weeks of negotiations, U.S. President Trump last Friday ordered federal agencies to completely halt the use of Anthropic's AI tools and canceled its contracts worth over $200 million. U.S. Secretary of War Pete Hegseth designated the company a national security "supply chain risk," a classification rarely applied to a domestic U.S. company and one that has attracted widespread attention. Hegseth stated on platform X: "In accordance with the President's directive for the federal government to cease using Anthropic technology, I am now instructing the War Department to list Anthropic as a national security supply chain risk. Effective immediately, all contractors, suppliers, and partners doing business with the U.S. military are prohibited from engaging in any commercial activities with Anthropic. Anthropic will continue to provide services to the War Department for a period not exceeding six months to ensure a smooth transition to a better, more patriotic service provider."
The controversy stems from Anthropic's firm stance: the company refused to allow its AI to control fully autonomous weapons without human supervision and also declined its use for large-scale domestic surveillance in the United States.
Anthropic's Role in "Operation Epic Rage"

It is understood that Anthropic was involved in last weekend's airstrikes against Iran, known as "Operation Epic Rage." Seeking Alpha analyst Nova Capital stated on Monday: "The U.S. military was using Anthropic's model to attack Iran. But in the current information chaos, a potentially overlooked fact is that neither Claude nor any other model that might have been used can process militarily sensitive data as a standalone tool. For AI to effectively perform such tasks, a complete ecosystem is required. I believe this is where Palantir's ontology comes into play... Without Palantir, Claude cannot operate. Palantir is known as the 'battlefield brain'... I am confident that since late 2024, Claude has been used through Palantir's 'hosted' system."
Seeking Alpha analyst Julia Ostian pointed out that removing Claude from the Palantir system would require significant effort, but this also opens market opportunities for OpenAI and its primary investor, Microsoft Corp. (MSFT.US). Following the Anthropic incident, OpenAI quickly reached an agreement with the U.S. War Department to deploy its models onto government classified networks. Ostian commented: "In my view, labeling Anthropic a 'supply chain risk' is a strange move, as this typically applies to foreign companies suspected of espionage. For Palantir, this is a direct blow because they are the primary gateway for getting Anthropic's Claude into classified networks. Now they have to readjust their working methods, likely turning to alternatives like OpenAI or xAI."
"Google (GOOGL.US) and Amazon.com Inc. (AMZN.US) are also in a very difficult position. As investors and cloud service providers, they have invested billions of dollars in Anthropic. Now they must find ways to divest these investments or sever any ties with Anthropic to protect their own government cloud contracts worth billions of dollars. However, for Microsoft, this is undoubtedly good news, as Azure and OpenAI become almost the only options for the Pentagon during the transition period."
Anthropic Refutes Hegseth's Remarks on Business Partners

Although Hegseth declared that companies contracted with the U.S. military may not have any commercial cooperation with Anthropic, the company explicitly disputed this claim. Anthropic said in a statement: "Secretary Hegseth implied that this designation would restrict all entities doing business with the military from cooperating with Anthropic. However, the Secretary has no legal basis to support this statement. According to Title 10, Section 3252 of the U.S. Code, the 'supply chain risk' designation applies only to the use of Claude under War Department contracts and does not affect contractors using Claude when serving other clients."
This means: "If you are an individual user or have a commercial contract with Anthropic, your access to Claude via API, claude.ai, or other products will be completely unaffected. If you are a War Department contractor, this designation (if formally enacted) only affects your use of Claude for War Department contract work; other uses are not restricted."
In response, Wedbush expects the dispute between Anthropic and the Pentagon will ultimately be settled in court. Wedbush analyst Dan Ives said in a report: "In the coming weeks and months, the two parties will battle it out in court. We hope for a swift resolution to the dispute to eliminate uncertainty in the AI industry—some companies might pause Claude deployments during litigation. With its excellent Claude AI technology, Anthropic has become an industry disruptor; the subsequent direction of this 'soap opera' (negotiation or litigation) will have a ripple effect on its technology partners and clients."
Seeking Alpha analyst Uttam Dey believes Palantir can easily replace Anthropic with OpenAI models. Dey stated: "The dispute last week between the War Department and Anthropic shows that large language models are crucial for the War Department, acting as a 'force multiplier' for its defense systems. Therefore, even if Anthropic has reservations about some War Department use cases, its contract is highly replaceable—OpenAI has already quickly stepped in." Dey also noted that Google's Gemini is expected to be the next best alternative to Anthropic. He added: "This incident has no substantial impact on Palantir's business cooperation with the War Department; Palantir will continue business as usual."