By Adam Clark
Anthropic scored an early win in a legal battle against the Trump administration. It could be a relief for Palantir Technologies, which for now can avoid reworking its artificial-intelligence platform used by the U.S. military.
A U.S. federal judge on Thursday halted the Trump administration's designation of Anthropic as a supply-chain risk, which would have required federal agencies to stop using the artificial-intelligence company's technology. The government could still appeal the injunction.
Palantir and Anthropic didn't immediately respond to requests for comment.
The dispute between the government and Anthropic has put Palantir in an awkward situation. Palantir integrates Anthropic's tools into its own software platform; Palantir's Maven Smart System, used by the Defense Department, is partially built on Anthropic's Claude models, according to multiple reports. Exactly how Claude is integrated is unclear.
"While it does not make the lethal decision-making for selecting and approving targets, Palantir's Maven system assists the US military operators with making more informed decisions for military operations," wrote Wedbush analyst Daniel Ives.
Palantir CEO Alex Karp said in an interview with CNBC earlier this month that the company was still using Anthropic within its platform and that the Department of War hadn't phased Anthropic out yet.
Although Palantir has suggested it could swap out Claude for other AI models, it isn't clear how quickly that could be done. Additionally, there are no guarantees that other AI developers won't face similar conflicts.
Anthropic, during contract negotiations with the U.S. government, sought assurances from the Trump administration that its models wouldn't be used in fully autonomous weapons or for domestic surveillance.
Concerns escalated after The Wall Street Journal reported that Claude, through Anthropic's partnership with Palantir, was used in the operation to capture former Venezuelan President Nicolás Maduro. Anthropic's usage guidelines prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. When Anthropic sought to apply those limits to government use, the Trump administration moved to blacklist the company, calling the AI provider a supply-chain risk.
In court, the government argued that Anthropic's efforts to put limits on how the Pentagon can use its Claude models could effectively lead to sabotage if the AI company changes its models or disables its technology during a military operation. The judge said the U.S. government appeared to be punishing Anthropic, and its actions didn't appear directed at a national-security concern.
Palantir's Karp has been outspoken about his support for the use of AI for American military purposes. Last year, Palantir won a contract worth up to $10 billion with the U.S. Army, plus a $448 million deal with the Navy.
The federal judge's ruling on Anthropic's designation as a supply-chain risk is unlikely to be the end of the matter. But at least for now, Palantir won't need to make any changes to its technology.
Write to Adam Clark at adam.clark@barrons.com
This content was created by Barron's, which is operated by Dow Jones & Co. Barron's is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
March 27, 2026 13:19 ET (17:19 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.