EXCLUSIVE-Pentagon clashes with Anthropic over military AI use - sources



By Deepa Seetharaman, David Jeans and Jeffrey Dastin

WASHINGTON/SAN FRANCISCO, Jan 29 (Reuters) - The Pentagon and artificial-intelligence developer Anthropic are at odds over whether to eliminate safeguards that bar the government from using the company's technology for autonomous weapons targeting and U.S. domestic surveillance, three people familiar with the matter told Reuters.

The discussions represent an early test case for whether Silicon Valley – in Washington’s good graces after years of tensions – can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield.

After weeks of talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill, six people familiar with the matter said on condition of anonymity. The company's position on how its AI tools can be used has intensified disagreements with the Trump administration, details of which have not been previously reported.

In line with a January 9 Defense Department memo on its AI strategy, Pentagon officials have argued that they should be able to deploy commercial AI technology regardless of companies' usage policies, so long as they comply with U.S. law, the people said.

A spokesperson for the department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.

In a statement, Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work."

Anthropic is one of a few major AI developers that were awarded contracts by the Pentagon last year. The others were Alphabet's Google GOOGL.O, Elon Musk's xAI and OpenAI.

Anthropic has long pursued U.S. national-security work even as its executives have sought to draw lines around responsible use. That stance has created friction with the Trump administration, Semafor has previously reported.

In an essay on his personal blog this week, Anthropic CEO Dario Amodei warned that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries."

Amodei was among the Anthropic co-founders who criticized the fatal shootings of U.S. citizens protesting immigration enforcement actions in Minneapolis, which he described as a "horror" in a post on X. The deaths have compounded concern among some in Silicon Valley about government use of their tools for potential violence.

(Reporting By Deepa Seetharaman and Jeffrey Dastin in San Francisco and David Jeans in Washington, Editing by Kenneth Li, Franklin Paul and Anna Driver)


