On March 1st, Iran's Supreme Leader Ayatollah Khamenei was killed in a U.S. airstrike, codenamed "Operation Epic Fury," near northern Tehran. Concurrently, a report in The Wall Street Journal described how the U.S. military had used the Claude model, developed by Anthropic, for intelligence assessment, target identification, and combat scenario simulation, with the entire process completed on a closed-loop, classified network. Notably, just the day before, former President Trump had ordered Anthropic blacklisted, citing the company's "non-compliance." At the same time, the U.S. Department of War labeled Anthropic a "security threat" and a "supply chain risk," a designation previously reserved for adversarial nations and never before applied to a domestic AI firm.
This narrative, combined with the sensitive timing, inevitably sparked widespread speculation. Numerous articles and videos emerged, piecing together speculative accounts of how Claude might have been involved in Khamenei's death—merging two highly topical subjects. In the absence of authoritative facts, the public imagination constructed a dramatic scenario of "AI precision-targeting a human," which gradually evolved into a technological rumor.
The exaggerated narrative of "AI killing Khamenei" spread widely, highlighting the ongoing friction between Silicon Valley and the Pentagon. This is not the first time AI technology has appeared in a U.S. military context. As early as 2018, more than 3,100 Google employees signed a petition protesting the company's involvement in Project Maven. Launched by the Pentagon in 2017 as its first formal AI program, under the name "Algorithmic Warfare Cross-Functional Team," Project Maven initially assigned Google a seemingly benign objective: using machine learning to analyze vast amounts of drone footage for automatic target recognition. However, Google's leading AI researchers recognized this as a potential first step toward an automated kill chain.
This marked the first major public clash between Silicon Valley and the Pentagon. The outcome appeared positive for the protesters—Google withdrew from the project. However, the contract itself persisted. Project Maven was transferred to the National Geospatial-Intelligence Agency (NGA), where it continues to operate, expanding its scope far beyond video analysis to include military target support, data fusion, and analytical tools.
In 2022, the Pentagon established the Chief Digital and Artificial Intelligence Office (CDAO). Two years later, amid rapid advances in large language models and generative AI, the CDAO awarded defense contracts to Anthropic, Google, OpenAI, and xAI, each with a ceiling of $200 million. At the time, however, only Anthropic's core product, the Claude model, had received the necessary authorizations under the Federal Risk and Authorization Management Program (FedRAMP) and for Department of Defense cloud services, allowing it to operate securely in Impact Level 4 and 5 environments. Anthropic was also the sole contractor to explicitly set red lines on how its model could be used.
Anthropic's CEO, Dario Amodei, refused to remove two key usage restrictions: the model must not be used for mass domestic surveillance or for fully autonomous weapons. The Pentagon's response escalated, first requesting the removal of these restrictions, then threatening to designate Anthropic a "supply chain risk." As the standoff deepened, Amodei stated firmly that their conscience would not allow them to capitulate to such demands. Anthropic thus became the first major commercial AI company to explicitly refuse to comply with government requests on these grounds.
Anthropic's rift with the White House garnered significant support within Silicon Valley. Even Sam Altman, CEO of rival OpenAI, voiced support for Anthropic's stance, even as OpenAI, Elon Musk's xAI, and other model providers agreed to the Pentagon's new, less restrictive terms. This reflected a calculated strategy by the Pentagon: it could not afford to be without AI tools on its classified networks, so after effectively dismissing one provider, it moved swiftly to secure alternatives.
The situation had by then transcended the fate of a single company, evolving into a critical issue for the entire AI industry. At the same time, if the Pentagon were to remove Claude from its systems entirely, it would likely take three months or longer to restore AI tools of equivalent capability on its classified networks. This explains why, according to media reports, Claude was still in use when Operation Epic Fury was launched.
What has AI technology actually changed in warfare? In traditional military command systems, intelligence analysts manually compare satellite imagery, communications intercepts, and open-source social media data. In the recent U.S. operation against Iran, beyond the role Anthropic's model played in natural-language interaction for intelligence assessment and simulation, the Ontology platform developed by Palantir, a company co-founded by "Silicon Valley godfather" Peter Thiel, reportedly acted as the "battlefield brain."
Since its IPO in 2020, Palantir's stock price has surged more than 1,700%, with its market capitalization briefly exceeding $450 billion, making it one of the top-performing AI growth stocks in the S&P 500. Named after the "seeing-stones" in The Lord of the Rings, Palantir has always had as its core mission breaking down the data silos between intelligence agencies. Its core technology, the Ontology platform, is essentially a database-modeling approach that, by the company's account, processes up to 1.2 million heterogeneous data points per second across semantic, dynamic, and decision layers, and claims to transform chaotic information into combat elements that military officers can readily understand. This model, which relies on advances in AI and satellite networks such as Starlink, does expand intelligence sources and improve efficiency.
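Palantir does not disclose how the Ontology is actually implemented, but the layered description above can be made concrete with a minimal, purely illustrative sketch: heterogeneous reports are normalized into typed objects (the semantic layer), time-stamped links between them (the dynamic layer), and a queryable picture that an analyst or a model can reason over (the decision layer). Every class and field name below is an assumption introduced for illustration, not Palantir's actual API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only: all names here are assumptions, not Palantir's real interfaces.

@dataclass
class Entity:
    """Semantic layer: a typed object such as a "person", "location", or "launch site"."""
    entity_id: str
    entity_type: str
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Link:
    """Dynamic layer: a time-stamped relationship between two entities."""
    source_id: str
    target_id: str
    relation: str      # e.g. "observed_at", "communicated_with"
    timestamp: str     # time of the underlying report
    origin: str        # which feed produced it: satellite, intercept, OSINT, ...

class MiniOntology:
    """Decision layer: fuse heterogeneous feeds into one queryable picture."""

    def __init__(self) -> None:
        self.entities: Dict[str, Entity] = {}
        self.links: List[Link] = []

    def ingest(self, record: dict) -> None:
        # Each raw record is normalized into entities and links, regardless of
        # whether it came from imagery, intercepts, or social media monitoring.
        for ent in record.get("entities", []):
            existing = self.entities.setdefault(ent["id"], Entity(ent["id"], ent["type"]))
            existing.properties.update(ent.get("properties", {}))
        for link in record.get("links", []):
            self.links.append(Link(**link))

    def picture_of(self, entity_id: str) -> List[Link]:
        """Return every observation touching one entity: a tiny 'common operational picture'."""
        return [l for l in self.links if entity_id in (l.source_id, l.target_id)]

if __name__ == "__main__":
    onto = MiniOntology()
    onto.ingest({
        "entities": [
            {"id": "person-1", "type": "person", "properties": {"role": "target"}},
            {"id": "loc-7", "type": "location", "properties": {"name": "residence"}},
        ],
        "links": [
            {"source_id": "person-1", "target_id": "loc-7", "relation": "observed_at",
             "timestamp": "2025-02-01T08:30:00Z", "origin": "satellite"},
        ],
    })
    print(onto.picture_of("person-1"))
```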
For instance, when multi-source intelligence is gathered from CIA assets, drones, satellites, and social media, the system generates key indicators tied to specific "persons," "locations," or "launch sites," and, drawing on sensors and real-time network monitoring, assembles them into a common operational picture within Palantir. Media reports suggest Palantir's AIP (Artificial Intelligence Platform) played a significant decision-support role in an operation targeting Venezuelan President Nicolás Maduro. The system reportedly moved beyond displaying "what is happening" to simulating "what could happen": for example, calculating probabilities from Maduro's routines, combined with favorable weather conditions and minimal risk of civilian casualties, to recommend the best time to execute the mission.
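Purely as an illustration of the kind of "what could happen" scoring those reports describe, the sketch below ranks hypothetical time windows by an assumed combination of routine-based presence probability, weather suitability, and estimated civilian risk. The factors, weights, and threshold are invented for the example; nothing here is claimed to reflect how Palantir's AIP actually computes its recommendations.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical decision-support sketch; factors and weights are illustrative assumptions.

@dataclass
class TimeWindow:
    label: str            # e.g. "02:00-04:00 local"
    presence_prob: float  # estimated probability the target follows his routine (0-1)
    weather_score: float  # suitability of weather for the platforms involved (0-1)
    civilian_risk: float  # estimated probability of civilian casualties (0-1)

def rank_windows(windows: List[TimeWindow],
                 max_civilian_risk: float = 0.05) -> List[Tuple[float, str]]:
    """Score each candidate window and return them best-first.

    A window is discarded outright if its estimated civilian risk exceeds the
    threshold (a hard constraint, not a trade-off); the remainder are ranked by
    presence probability weighted by weather suitability and the risk margin.
    """
    scored = []
    for w in windows:
        if w.civilian_risk > max_civilian_risk:
            continue
        score = w.presence_prob * w.weather_score * (1.0 - w.civilian_risk)
        scored.append((score, w.label))
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    candidates = [
        TimeWindow("02:00-04:00", presence_prob=0.85, weather_score=0.90, civilian_risk=0.01),
        TimeWindow("08:00-10:00", presence_prob=0.95, weather_score=0.60, civilian_risk=0.12),
        TimeWindow("22:00-24:00", presence_prob=0.70, weather_score=0.95, civilian_risk=0.02),
    ]
    for score, label in rank_windows(candidates):
        print(f"{label}: {score:.3f}")
```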
Palantir's AIP integrates low-orbit satellite imagery, communications intercepts, and even subtle fluctuations on social media. The company is now in the midst of its transition from a "software company" into a "national security infrastructure" provider.
However, whether one looks at Anthropic or Palantir, the current focus of these AI technologies remains prediction and efficiency gains in the intelligence assessment phase. When it comes to the precise application of force in military operations, the work still relies on covert assets: stealth drone swarms, CIA clandestine units, and intelligence on daily routines gathered over long periods. The Maduro case illustrates the point: while Palantir was reportedly used in a February operation targeting Maduro, CIA operatives had already infiltrated Venezuela as early as August of the previous year, and informants close to Maduro were a critical factor in allowing the CIA to map his detailed daily movements.
Feeding this information to AI and conducting rehearsals in a full-scale replica of Maduro's residence during mission preparation can indeed enable low-cost, rapid training for capture operations. However, the final decision to act and the act itself—pulling the trigger—remain human responsibilities.
It is essential to emphasize that modern military operations still fundamentally depend on long-term human intelligence penetration, the decryption of encrypted communications, and multiple layers of cross-verification. The reality cannot be likened to simplistic fiction in which AI is handed control, nor should the technological myth of "AI autonomous killing" be propagated. AI can accelerate strikes and optimize intelligence, but it cannot decide whether to go to war, for whom to fight, or when to cease hostilities. The ultimate authority over war's life-and-death button remains firmly in human hands. There is no need to mystify AI, nor to use it as an excuse to absolve humanity of responsibility. The beginning and end of war are still determined by people.