The Evolution of AI Militarization: Who Will Be the Next Victim?

Deep News · 03-16 16:51

An article published on March 12 examines the United States' application of artificial intelligence (AI) in the military domain. In the conflict involving the U.S., Israel, and Iran, the use of AI tools is drawing growing attention and concern. Reports indicate that the U.S. and Israel have used AI systems in these military operations for intelligence analysis, generating target lists, and simulating combat scenarios. However, it has been suggested that faults in an AI system may be linked to an airstrike on an elementary school in southern Iran that caused over 170 casualties. In this conflict, data centers of Western technology companies in the Middle East were also targeted by Iran for the first time. From the ongoing Russia-Ukraine conflict, widely labeled the "first AI war," to the temporarily halted Gaza war, and now the U.S.-Israel-Iran conflict, the militarization of AI continues to exhibit new characteristics. The deep involvement of AI on the battlefield also raises a series of issues, including "automation bias," lack of oversight, and the resulting rise in civilian casualties, challenging the ethical boundaries of warfare.

The role of AI is evolving from a supportive function to one of autonomy. On March 11, the commander of U.S. Central Command stated in a social media video that AI has become a crucial tool for the U.S. in selecting targets for strikes against Iran. According to reports, both the U.S. and Israel used a battlefield intelligence platform built by Palantir Technologies, which incorporates an AI model from Anthropic. This system analyzes classified data from satellites, surveillance systems, and other intelligence channels to provide real-time target identification and prioritization for military actions in Iran. Reports also indicate this system was previously used in operations targeting the president of Venezuela. The U.S. government has significantly expanded the use of this platform across many military branches; by May of last year, over 20,000 U.S. military personnel were using it. A source familiar with the collaboration between Anthropic and the U.S. Department of Defense indicated that the AI model primarily assists analysts in categorizing intelligence and does not directly recommend strike targets. Nonetheless, within the first 24 hours of the operation against Iran, the U.S. military struck over 1,000 targets with AI assistance. Experts note that the rate of target identification suggests a high degree of automation in the process.
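
A back-of-envelope calculation shows why experts read that pace as evidence of automation. Below is a minimal sketch using the reported figures above; the manual target-development time is a purely illustrative assumption, not a figure from the article:

    # Back-of-envelope: the sustained pace implied by the reported figures
    # (over 1,000 targets struck in the first 24 hours of the operation).
    targets = 1_000                  # reported minimum number of targets
    window_s = 24 * 3600             # reported 24-hour window, in seconds

    pace_s = window_s / targets
    print(f"One new target roughly every {pace_s:.0f} seconds, around the clock")

    # Assumption (illustrative only): manually developing and vetting a
    # deliberate target takes on the order of hours. The parallel human
    # workload the observed pace would imply without AI assistance:
    manual_hours_per_target = 4      # assumed value, not from the article
    teams_needed = targets * manual_hours_per_target / 24
    print(f"Equivalent manual effort: roughly {teams_needed:.0f} full-time vetting teams")

The point of the sketch is only that a target every 86 seconds or so is hard to reconcile with slow, deliberate per-target vetting unless much of the pipeline is automated.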

In the Gaza Strip, AI also served as a core tool for Israeli military operations. Reports state that the Israeli military used multiple AI systems to generate target recommendations during retaliatory actions. One system identified and suggested attacks on buildings and facilities potentially used by Hamas members. Another was programmed to identify suspected members of Hamas and other armed groups for targeted operations. A third tracked targets via mobile phone signals to monitor their movements and identify strike opportunities. The Israeli military has stated that these AI tools merely assist intelligence analysts in reviewing and analyzing existing information; they are neither the sole basis for determining targets nor do they select targets autonomously.

The Russia-Ukraine conflict has been described by many as the "first AI war." While experts previously noted that AI tools mainly provided support and information, with AI-driven artillery systems and drones largely in testing phases, recent reports indicate the conflict is rapidly becoming more automated. A significant development is the emergence of autonomous drones equipped with software capable of navigating payloads over long distances and locking onto targets. Ukrainian AI-enhanced drones can autonomously attack Russian soldiers when communications fail, using computer vision to identify specific targets and execute precise strikes. Ukraine now launches up to 9,000 drones daily, making it a primary testing ground for weapons manufacturers racing to automate parts of the "kill chain." A military expert commented that early in the Russia-Ukraine conflict, AI was primarily used to assist reconnaissance, target identification, and intelligence analysis, with limited synergy between systems. In contrast, current U.S. and Israeli strikes against Iran demonstrate deep integration of AI with joint command and control systems, enabling real-time data sharing and coordinated operations across the land, sea, air, space, cyber, and electromagnetic domains. In short, from the Russia-Ukraine conflict to the Gaza war and now the U.S.-Israel-Iran conflict, the role of AI on the battlefield is undergoing a profound evolution: from auxiliary to core, from localized to comprehensive, and from passive to autonomous.

A notable new development in the recent U.S.-Israel-Iran conflict is that AI infrastructure has, for the first time, become a target. Amazon Web Services reported that three of its data centers in the UAE and Bahrain were attacked by Iranian drones. Iranian state television said the attacks were aimed at "clarifying the role these centers play in supporting enemy military and intelligence activities." According to reports, Iran's Islamic Revolutionary Guard Corps has published a new list of potential targets that includes the Middle East data centers and offices of several U.S. tech companies, including Google, Microsoft, and Palantir.

Iran's wariness of Western tech companies is not without reason. Silicon Valley firms have long maintained deep cooperative relationships with the Pentagon. On February 28, OpenAI announced an agreement with the U.S. Department of Defense to deploy its AI models on the department's classified networks. Last year, the Pentagon also signed agreements with Google, xAI, and other U.S. tech companies, planning to leverage their technology and talent to accelerate the adoption of advanced AI. Western tech companies such as Alphabet and Amazon provide everyday services to the global public while maintaining deep cooperation with the U.S. military, raising concerns that their civilian AI tools and data could be put to military use. Anthropic has reportedly clashed with the White House over concerns that the U.S. government might use its AI tools for mass surveillance or for developing autonomous weapons. An expert from a Chinese research institute highlighted the risks involved, explaining that the vast amounts of data stored on Western commercial clouds, including commercial satellite imagery, communications data, financial transaction records, and social media trails, could easily be accessed by the U.S. military during wartime, via permissions or technical means, and used as a basis for strike decisions. He emphasized that digital sovereignty has become a new strategic battleground: data centers, computing nodes, and communication cables bear directly on national security and on holding the initiative in wartime. Countries therefore need to accelerate the construction of independent, controllable cloud infrastructure, promote local data storage, and establish backup mechanisms to ensure survivability under extreme circumstances.

The U.S. and NATO are accelerating the pace of AI militarization. The U.S. Secretary of Defense recently urged the country to speed up the adoption of AI technology to build an "AI-first combat force." A senior researcher reported that in a series of exercises, the U.S. Army used Palantir's software to match the target-selection output previously achieved in Iraq with only 20 personnel, a task that required over 2,000 people during the Iraq War; the reduction was credited to AI. NATO, which signed a contract with Palantir last year, depicted its version of the intelligence platform in a recent video as giving commanders a video game-like overview of the battlefield.

While AI can deliver significant military advantages, it also raises widespread concerns. Examples of AI failures abound, from software misidentifying individuals to fatalities caused by autonomous vehicles. Experts point to the danger of "automation bias": the default assumption that information provided by AI is accurate and reliable. An AI expert questioned whether humans genuinely review each target recommended by AI before authorizing strikes. An Israeli journalist who spoke with intelligence officers involved in Gaza operations reported that officials continued to rely on one AI system despite knowing it suggested incorrect targets roughly 10% of the time. One intelligence officer responsible for authorizing airstrikes recalled spending only about 20 seconds to confirm a target, essentially checking only whether the target was male (illustrated numerically below). Many scholars and media outlets believe the high number of civilian casualties in the Gaza war is closely related to Israel's use of AI tools.

The use of AI by the U.S. and Israel in strikes against Iran raises numerous moral and legal questions and fuels concerns that humans may be losing control over the machinery of war. A former chief of staff of the Israel Defense Forces expressed a similar concern in a 2023 interview: the worry is not that robots will control us, but that AI will control our thinking and replace us without our realizing it. A report noted that treaties governing armed conflict do not specifically regulate the tools used to achieve military effects; international law applies regardless of whether the weapon is a bow, a tank, or an AI-based database. However, some institutions and scholars, including the International Committee of the Red Cross, argue that the militarization of AI requires new legal frameworks. They emphasize that as AI weapon systems become more advanced, ensuring human control of and accountability for these systems is crucial. Chinese experts have also warned about the risks of AI militarization. One expert stated that the pace of AI militarization is accelerating and countries are increasingly engaged in a race, yet no regulatory framework or consensus, such as adherence to humanitarian principles, has been established. Meanwhile, the "black box" nature of militarized AI makes it difficult to monitor whether warring parties are actually implementing such principles, posing a significant challenge for arms control. The international community needs to engage in arms control negotiations on this issue.
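
To put the "automation bias" concern in numerical terms, here is a minimal arithmetic sketch. The roughly 10% error rate and the 20-second review come from the reporting above; the number of recommendations and the review catch rates are hypothetical assumptions chosen only for illustration:

    # Illustrative arithmetic for automation bias: a known AI error rate,
    # combined with cursory human review, compounds at scale.
    recommended = 10_000      # hypothetical number of AI target recommendations
    ai_error_rate = 0.10      # reported: ~10% of the system's suggestions were wrong

    # Assumed probabilities that a human reviewer catches a wrong suggestion:
    catch_rates = {
        "cursory review (~20 s)": 0.05,   # assumed: a 20-second check catches little
        "careful review": 0.80,           # assumed: a thorough check catches most errors
    }

    wrong_suggestions = recommended * ai_error_rate
    for label, catch in catch_rates.items():
        approved_in_error = wrong_suggestions * (1 - catch)
        print(f"{label}: ~{approved_in_error:.0f} incorrect targets approved")

Under these assumptions, the gap between a 20-second glance (about 950 wrong targets approved) and a genuine review (about 200) runs to hundreds of wrongly approved targets, which is the mechanism critics point to when linking AI tools to civilian casualties.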
