According to US Central Command, US military troops are using a variety of artificial intelligence (AI) tools to swiftly handle massive volumes of data for operations against Iran, underscoring the developing technology’s expanding role in combat.
US military strikes have hit more than 2,000 targets since they began last week, including 1,000 in the first 24 hours. The effort is almost “double the scale” of the United States’ “shock and awe” attack on Iraq in 2003, according to Admiral Brad Cooper, the chief of Central Command.
According to Captain Timothy Hawkins, a Central Command spokesperson, AI technology has been crucial to the Iran campaign by helping with the initial screening of incoming data, freeing up human analysts to concentrate on higher-level analysis and verification.
In an interview, Hawkins stated, “Centcom uses a variety of AI tools, which are exactly what they are, tools, to assist human experts in a rigorous process aligned with US policy, military doctrine, and the law.” He declined to identify the tools or the businesses that supply them to the military.
The Iran conflict has heightened the global discussion over who controls the future of AI as a tool of war, particularly whether the rapidly expanding technology can be employed lawfully. It is at the center of a high-stakes disagreement between US defense officials and Anthropic PBC, one of the most promising AI startups whose models are utilized on the Pentagon’s classified cloud.
Last week, US Defense Secretary Pete Hegseth designated Anthropic as a supply-chain risk and gave military contractors six months to cease doing business with the company after the two sides were unable to reach an agreement on parameters limiting the use of its AI technology. Additionally, President Donald Trump ordered government agencies to stop working with Anthropic, calling it a “radical left AI company that is out of control.”
Following last week’s breakdown in negotiations, Anthropic CEO Dario Amodei and defense officials have since resumed talks, increasing the likelihood that the Pentagon could come to a deal with the company and avoid the fines that Hegseth threatened.
Amodei is wary of employing AI in fully autonomous weapons before the technology is proven, and he does not want his firm’s tools used for mass surveillance of US civilians. But he does favor supporting lethal US military operations that adhere to those red lines.
According to people familiar with US operations, who spoke on condition of anonymity to share sensitive information, one of the AI technologies employed in the Iran campaign is Maven Smart System, a digital mission control platform. US military officials have previously stated in public that the system, developed by Palantir Technologies Inc., draws on more than 150 different data sources.
According to the people, the system also makes use of Anthropic’s Claude, one of the large language models deployed on the platform. They say Claude has proven effective and has become essential both to advancing Maven’s AI efforts and to US operations against Iran.
Anthropic spokespeople declined to comment, while Palantir executives did not immediately answer questions. Anthropic is not the only AI-enabled defense tech firm engaged in the conflict with Iran.
Anduril Industries Inc.’s co-founder and executive chairman, Trae Stephens, said that his company’s technology was being utilized in the fighting. He stated that the corporation was “actively working day to day” with the Defense Department on continuing operations and that “we have all sorts of primarily counter-air systems that are present in conflict zones.”
“Obviously I can’t give a whole lot of details beyond that,” he said. The defense technology company has stated that it intends to use AI, rapid manufacturing, and other technologies to transform the militaries of the US and its allies.
Hawkins, the Central Command spokesperson, stated that artificial intelligence aids analysts in determining what they need to focus on, producing so-called areas of interest and assisting soldiers in making “smart” judgments in the Iran operation. AI is also helping to gather data from systems and organize information to create clarity, he said.
In summary, these tools enable human leaders to make better judgments more quickly. According to Hawkins, “the tools do not replace them or make targeting decisions.” He added that the process of choosing a target is deliberate, stringent, and lawful, involving leaders and commanders.
Some groups, like Stop Killer Robots, a coalition of 270 human-rights organizations, contend that AI-enabled decision-support systems risk introducing automation bias, which occurs when people place an excessive amount of trust in machine outputs, and reduce the distinction between recommending and carrying out a strike to a “dangerously thin” line.
Following claims that around 160 people were killed in a strike on a girls’ primary school, Centcom is investigating potential incidents of harm to civilians. Who was at fault remains unknown, and there is no evidence that artificial intelligence was involved.
“We are investigating these reports and take them seriously,” Hawkins stated. “We will continue to take all necessary procedures to reduce the possibility of unintentional harm because the protection of civilians is of the utmost priority. We have never targeted people and we never will, in contrast to the Iranian regime.”