Tensions between Anthropic, a leading artificial intelligence company, and the Pentagon have escalated over the past week.
Anthropic, a frontier AI startup with a defense contract worth up to $200 million and the maker of the Claude chatbot, has positioned itself as a company that promotes AI safety, highlighting red lines it says it will not cross.
Now the Pentagon appears to be testing those limits.
When The Wall Street Journal and Axios reported that Anthropic products were used in the attempt to capture Venezuelan President Nicolás Maduro, signs of a potential schism between Anthropic and the Defense Department, now known as the Department of War, intensified.
Whether Claude was actually used in the operation remains unconfirmed.
Two people with knowledge of the situation, who asked not to be named to discuss sensitive matters, said Anthropic had neither raised nor discovered any policy violations in the wake of the Maduro operation. According to them, the company has good visibility into how its AI tool, Claude, is used, including in data analysis workflows.
Anthropic became the first AI company permitted to offer its services on classified networks, through a 2024 partnership with Palantir. In announcing the collaboration, Palantir said Claude could be used “to support government operations such as processing vast amounts of complex data rapidly” and “helping U.S. officials to make better choices in time-sensitive situations.”
Palantir is one of the military’s preferred data and software providers; among other things, it gathers information from space-based sensors to help soldiers aim strikes more accurately. Its work for the Trump administration and for law enforcement agencies has also drawn criticism.
Anthropic has insisted that it does not and will not permit its AI systems to be used directly in lethal autonomous weapons or for domestic surveillance. Even so, an Anthropic employee reportedly raised concerns about the reported use of its technology, via the Palantir contract, in connection with the Venezuela raid.
Semafor reported Tuesday that during a routine meeting between Anthropic and Palantir, a Palantir official expressed concern that an Anthropic employee might object to the use of its systems in the operation, causing “a rupture in Anthropic’s relationship with the Pentagon.”
“A senior executive from Anthropic communicated with a senior Palantir executive, asking whether their software was used for the Maduro raid,” a senior Pentagon official said.
The Palantir executive “was concerned that the question came up in such a way to imply that Anthropic might disapprove of their software being used during that raid,” the Pentagon official said.
Anthropic would not confirm or deny whether its Claude systems were used in the Maduro operation, citing the sensitive nature of military operations. “We can’t comment on whether Claude or any other AI model was used for any specific operation, classified or otherwise,” a company spokesperson said in a statement.
The spokesperson denied that the episode had produced significant ramifications, saying the company had not raised any mission-related disputes with the military or had unusual conversations with partners about how Claude is used.
“Anthropic has not discussed with the Department of War the use of Claude for specific operations,” the spokesperson said. “Aside from regular conversations on purely technical issues, we have also not discussed this with, or voiced concerns to, any industry partners.”
The fundamental disagreement between Anthropic and the Defense Department appears to stem from a larger dispute over how the military will use Anthropic’s technology in the future. The department has recently made clear that it wants to be able to use AI systems for any lawful purpose, while Anthropic says it wants to keep its own boundaries.
Sean Parnell, the Pentagon’s chief spokesperson, said the Department of War’s relationship with Anthropic “is being reviewed.”
In a statement, he said, “Our country needs our allies to be prepared to assist our warfighters in winning any battle. This is ultimately about our troops and the American people’s safety.”
Undersecretary of Defense Emil Michael said Tuesday that the department’s talks with Anthropic had stalled over a disagreement about possible applications of its systems.
In early January, Defense Secretary Pete Hegseth unveiled a new AI strategy document that directed contracts with AI companies to remove any restrictions or guardrails specific to the companies’ AI systems, permitting “any lawful use” of AI for Defense Department purposes.
The document, which would affect Anthropic’s dealings with the military, required defense officials to include the clause in any Defense Department AI contract within 180 days.
Although Anthropic has generally endorsed the use of its services for national security, it has insisted that its systems not be used in fully autonomous weapons or for domestic surveillance.
The Defense Department, which objects to the company’s emphasis on those two points, has been putting increasing pressure on Anthropic.
“Claude is utilized for a wide range of intelligence-related use cases throughout the government, including the Department of War, in accordance with our Usage Policy,” the Anthropic spokesperson said. “In a sincere effort to continue that work and resolve these complicated issues, we are having fruitful discussions with the Department of War.”
In contrast to other AI firms, Anthropic has prioritized enterprise and national security uses of its AI systems. In August 2025, it established a national security and public sector advisory council made up of former senior defense and intelligence officials. Last week, Chris Liddell, a former deputy chief of staff to President Donald Trump, joined Anthropic’s board of directors.
Palantir and Anthropic have been working together since late 2024 to make Claude available to U.S. intelligence and defense organizations. At the time, Kate Earle Jensen, Anthropic’s head of sales and partnerships, said the partnership put the company “at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations.”
In July 2025, Anthropic and other top U.S. AI firms, including OpenAI and Google, won separate two-year Defense Department contracts, each worth up to $200 million, to assist in “prototyping frontier AI capabilities that advance U.S. national security.”
The Anthropic spokesperson said in a statement that the company is dedicated to using frontier AI to serve U.S. national security. “We provided customized models for national security customers and were the first frontier AI company to place our models on classified networks.”
Dario Amodei, Anthropic’s CEO, has repeatedly emphasized the company’s commitment to putting its AI services toward national security objectives. In a late January essay, Amodei wrote that “we should arm democracies with AI, but we should do so carefully and within limits” and that “democracies have a legitimate interest in some AI-powered military and geopolitical tools.”
Michael Horowitz, who led AI and emerging technology policy at the Pentagon and is now a professor of political science at the University of Pennsylvania, said that given the kind of systems Anthropic is developing, concerns about their direct use in lethal autonomous weapons are probably irrelevant to the current negotiations.
“Given that the algorithms for lethal autonomous weapon systems will be more customized than Claude’s, I would be shocked if Anthropic models were the best ones to use at this time,” Horowitz said.
“I get the impression that Anthropic wants to expand the breadth and depth of its collaboration with the Pentagon. From what we understand, this seems to be a disagreement more about theoretical possibilities than actual use cases.”