Anthropic's AI models can help spies analyze classified data, but the company draws a line at domestic surveillance. According to reports, the Trump administration is upset about that restriction.
The Trump administration is growing increasingly hostile toward Anthropic over the AI company's restrictions on law enforcement uses of its Claude models, according to a report Semafor published on Tuesday. Two senior White House officials told the outlet that federal contractors working with the FBI and Secret Service have hit roadblocks when trying to use Claude for surveillance tasks.
The friction stems from Anthropic's usage policies, which prohibit domestic surveillance applications. The officials, who spoke anonymously, expressed concern that Anthropic enforces its policies selectively based on politics and uses vague language that allows a broad reading of its rules.
The restrictions affect private contractors that rely on AI models in their work for law enforcement agencies. In some cases, the officials said, Anthropic's Claude models are the only AI systems cleared for top-secret security situations through Amazon Web Services' GovCloud.
Anthropic offers a dedicated service for national security customers and struck a deal with the federal government to provide its services to agencies for a nominal $1 fee. The company also works with the Department of Defense, though its policies still prohibit using its models for weapons development.
In August, OpenAI announced a competing agreement to give more than 2 million federal executive branch workers access to ChatGPT Enterprise for $1 per agency for one year. The deal came one day after the General Services Administration signed a blanket agreement allowing OpenAI, Google, and Anthropic to supply tools to federal workers.
Caught between ethics and money
The timing of the spat with the Trump administration is awkward for Anthropic, which is reportedly in the middle of a media push in Washington. The administration has consistently cast American AI companies as champions in a global competition and expects cooperation in return. But this isn't the first time Anthropic has clashed with Trump administration officials: the company previously opposed proposed legislation that would have barred US states from enacting their own AI laws.
More broadly, Anthropic has been navigating a difficult path between upholding its corporate principles, pursuing government contracts, and raising the venture funding that sustains its operations. In November 2024, for instance, the company announced a partnership with Palantir and Amazon Web Services to bring Claude to US intelligence and defense agencies through Palantir's Impact Level 6 environment, which handles data classified up to the "secret" level. Some members of the AI ethics community criticized that partnership as being at odds with Anthropic's professed emphasis on AI safety.
Security experts, meanwhile, are examining the broader surveillance potential of AI language models. In a December 2023 Slate piece, security researcher Bruce Schneier argued that AI models could enable unprecedented mass spying by automating the analysis and summarization of enormous conversation datasets. Traditional espionage methods demand intensive human labor, he noted, while AI systems can process communications at scale, potentially shifting surveillance from observing actions to interpreting intent through sentiment analysis.
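To make Schneier's point concrete: the capability he describes requires no exotic tooling. The sketch below is a minimal illustration, not drawn from the article or any agency system, using the open source Hugging Face transformers library to run sentiment classification over a batch of invented messages; the same loop could churn through millions of records that no human team could read.

```python
# Illustrative sketch only: bulk sentiment analysis over a message archive,
# using the open source Hugging Face transformers library. The messages and
# scenario are hypothetical; nothing here is drawn from any real agency tool.
from transformers import pipeline

# Load a small pretrained sentiment classifier (the library's default model).
classifier = pipeline("sentiment-analysis")

# Hypothetical stand-in for a "large conversation database."
messages = [
    "The shipment arrives Tuesday. Keep it quiet.",
    "Great news about the merger, everyone is thrilled!",
    "I can't believe they approved that policy.",
]

# Classify every message in one batch; the model returns a label and a
# confidence score per message, with no human reader required.
for msg, result in zip(messages, classifier(messages)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {msg}")
```

The classifier itself is unremarkable; the significance is the loop. Once analyzing a message costs a fraction of a cent, human attention stops being the bottleneck on mass monitoring, which is exactly the shift Schneier warns about.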
With AI models able to process human communications at unprecedented scale, the fight over who may use them for surveillance (and under what conditions) is only beginning.