OpenAI defends itself as Anthropic soars following Pentagon fallout

OpenAI is under fire this week for its new arrangement with the Pentagon, which came only hours after the agency’s negotiations with competitor Anthropic over safety guardrails failed.

According to data from market intelligence firm Sensor Tower, the reaction was almost instantaneous: uninstalls of its flagship ChatGPT app surged 295 percent day over day last Saturday. In the meantime, users flocked to Anthropic’s Claude app, which reached No. 1 on the App Store in an apparent show of support.

To make the company’s “principles very clear,” OpenAI CEO Sam Altman published a post on X on Monday night describing new amendments to the Pentagon agreement. In it, the co-founder admitted that the company “shouldn’t have rushed to get this out” last Friday.

“The issues are super complex and demand clear communication,” Altman wrote. “We were sincerely trying to defuse the situation and prevent a much worse outcome, but I think it just looked opportunistic and sloppy.”

Although Altman described it as a “good learning experience,” it is unclear whether the admission will satisfy detractors or restore confidence in the wake of the backlash. In response to Altman’s post, several users on X asked for the contract to be made public as a demonstration of transparency.

“You guys completely torched your brand and integrity on this,” one person commented in reply to Altman’s post, which had 13,000 views. The individual said releasing the contract itself is the “only way” to “regain any trust.”

The company “wants you to just trust them that the NSA is excluded from their contract,” former OpenAI safety researcher Steven Adler wrote on X, adding that he hopes it is “clear why, without strong evidence to the contrary, people are mistrusting OpenAI on this.”

Lawmakers on Capitol Hill have taken notice of the dispute; on Tuesday, Democratic Sen. Brian Schatz of Hawaii posted on X that he “just downloaded Claude.” The matter may also mark the start of a legislative debate.

Rep. Sam Liccardo (D-Calif.), who represents part of Silicon Valley, announced on Monday that he will propose an amendment to the Defense Production Act this week that would bar federal agencies from “retaliating” against high-risk technology vendors and developers who attempt to restrict the use of their technology in “ways to mitigate the risk to United States citizens.” Sen. Ron Wyden (D-Ore.) also reportedly promised to oppose the conduct toward Anthropic.

According to Altman, OpenAI’s revised agreement now has wording that is “consistent with applicable laws,” such as the statement that the “AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

“For the avoidance of doubt, the Department understands this limitation to prohibit purposeful tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information,” Altman stated, adding that it is “critical to protect the civil liberties of Americans.”

Altman claims that the Pentagon also promised OpenAI that the National Security Agency and other intelligence agencies inside the department would not use its services.

According to the Wall Street Journal, Altman further defended the agreement during an all-hands meeting on Tuesday, telling staff members he “feel[s] terrible for subjecting” them to the criticism. The WSJ reported that Altman called the situation “really painful” and described it as a “complex but the right decision with extremely difficult brand consequences and very negative PR for us in the short term.”

In a lengthy thread on X, Katrina Mulligan, OpenAI’s chief of national security partnerships, further defended the modifications, saying the new deal gives other AI labs “a better starting place on the issues.” Mulligan reiterated that the intelligence agencies are not involved in the agreement but declined to share the contract terms when asked.

Anthropic had pushed for specific constraints on fully autonomous lethal weapons and on widespread domestic surveillance, with surveillance and tracking among its chief concerns. The Department of Defense sought wording that would allow Anthropic’s technology to be used for “all lawful purposes.”

After the deadline passed, Defense Secretary Pete Hegseth declared that the Pentagon would classify Anthropic as a supply chain risk, and President Trump directed government agencies to cease using Anthropic’s technology.

Anthropic has been providing AI models to US defense and civilian agencies since late 2024 through a partnership with veteran government contractor Palantir, which has drawn criticism for its work on immigration enforcement. Anthropic said it intends to challenge the supply chain risk designation in court.