Musk’s xAI was a last-minute addition to the Pentagon’s $200 million AI contracts

The controversial inclusion of Elon Musk’s xAI in a set of Defense Department contracts valued at up to $200 million apiece was a last-minute decision by the Trump administration, according to Glenn Parham, a former Pentagon staffer who worked on the initiative’s early stages. The contracts had been in the works since the Biden administration, he said.

Parham said that before he accepted a government buyout in March, xAI had not been part of the contract preparations. At the Pentagon’s Chief Digital and Artificial Intelligence Office, Parham served as a technical lead for generative AI, helping negotiate agreements and incorporate AI into Defense Department projects.

“Until I left, there had not been any conversations with anyone from X or xAI,” he said. “It pretty much appeared out of nowhere.”

Anthropic, Google, OpenAI and xAI were the four companies with which the Pentagon ultimately announced contracts last week. Each contract has a $200 million ceiling and a $2 million floor, with the payout depending on the success of each partnership. (The arrangement with OpenAI was first revealed last month.) The inclusion of Musk’s xAI raised questions among artificial intelligence experts.

Days before the announcement, xAI’s chatbot, Grok, had gone on an antisemitic rampage that the company struggled to contain. xAI was also rolling out contentious animated AI “companions” that can be combative and sexually suggestive. Musk said he merged X and xAI in March.

In other words, despite Musk’s extensive experience dealing with the government, xAI lacked the reputation and track record that usually lead to large government contracts. Some questioned whether xAI’s models were reliable enough for government work.

On the Senate floor last Tuesday, Senate Minority Leader Chuck Schumer, D-N.Y., referred to the contract as “wrong” and “dangerous,” citing Grok’s antisemitic incident in which it referred to itself as “MechaHitler.” “The Trump administration needs to explain how this happened, the terms of the deal, and why they believe our national security isn’t worth meeting a higher standard,” he demanded.

According to Parham, the program, which is being promoted as a collaboration between the Defense Department and American tech companies at the forefront of AI development, initially focused on more established AI companies such as OpenAI and Anthropic. Both are older than xAI and have long-standing agreements with major cloud computing companies as well as ties to the military.

It is unclear why Pentagon officials added xAI to the list of contractors after March. The department’s Chief Digital and Artificial Intelligence Office, which announced the contracts, did not answer written questions about why it selected xAI. The Pentagon did say in a statement that the antisemitism incident was not enough to disqualify the company.

“The Department will manage risks associated with this emerging technology area throughout the prototype process, as several frontier AI models have produced questionable outputs over their ongoing development,” the Defense Department said in a statement on Friday.

The statement stated that “these risks did not warrant excluding use of these capabilities as part of DoD’s prototyping efforts.”

The department said that because “frontier AI models” operate at the leading edge of the technology, they present both opportunities and risks.

The xAI deal adds another layer to Musk’s complicated relationship with the federal government. Even before he joined President Donald Trump as a White House adviser this year, Musk’s business empire had strong connections within the government, including contracts for his rocket company, SpaceX. Amid their current on-again, off-again feud, Musk has vowed to start a third political party aimed at lowering the national debt. He reiterated that commitment as recently as July 6, though he does not appear to have made any concrete public moves to prepare for it. Trump, for his part, has threatened Musk’s government contracts during the feud.

Some experts said that, despite xAI’s shortcomings, they could understand why the Defense Department might want to work with it.

“I think the department benefits when it’s engaged with as many organizations as possible,” said Morgan Plummer, policy director for Americans for Responsible Innovation, an advocacy group that generally favors a middle ground on AI regulation.

The concept for the $800 million program predates the Trump administration, Parham said, and it got underway in October following an executive order on national security and artificial intelligence issued by President Joe Biden. He said he worked on it for about five months before his departure, part of more than three years he spent developing AI at the Defense Department.

The contracts with the four AI companies also deepen the military’s ties to the newest and most hyped technology. In exchange for the millions, the military will use each company’s large language model (LLM), which for many users takes the form of a chatbot. Experts said they expect the military to use the LLMs for a range of tasks, from simpler applications such as email summarization to more complex ones such as language translation or intelligence analysis.

The Defense Department is also leading other AI initiatives, such as Project Maven, a system that combines machine learning with large volumes of data from multiple sources for use and display in combat.

xAI’s potential is a topic of intense debate within the AI community. Grok performs exceptionally well on several AI benchmarks, such as “Humanity’s Last Exam,” a set of questions created by subject-matter experts. However, its recent flirtation with neo-Nazism, and an earlier fixation on race relations in Musk’s home country of South Africa, made the chatbot a target of ridicule within the industry and among the general public.

Grok is most likely the least safe of these systems, said Gary Marcus, an AI skeptic and retired psychology professor at New York University. “It’s doing some really strange things,” he said.

Marcus pointed to Grok’s ideological diatribes and xAI’s failure to publish safety reports, which have become an industry norm for top AI models.

Parham believes xAI may need more time than the other three Pentagon contractors to properly deploy its technology to the military. He said other companies, such as Anthropic and OpenAI, had already gone through a rigorous government review and compliance process to get their software — including application programming interfaces, which programmers use to build on top of LLMs — approved for use. As of March, when he left, xAI had not done the same, he said.

“It’s going to take them much longer, I think, to actually [get] their models rolled out in government environments,” he said. “It’s not impossible. It’s just that they’re so far behind everyone else.”

According to Parham, Anthropic’s and OpenAI’s approval processes each took more than a year, from the submission of paperwork to the issuance of authorization.

Some have criticized the Pentagon’s use of commercial LLMs, in part because AI models are typically trained on massive data sets that may contain publicly available personal information. Combining such data with military applications is too risky, said Sarah Myers West, co-executive director of the research group AI Now Institute.

“Our critical infrastructure is exposed to security and privacy vulnerabilities,” she said.

xAI is a relatively new startup. Musk launched it in 2023, years after he co-founded OpenAI and then had a falling-out with its CEO, Sam Altman.

Some defense and AI professionals expressed surprise at Grok’s recent antisemitic outburst and questioned whether a similar incident could recur in government use.

“I would have some safety-associated concerns based on the release of their most recent model,” said Josh Wallin, who studies the relationship between AI and the military at the Democratic-leaning think tank the Center for a New American Security.

Grok’s antisemitic outbursts, Wallin said, show a propensity for unpredictable or dangerous behavior, such as presenting inaccurate or misleading information as fact, a phenomenon known as hallucination.

“Suppose you’re automatically generating reports from various intelligence sources, or you’re preparing a daily report for a commander. There would be concerns about whether what you’re getting is a hallucination,” he said.
