Officials at multiple federal agencies have raised concerns about the safety and reliability of Elon Musk’s xAI artificial-intelligence tools in recent months, highlighting continuing disagreements within the U.S. government about which AI models to deploy, according to people familiar with the matter.
The warnings preceded the Pentagon’s decision this week to put xAI at the center of some of the nation’s most sensitive and secretive operations by agreeing to allow its chatbot Grok to be used in classified settings.
The Pentagon has given one of xAI’s rivals, Anthropic, a Friday deadline to agree to looser rules governing how the U.S. military can use its models. Anthropic was the only developer approved for classified use before the deal between xAI and the military.
Throughout the government, agencies are racing to deploy AI for a host of purposes, but the debate over which models to use has become increasingly political. Senior U.S. officials, including some at the White House, view Anthropic’s outspoken stances on safety and its ties to big Democratic donors as potentially making the company too “woke” to be a reliable provider, people familiar with the matter said. The looser controls on Grok, along with Musk’s absolutist stance on free speech, have made it a more attractive choice for the Pentagon.
Other officials have questioned whether Grok’s looser controls present risks.
Ed Forst, the top official at the General Services Administration, the federal government’s procurement arm, sounded an alarm in recent months with White House officials about potential safety issues with Grok, people familiar with the matter said. Other GSA officials under him had also raised safety concerns about Grok, which they viewed as sycophantic and too susceptible to manipulation or corruption by faulty or biased data, creating a potential systemic risk.