AI Startup Listed as "Supply Chain Risk": Is the Pentagon Dividing American Tech Companies?


The Pentagon has listed AI startup Anthropic as a “supply chain risk,” prompting Silicon Valley tech giants to take divergent stances based on their interests. Microsoft has publicly backed Anthropic’s lawsuit against the Pentagon, while Google, a rival of Anthropic, has moved to expand its own footprint inside the Department of Defense.

The Financial Times reported on the 11th that Microsoft openly backed Anthropic on Tuesday, becoming the first major tech company to side with the AI startup in its dispute with the U.S. Department of Defense. In court documents, Microsoft warned that the department’s “extreme” and “unprecedented” actions against the AI startup could have “broad negative impacts” on the U.S. tech industry. Microsoft has requested a temporary restraining order to keep the decision to list Anthropic as a “supply chain risk” from taking effect while the case proceeds.

“This conflict has caused a split in Silicon Valley,” the FT noted. Since the current U.S. administration took office, major tech firms in Silicon Valley have taken great care to avoid openly confronting it.

According to reports from Forbes and other media outlets, Anthropic’s competitors see its predicament as an opportunity to break into government markets. Just one day after Anthropic sued the U.S. government on Monday, Google announced that its newly developed AI agents would be deployed in the Pentagon’s offices, serving approximately 3 million military and civilian personnel with tasks such as meeting minutes and work planning, in unclassified environments. There are also reports that negotiations are underway to expand this to classified and top-secret environments.

Google is not the first company to expand cooperation with the Department of Defense since this conflict erupted. Earlier, after OpenAI was blocked by the department, it quickly announced a partnership with the Pentagon, claiming the agreement had “more security safeguards than any previous AI deployment agreement.” The move met strong market backlash: following the announcement, uninstallations of OpenAI’s ChatGPT surged.

Notably, some employees of OpenAI and Google have joined the anti-Pentagon camp, stating that “the U.S. government is trying to sow division among AI companies through fear-mongering.”

Anthropic had been the only AI provider operating within the Pentagon’s classified cloud environment until February 27, when it was designated a “supply chain risk,” a rare measure usually reserved for foreign competitors. On Monday, Anthropic filed a lawsuit against the Pentagon, calling the action “unprecedented and illegal” and claiming it has caused the company “irreparable harm.”

Founded in 2021 by former OpenAI executives, Anthropic has become one of the fastest-growing tech startups in the U.S., with a valuation of $380 billion. The conflict between the two parties erupted in February this year. The New York Times reported that in its contracts with the Pentagon, Anthropic drew two red lines: its AI could not be used for mass surveillance of Americans, and it could not be deployed in autonomous weapons without human oversight. The Associated Press reported that U.S. Secretary of Defense Lloyd Austin issued an ultimatum to Anthropic in February, demanding the company lift all restrictions and allow the military to use its AI for “all lawful purposes,” which Anthropic refused. On February 27, the same day the Pentagon listed Anthropic as a “supply chain risk,” former President Trump announced that he had ordered all federal agencies to immediately cease using Anthropic’s technology. On March 9, Anthropic formally sued the U.S. government.

However, Reuters reported on the 12th that the Pentagon is easing restrictions on Anthropic. An internal memo revealed that if certain AI tools are deemed critical to U.S. national security, the Pentagon will allow some units to keep using Anthropic’s products beyond the original six-month phase-out period. Analysts believe this indicates that most Pentagon suppliers are struggling to remove Anthropic from their supply chains. Reuters noted that while the Pentagon has quietly opened a waiver pathway, the memo still prioritizes stripping Anthropic’s products from systems supporting critical missions, such as nuclear and missile defense.

Dr. Brianna Rosen, Executive Director of the Oxford Blavatnik School of Government’s Network and Technology Policy Program, said the dispute is widely seen as a clash between AI ethics and national security, exposing long-standing governance gaps in the military use of AI. It shows that commercial contract mechanisms can no longer substitute for governance frameworks capable of adapting to AI’s role in warfare.

Nada Sanders, a professor of supply chain management at Northeastern University, said that Anthropic’s close cooperation with the Pentagon made it the first AI company to supply large language models for government classified networks, and that listing it as a “supply chain risk” is an extremely serious and unprecedented punishment for a U.S. company. She noted that labeling U.S. AI companies in this way, especially when it appears to be retaliation for their negotiating positions, could hinder innovation: companies that implement safety or ethical safeguards risk being shut out of government markets, which may make them hesitant to develop such protections at all.
