What just happened? A federal judge in California has blocked the Department of Defense from designating Anthropic as a national security risk. The ruling provides the company with temporary relief as it continues its legal battle with the Pentagon over whether private technology firms can object to how their AI is used in military programs.
In a sharply worded 43-page order, US District Judge Rita F. Lin said the government's actions appeared to be driven by retaliation rather than legitimate security concerns. "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," Judge Lin wrote. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the United States for expressing disagreement with the government."
The injunction halts enforcement of a February 27 order from Defense Secretary Pete Hegseth labeling Anthropic a "supply chain risk," a designation typically reserved for foreign entities viewed as threats to US national security. The label would have barred Anthropic from selling its systems to federal agencies and potentially from working with other defense contractors.
The ruling also prevents agencies from carrying out a related directive issued after President Trump amplified the designation on social media.

The dispute began during negotiations over a $200 million Pentagon contract focused on artificial intelligence capabilities for defense operations. Anthropic, which has long advocated for guardrails on AI deployment, sought restrictions on how its models could be used in surveillance or autonomous weapons systems. Defense officials resisted, arguing that no private company should dictate how the military applies the technology it acquires.
Shortly after those discussions broke down, the Pentagon labeled Anthropic a supply chain risk. The company responded by filing lawsuits in two federal courts, alleging that the government's actions were both punitive and unconstitutional. It argued that the designation violated its First Amendment rights and inflicted "irreparable harm" on its reputation and business.

Judge Lin agreed that the company had demonstrated a strong likelihood of success on the merits, stating that "government officials cannot use the power of the state to punish or suppress disfavored expression." She added that the Defense Department's decision appeared "arbitrary and capricious" and that "the balance of equities and public interest favor Anthropic."
The temporary injunction takes effect unless the Pentagon appeals within seven days, and the case is being closely watched across the technology industry. Microsoft, along with employees of OpenAI and Google, submitted amicus briefs supporting Anthropic's position. Many in the sector view the case as a potential precedent for how the government may treat companies that challenge its use of AI systems in sensitive or military contexts.
At a hearing earlier this week, Judge Lin voiced skepticism toward the Pentagon's assertion that Anthropic could manipulate its own technology "to suit its own interests" in wartime scenarios. "It looks like an attempt to cripple Anthropic," she said.
In a statement following the ruling, Anthropic said it was "grateful to the court for moving swiftly" and reaffirmed that its "focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
The Defense Department has not yet publicly commented on the decision.