A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic, leaving the company exposed to the fallout of a dispute over how the military could deploy its Claude chatbot in fully autonomous weapons and potential surveillance of Americans. The U.S. Court of Appeals in Washington, D.C., denied Anthropic's request for protection while the panel is still collecting evidence, a reminder that the machinery of state power can move first and sort out the consequences later.

**Who Gets Hit First**

The court's decision lands on Anthropic, the San Francisco company, after it asked for an order that would shield it from the fallout of the Pentagon's actions. The dispute centers on how the Pentagon could deploy Claude in fully autonomous weapons and potential surveillance of Americans. That is the terrain here: a company fighting to avoid being boxed out by military authorities while the public remains in the blast radius of decisions made behind institutional walls.

The appeals court in Washington differed from the conclusions reached in another judge's ruling on the same issues. Anthropic had already prevailed in a separate case on those issues in San Francisco federal court, where a judge forced President Donald Trump's administration to remove a label tainting the company as a national security risk. The split rulings show the legal system doing what it often does best: generating confusion while power keeps its options open.

**The State's Labels, the Company's Lawsuits**

Anthropic filed two lawsuits last month, one in San Francisco and one in the Washington appeals court, saying the Trump administration was engaging in an "unlawful campaign of retaliation" over the company's attempt to impose limits on how its AI technology can be deployed. The Trump administration blasted Anthropic as a liberal-leaning company trying to dictate U.S. military policy.
Both sides are speaking the language of institutional control, but the underlying issue remains who gets to decide how a powerful technology is used and who bears the risk when those decisions are made.

In the San Francisco case, U.S. District Judge Rita Lin ruled that the Trump administration had overstepped its bounds by labeling Anthropic a supply chain risk unqualified to work with military contractors and by issuing other directives that could cripple a company locked in a race for AI supremacy against rivals such as ChatGPT maker OpenAI and Google. That ruling prompted the Trump administration to remove the stigmatizing labels from Anthropic and take other steps clearing the way for government employees and contractors to continue using Claude and other chatbots, according to a court filing made in San Francisco earlier this week.

**What the Court Would and Wouldn't Do**

The Washington appeals court did not see things the same way, even while conceding that Anthropic would "likely suffer some degree of irreparable harm" if it is deemed a supply chain risk. Still, the court said there was not sufficient reason to issue its own order revoking the Trump administration's actions, partly because "the precise amount of Anthropic's financial harm is not fully clear." The company's losses are acknowledged, but not enough to move the court to intervene on its behalf.

Further evidence in the case is set to be presented at a hearing before the appeals court on May 19. Until then, the Pentagon's blacklist threat remains in place in Washington, even as the San Francisco ruling has already forced the administration to back off some of its labels.
Anthropic said in a statement, "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful."

Matt Schruers, the CEO of the technology trade group Computer & Communications Industry Association, said, "The Pentagon's actions and the DC Circuit's ruling create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI."

What emerges is a familiar arrangement: military authority, corporate competition, and courtrooms sorting out the terms while ordinary people are left to live with the consequences of autonomous weapons talk and potential surveillance of Americans. The institutions keep arguing over labels, risk, and market position; the public gets the system.