Pentagon vs Anthropic: Who Controls Military AI Use

Author: Łukasz Grochal

The clash between the Pentagon and Anthropic, with CEO Dario Amodei at its center, is less a personality feud than a high‑stakes fight over who sets the rules for military AI and mass surveillance. Anthropic wants hard contractual and technical limits so its Claude models cannot be used for fully autonomous weapons or broad domestic dragnet surveillance, arguing that today’s systems are too unreliable for lethal autonomy and too powerful for unchecked monitoring of citizens.

The Pentagon, backed by the Trump administration, is pushing for the right to deploy Anthropic’s models “for any lawful purpose” and has framed the company’s constraints as a national security risk. After Anthropic refused a “best and final” offer to loosen its safeguards, Defense Secretary Pete Hegseth designated the firm a supply chain risk, a move that restricts contractors from using its technology and potentially cuts it out of a $200 million deal.

President Trump then ordered all federal agencies to drop Anthropic products, while officials like Undersecretary Emil Michael publicly branded Amodei a liar with a “God complex,” accusing him of trying to control U.S. defense policy through corporate terms of service. Amodei, in turn, calls the response “retaliatory and punitive” and says the company is drawing “red lines” to defend American values rather than dictate policy.

Ironically, the political backlash appears to have boosted consumer interest: the Claude app has surged near the top of the U.S. App Store as users rally around the company in its dispute with Washington.

At its core, this is a power struggle over whether a private AI developer can meaningfully limit government uses of its technology when those uses are justified in the name of national security, and where the line between ethical design and perceived “corporate treason” should be drawn.