Claude, Palantir and Who Controls AI in Modern War

Łukasz Grochal

Anthropic has become a focal point in a rapidly escalating clash over how far military use of artificial intelligence should go. The company built Claude with strict usage rules that explicitly forbid its use in autonomous weapons, in target selection without meaningful human control, and in large‑scale surveillance of civilians. For a time, the Pentagon nevertheless chose Anthropic as a key AI provider, seeing its models as powerful tools for planning, translation, logistics, and threat analysis. That fragile compromise collapsed when US defense officials pushed for broader, less restricted battlefield use of Claude and Anthropic refused to loosen its “hard red lines”.

According to US media reports, parts of the US military had already experimented with integrating Anthropic’s models into Project Maven and related targeting tools developed with Palantir, which analyze huge volumes of satellite, signals, and battlefield data. In operations against Iran, a Maven‑based system reportedly augmented by Claude helped generate long lists of potential targets, attach coordinates, and prioritize them at unprecedented speed. Similar AI‑assisted workflows were previously linked to US and allied operations in Venezuela, including actions against the Maduro regime, and in Iraq, where automated pattern analysis supported counterterrorism strikes and surveillance campaigns. Supporters inside the security establishment argue that such tools can reduce collateral damage and protect soldiers by speeding up intelligence work. Critics counter that when an AI system suggests targets, human “oversight” can easily become rubber‑stamping in the fog of war.

The confrontation peaked when the Trump administration ordered all US government and military contractors to stop using Anthropic products after the company refused to drop restrictions on autonomous weapons and mass surveillance. Defense officials labeled Anthropic a “supply chain risk” usually reserved for foreign adversaries, effectively blacklisting it from Pentagon work. Anthropic responded by suing the Department of Defense, accusing it of abusing national‑security tools to punish a company for enforcing safety standards and for refusing to hand over open‑ended wartime usage rights. That lawsuit has turned a policy dispute into a test case for how much leverage democratic governments should have over private AI firms that choose to set their own ethical limits.

At the heart of the conflict are Anthropic’s non‑negotiable boundaries: no direct use in selecting or engaging lethal targets, no enabling of fully autonomous weapons systems, no support for indiscriminate or real‑time mass surveillance, and heightened scrutiny for any work connected to detention, predictive policing, or domestic monitoring. The company says those rules apply globally, including to allies, partners, or intermediaries, and insists that “dual‑use” features be tightly scoped and auditable. That position has won support from many AI safety researchers, civil‑liberties groups, and some European policymakers who fear a rapid drift toward automated warfare. At the same time, senior US defense officials and some security hawks accuse Anthropic of undermining national security by denying the military access to what they see as decisive technology.

Complicating the picture further is Anthropic’s close technical collaboration with Palantir, whose Maven Smart System has already been used to identify and rank targets in Iran and other theaters. While Palantir markets its stack as merely an analytic aid with a human in the loop, critics argue that once AI systems generate lists of “high‑value targets”, the line between recommendation and de facto decision becomes blurry. The broader debate now stretches far beyond one contract. It touches on who sets the limits on AI in war, how transparent battlefield AI should be, and whether companies like Anthropic should be punished or protected when they draw a line and refuse to cross it.

References

1. Reuters (reuters.com)
2. The Hill (thehill.com)
3. National Today (nationaltoday.com)
4. AA (aa.com.tr)