Palantir, Anthropic And The Battle For AI In War

Author: Łukasz Grochal

At Palantir’s AIPCon 9, Alex Karp confirmed that Palantir’s platforms remain integrated with Anthropic’s Claude, despite the US Department of Defense designating Anthropic a supply chain risk and planning a six‑month phase‑out of its models from Pentagon work. The designation forces defense contractors to certify that they are not using Claude on defense programs. Karp, however, stressed that these systems are deeply embedded and cannot simply be “ripped out overnight”, echoing comments from Pentagon CTO Emil Michael about the complexity of unwinding core infrastructure. Anthropic has responded by suing the Trump administration, calling the classification unlawful and warning that large government contracts are at stake, a dispute that underlines how intertwined commercial AI vendors and defense projects have become.

At the same time, Palantir is publicly defending its direct role in the “kill chain” – the sequence that connects sensor data, targeting decisions and actual strikes. Company representatives have said they are “very, very proud” of how their software shortens decision cycles from hours to minutes, for example through systems like Project Maven that fuse intelligence feeds and recommend targets. Supporters argue this precision can reduce collateral damage and give Western militaries an edge in current conflicts in the Middle East, while critics see it as an aggressive commercialization of lethal technology.

In a Forbes interview, Karp framed all of this in terms of “value creation” in both war and business, arguing that organizations survive only if they deliver outsized, hard‑to‑replicate results for their partners. He portrays Palantir as a high‑risk, high‑reward company that thrives where others hesitate, from battlefields to heavily regulated industries. This rhetoric resonates with some investors who view Palantir as core infrastructure for sovereign AI, defense, and industrial operations, especially given new alliances with Nvidia and others to build secure national AI data centers. Recent reports highlight strong commercial growth in the US, expanding defense and aerospace deals, and stock momentum tied to rising geopolitical tensions and expectations of higher defense spending.

The flip side is that Palantir’s identity is now tightly bound to militarized AI and controversial conflicts. Civil society critics question both the ethics of automating parts of the kill chain and Karp’s framing of war as another arena for “value creation”. Taken together, these reports depict a company leaning hard into its role as a strategic wartime software supplier, betting that governments and investors will reward it for speed, integration and battlefield impact, even as legal and ethical disputes intensify around its use of AI models and its participation in lethal decision chains.
