Palantir’s CEO pushes AI‑driven hard‑power doctrine, prompting technofascism accusations
On Monday, Palantir Technologies’ chief executive Alexander Karp released his latest work, The Technological Republic, a manifesto that explicitly calls for a Western hard‑power strategy built on the premise that advanced software platforms and artificial intelligence can serve as the primary instruments of geopolitical dominance. The proposition immediately drew the ire of civil‑rights advocates, academic critics, and former intelligence officials, who have collectively labelled the vision a form of ‘technofascism’.
The book’s central thesis is that the United States and its allies should institutionalise algorithmic decision‑making, predictive analytics, and real‑time data fusion as the backbone of military planning and diplomatic coercion, a shift the author presents as an inevitable evolution of modern warfare. Critics counter that such an approach effectively circumvents traditional democratic oversight, embeds opaque proprietary code within national‑security apparatuses, and risks normalising lethal autonomous systems without robust accountability mechanisms.
Palantir, which has historically positioned itself as a neutral data‑analytics provider to both government and private‑sector clients, has offered no substantive clarification beyond reiterating the book’s assertion that ‘software is the new artillery’. That stance exposes an inconsistency: a publicly traded defence‑adjacent corporation markets its products under the banner of ethical AI while advocating policy frameworks that would grant it unprecedented influence over state‑level combat operations.
The episode therefore underscores a broader systemic flaw: when commercial technological ambition is conflated with national‑security strategy, private interests can shape war‑making doctrine without transparent legislative scrutiny. Critics warn that this dynamic could cement a self‑reinforcing feedback loop between profit‑driven AI development and the erosion of democratic checks on the use of force.
Published: April 20, 2026