Palantir and Anduril Industries are reportedly forming a powerful consortium to compete for Pentagon contracts. This development, reported by TechCrunch, signals the emergence of a new generation of defence contractors challenging traditional giants like Lockheed Martin, Raytheon, and Boeing.

The alliance could potentially include Elon Musk's SpaceX, OpenAI, Saronic, and Scale AI. AI-powered warfare systems are transforming military operations globally, often with controversial consequences. As AI becomes embedded in defence infrastructure, questions will be raised about ethics and oversight. Those questions will fill boundless web pages, but will more than likely be drowned out by the billions of dollars on offer.

Palantir, founded by Peter Thiel, specialises in data analytics and AI software for defence and intelligence operations. Its platforms enable the analysis of massive datasets for operations ranging from counterterrorism to battlefield targeting. The company recently secured a $480 million Pentagon contract for its AI technology that helps identify hostile targets.

Anduril, founded by Palmer Luckey in 2017, focuses on autonomous systems and AI for military applications. Its flagship off-the-shelf product, Lattice, is an AI-powered command-and-control platform that manages drones, robots, and surveillance systems. Earlier this year, Anduril secured a contract to help build the Pentagon's unmanned warplane programme and has been valued at approximately $14 billion.

Unlike traditional defence contractors who typically operate on cost-plus contracts (where companies are paid for all costs plus a guaranteed profit), these tech companies follow a more commercial approach. They develop products at their own expense, often with venture capital funding, create ready-to-deploy solutions rather than custom-built programmes, and sell platforms that can be used across multiple customers and applications.

This mooted collaboration could pose a direct challenge to the traditional processes and thinking of the military-industrial complex, where a handful of prime contractors have dominated defence procurement for decades and still secure most of the approximately $850 billion U.S. defence budget.

The conflicts in Gaza and Ukraine have provided a live testing ground for AI capability. The Israeli military's use of systems like "Lavender" and "The Gospel" in Gaza has raised significant ethical concerns. Reports indicate that these AI systems were used to rapidly identify thousands of potential targets with limited human oversight. However, being realistic, those reports have not been widely read, and I haven't seen a single person's view of the Gaza conflict changed purely by the purported use of AI by Israeli forces.

The Pentagon has invested heavily in AI-powered systems like Project Maven, which uses machine learning algorithms to identify hostile targets.

The Russia-Ukraine conflict has served as a critical accelerant for AI adoption in warfare. According to research from the Center for a New American Security, Ukraine has leveraged AI as a key data analysis tool to process the overwhelming volume of information generated by various systems, weapons, and soldiers in the field.

As Samuel Bendett writes in his CNAS analysis, "An absolutely crucial aspect of this war is the rapid evolution of combat technologies and the adaptation of key tactics and concepts by both sides." The report details how Ukraine has benefited from allies and partners offering AI technologies, giving them an edge in geospatial intelligence.

Ukrainian forces have employed AI to geolocate and analyse open-source data, including social media content, to identify Russian soldiers, weapons, systems, and movements. As Bendett notes, "Neural networks are used to combine ground-level photos, video footage from numerous drones and UAVs, and satellite imagery to provide faster intelligence analysis and assessment to produce strategic and tactical intelligence advantages."

Ukraine has maintained what Bendett describes as a "human-centric approach toward AI use, with operators making the final decisions." He observes that "Ukraine's Western partners are embracing that approach, but their militaries still need to agree on how to use AI after its debut in the Russian-Ukrainian conflict."

While those in the pro camp argue that AI enhances precision and reduces civilian casualties, those on the other side point to the risks of over-reliance on automated systems, the potential for errors, and the ethical implications of reducing human oversight in lethal decisions. The pro camp will win. No doubt. I'd bet my house on it. You can't put the lid back on.

Looking at the financial side of things, the legacy primes grew through government contracts and profit reinvestment. With decades of relationship-building behind them, they may not fear the new arrivals.

The tech defence firms have been fuelled by significant venture capital: Anduril raised $1.48 billion in a Series F round in mid-2024, while Palantir's stock "more than quadrupled" in 2023 amid excitement about its AI products. Interestingly, many of these companies share common investors, with Peter Thiel's Founders Fund holding stakes in Palantir, Anduril, and SpaceX. The minimum industry goal: 10x everything. They get to market with a product quickly, refine, replicate, and then repeat the process.

This go-to-market culture could shift thinking the world over on how military budgets are allocated. If it costs $1 million to launch a missile, as opposed to $20,000 to $30,000 for a kamikaze UAV, and the result is the same, then the sensible choice is clear. It might not be palatable to many, for varying reasons, but let's be realistic about the world we now live in.

The U.S. has prioritised AI, cyber, space, and unmanned systems innovation to maintain military-technological superiority, and this alliance represents a critical component of that strategy.

Critics warn about the "double black box" problem—when governments take already secretive and proprietary AI technologies and place them within the clandestine world of national security.

According to the TechCrunch report, initial partnerships between Palantir and Anduril could be announced as soon as January 2025, with hints already emerging in December 2024 when the companies announced an integration between Palantir's AI Platform and Anduril's Lattice software.

The lessons from Ukraine suggest that maintaining human-centric AI systems will be crucial moving forward. The Palantir-Anduril alliance will likely need to incorporate the practical battlefield insights gained from Ukraine's experiences as they develop their own military AI applications.
