By Nick Srnicek
Publication Date: 2026-01-14 12:00:00
At the start of 2024, Anthropic, Google, Meta, and OpenAI all spoke out against the military use of their AI tools. But over the next 12 months, something changed.
In January, OpenAI quietly lifted its ban on using AI for “military and warfare purposes,” and shortly afterward it was reported to be working on “a number of projects” with the Pentagon. In November, the same week Donald Trump was re-elected US president, Meta announced that the United States and select allies could use Llama for defense purposes. A few days later, Anthropic announced that it, too, would approve its models for military use and had entered into a partnership with the defense contractor Palantir. At the end of the year, OpenAI announced its own partnership with the defense startup Anduril. Finally, in February 2025, Google revised its AI principles to allow the development and use of weapons and technologies that could harm people. Over the course of a single year, concerns about existential risks grew…

