Anthropic denies that it could sabotage AI tools in war

By Paresh Dave
Publication Date: 2026-03-21 00:03:00

Anthropic cannot manipulate its generative AI model, Claude, once the U.S. military has it up and running, an executive wrote in a court filing Friday. The statement responds to allegations from the Trump administration that the company could sabotage its AI tools during the war.

“Anthropic has never had the ability to force Claude to stop working, change its functionality, block access, or otherwise influence or jeopardize military operations,” wrote Thiyagu Ramasamy, head of public sector at Anthropic. “Anthropic does not have the access necessary to disable the technology or change the model’s behavior before or during live operation.”

The Pentagon has spent months discussing with the leading AI laboratory how its technology can be used for national security – and what the limits on that use should be. This month, Defense Secretary Pete Hegseth labeled Anthropic a supply chain risk, a designation that will prevent the Defense Department from…