By David Zax
Publication Date: 2026-02-10 12:00:00
In traditional AI deployments, many of the biggest risks stem from model quality: accuracy, drift, and bias. But agentic AI is different. What ultimately sets AI agents apart is that they act: much of the threat comes not from what an agent “says” but from what it “does” — the APIs it invokes and the functions it calls. And where agents operate in physical space (e.g., warehouse automation or autonomous driving), threats can extend beyond digital and data-based damage into the real world.
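One common defense at this "level of action" is to interpose a policy check between the agent's decision and its execution. The sketch below is a minimal, hypothetical illustration; the tool names and the allowlist/blocklist split are assumptions, not drawn from any particular agent framework.

```python
# Hypothetical sketch of gating an agent's actions before execution.
# Tool names and policy sets are illustrative assumptions only.

ALLOWED_ACTIONS = {"search_inventory", "read_order"}   # low-risk, read-only tools
BLOCKED_ACTIONS = {"delete_order", "issue_refund"}     # high-impact tools

def gate_action(action: str) -> bool:
    """Return True only if policy permits the agent to execute this action."""
    if action in BLOCKED_ACTIONS:
        return False
    return action in ALLOWED_ACTIONS  # default-deny anything unlisted

def dispatch(action: str, args: dict, tools: dict):
    """Run a tool call only after it clears the action-layer gate."""
    if not gate_action(action):
        raise PermissionError(f"action '{action}' blocked at the action layer")
    return tools[action](**args)
```

The key design choice is default-deny: an action the policy has never seen is treated as blocked, so the agent's freedom to "do" is bounded even when its outputs are unpredictable.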
Securing agents therefore requires security professionals to pay close attention to this “level of action.” Within this layer, threats vary with the type of agent and its position in an agent hierarchy or other multi-agent ecosystem. The vulnerabilities of a command-and-control “orchestration” agent, for example, can differ in both nature and extent from those elsewhere in the hierarchy. Since such orchestration agents are often the ones that communicate with human users, security…