Agentic AI represents a qualitative shift in how software operates. Traditional software executes deterministic instructions. Generative AI responds to human prompts with output that humans review and use at their discretion. Agentic AI differs from both. Agents connect to software tools and APIs and use large language models (LLMs) as reasoning engines to plan and execute sequences of actions autonomously—at machine speed—with real-world consequences. This shift raises new questions for information security. In January 2026, NIST’s Center for AI Standards and Innovation (CAISI) issued a Request for Information (RFI) seeking industry input on how to secure these systems. AWS submitted a response grounded in our experience building and operating agentic AI services. This post summarizes the four security principles at the heart of that response and the architectural building blocks that implement them.
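To make the plan-and-act pattern concrete, here is a minimal, hypothetical sketch of an agent loop. The tool names, the `stub_reasoner` planner (a deterministic stand-in for an LLM), and the deny-by-default tool registry are all illustrative assumptions, not part of any AWS service or the CAISI response; they simply show how a reasoning engine can drive allowlisted tools through a bounded loop.

```python
from typing import Callable, Dict, List

# Hypothetical tool registry: the agent may only invoke tools listed here
# (deny-by-default is one way to bound an agent's real-world actions).
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda text: text[:20] + "...",
}

def stub_reasoner(goal: str, history: List[str]) -> Dict[str, str]:
    """Stand-in for an LLM planner: maps current state to the next action."""
    if not history:
        return {"tool": "search", "arg": goal}
    if len(history) == 1:
        return {"tool": "summarize", "arg": history[-1]}
    return {"tool": "done", "arg": history[-1]}

def run_agent(goal: str, max_steps: int = 5) -> List[str]:
    """Plan-act loop: the reasoner picks actions; only allowlisted tools run."""
    history: List[str] = []
    for _ in range(max_steps):
        action = stub_reasoner(goal, history)
        if action["tool"] == "done":
            break
        tool = TOOLS.get(action["tool"])
        if tool is None:  # unknown tool -> refuse rather than execute
            raise PermissionError(f"tool {action['tool']!r} not allowed")
        history.append(tool(action["arg"]))
    return history

print(run_agent("agentic AI security"))
```

Even in this toy form, the security-relevant properties are visible: the action space is an explicit allowlist, the loop has a step budget, and every action leaves a record (`history`) that can be audited.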
CAISI asked developers, deployers, and…
https://aws.amazon.com/blogs/security/four-security-principles-for-agentic-ai-systems/