By Frank Landymore
Publication Date: 2026-03-21 10:00:00
A rogue AI agent caused a critical security incident at Meta that leaked sensitive user data to people who did not have the appropriate authorization, according to a report from The Information and The Edge, in the latest account of the security pitfalls encountered by AI systems.
The error occurred last week when a software developer used an internal AI agent to break down a technical question posed by another employee on an internal discussion forum, drawing on company communications and an incident report. The in-house AI has been compared to OpenClaw, an open source agent model that has generated a lot of hype in tech circles because it is an AI that "actually does things."
What emerged was a mix of AI hallucination and a game of telephone. The AI posted its response on the forum without the consent of the employee who had initiated it….