By Jessica Lyons
Publication Date: 2026-04-15 08:01:00
Exclusive Security researchers hijacked three popular AI agents that integrate with GitHub Actions, using a new type of prompt injection attack to steal API keys and access tokens – and the vendors behind the agents never publicly disclosed the problem.
The researchers targeted Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and Microsoft's GitHub Copilot, then disclosed the flaws and received bug bounties from all three vendors. But none of them assigned CVEs or published public advisories, and this, according to researcher Aonan Guan, "is a problem."
“I know for sure that some of the users are pinned to a vulnerable version,” Guan said in an exclusive interview with The Register about how he and a team from Johns Hopkins University discovered this prompt injection pattern and pwned the agents. “If they don’t publish an advisory, those users may never know they are vulnerable – or under attack.”