Site icon VMVirtualMachine.com

IBM’s AI ‘Bob’ could be manipulated to download and execute malware

IBM’s AI ‘Bob’ could be manipulated to download and execute malware

By Sead Fadilpašić
Publication Date: 2026-01-09 16:50:00


  • IBM’s GenAI tool “Bob” is vulnerable to indirect prompt injection attacks in beta testing
  • CLI faces prompt injection risks; IDE exposed to AI-specific data exfiltration vectors
  • Exploitation requires “always allow” permissions, enabling arbitrary shell scripts and malware deployment

IBM’s Generative Artificial Intelligence (GenAI) tool, Bob, is susceptible to the same dangerous attack vector as most other similar tools – indirect prompt injection.

Indirect prompt injection is when the AI tool is allowed to read the contents found in other apps, such as email, or calendar.

A malicious actor can then send a seemingly benign email, or calendar entry, which has a hidden prompt that instructs the tool to do nefarious things, such as exfiltrate data, download and run malware, or establish persistence.

Risky permissions

Recently, security researchers Prompt Armor published a new report, stating that IBM’s coding agent, which is currently in beta, can be accessed…

Exit mobile version