By Ben Dickson
Publication Date: 2026-01-19 18:28:00
Lasso Security has discovered significant prompt injection vulnerabilities in BrowseSafe, a new open-source tool from Perplexity designed to protect AI browsers against prompt injection attacks. Despite marketing that promised developers could “immediately harden their systems,” Lasso’s red team achieved a 36% bypass rate using standard encoding techniques. The findings show that relying on a single model for security can create dangerous blind spots, leaving agentic browsers vulnerable to hijacking.
Securing AI browsers
Perplexity released BrowseSafe to address the growing threat of browser-based prompt injection attacks. As AI assistants evolve from simple search interfaces to autonomous agents that navigate the web, they face new attack vectors. Malicious instructions can hide in comments, templates, or invisible HTML elements. If an agent processes this content without safeguards, an attacker can override its original intent and redirect its behavior.
BrowseSafe is a content detection model fine-tuned to scan web pages in real time. It answers a specific question: does the page’s HTML code contain malicious instructions? Perplexity designed the model to handle the “messiness” of HTML content and marketed it as a solution that eliminates the need for developers to build safety rails from scratch. The company claims the model flags malicious instructions “before they reach your agent’s core logic.”