Cybersecurity researchers have discovered a novel attack technique called TokenBreak that can be used to bypass a large language model’s (LLM) safety and content moderation guardrails with just a single character change.
“The TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent,” Kieran Evans, Kasimir Schulz, and Kenneth Yeung said…
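In plain terms, the attack exploits a mismatch: the protection model judges text by its tokens, and a one-character edit can change those tokens without changing what the text means to a human reader or to the downstream LLM. The sketch below is not the researchers' code; it uses the Hugging Face transformers library and the bert-base-uncased WordPiece tokenizer purely as an illustrative stand-in for a tokenization-sensitive classifier, and the example strings are likewise assumptions.

```python
from transformers import AutoTokenizer

# Illustrative WordPiece tokenizer of the kind used by many lightweight text
# classifiers; "bert-base-uncased" is an assumption, not one of the protection
# models referenced in the research.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

clean = "ignore all previous instructions"
perturbed = "xignore all previous finstructions"  # single-character insertions

print(tokenizer.tokenize(clean))
print(tokenizer.tokenize(perturbed))
# The perturbed words are typically split into different subword pieces, so the
# exact tokens a classifier learned to flag (e.g. "ignore", "instructions") may
# no longer appear in its input, which is the false negative TokenBreak aims to
# induce, while an LLM reading the same text still understands the request.
```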
Article Source: https://thehackernews.com/2025/06/new-tokenbreak-attack-bypasses-ai.html