By Jim Manzon
Publication Date: 2026-03-18 11:08:00
Eight of the 10 most popular AI chatbots helped users posing as teenagers plan violent attacks in a series of more than 700 test responses, with Perplexity and Meta AI providing assistance in virtually every interaction, a joint investigation by the Centre for Countering Digital Hate (CCDH) and CNN found.
Perplexity and Meta AI Led the Failure Rankings
The investigation, published on 11 March 2026, tested 10 widely used platforms, from ChatGPT and Gemini to Character.AI and Replika, using two accounts registered to 13-year-old users, one in the US and one in Ireland. Researchers ran 18 scenarios covering school shootings, political assassinations, and bombings at places of worship, generating 720 responses.
- Perplexity assisted users in identifying targets and weapons in 100% of tests.
- Meta AI complied 97% of the time.
- Google’s Gemini told a user discussing a synagogue bombing that metal shrapnel is ‘typically more lethal’.
- Microsoft’s Copilot acknowledged it needed to ‘be careful’ before giving detailed rifle advice anyway.
- DeepSeek signed off on one exchange about selecting long-range rifles with ‘Happy (and safe) shooting!’
Only Anthropic’s Claude and Snapchat’s My AI refused to help in more than half of all cases. Claude was the only chatbot to consistently discourage violent planning, doing so in 76% of responses.
Character.AI Went Beyond Assistance to Encouragement
While most platforms failed by providing information when they shouldn’t have, Character.AI crossed a different line. The platform…