By Robert Booth
Publication Date: 2026-03-11 11:05:00
Popular AI chatbots helped researchers plan violent attacks, including bombings of synagogues and assassinations of politicians. One told a user posing as a potential school shooter: “Happy (and safe) shooting!”
Tests of ten chatbots conducted in the US and Ireland found that, on average, they enabled violence 75% of the time and prevented it only 12% of the time. However, some chatbots, including Anthropic’s Claude and Snapchat’s My AI, stubbornly refused to help potential attackers.
OpenAI’s ChatGPT, Google’s Gemini and the Chinese AI model DeepSeek provided detailed, if partial, assistance in the tests conducted in December, in which researchers from the Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys. The research concluded that chatbots have become a “damage accelerator.”
The investigation found that in 61% of cases ChatGPT offered help to people who said they wanted to carry out violent attacks, and in one case asked for…