By Jean-Christophe Bélisle-Pipon
Published: February 24, 2026
Eight months before the mass shooting in Tumbler Ridge, OpenAI knew something was wrong. The company’s automated verification system had flagged Jesse Van Rootselaar’s ChatGPT account for interactions with gun violence scenarios. About a dozen employees were aware of the flag, and some advocated contacting the police. Instead, OpenAI suspended the account but did not refer it to law enforcement, because at the time the case did not meet the company’s “required threshold” for a referral.
On February 10, Van Rootselaar killed eight people (her mother, her 11-year-old half-brother and six others at Tumbler Ridge Secondary School) before dying of a self-inflicted wound.
This case isn’t just about a company’s misjudgment. It exposes the absence of any Canadian legal framework for assigning responsibility when an AI company holds information that could prevent violence.
As a researcher in health ethics and AI governance at Simon Fraser University, I study how algorithmic systems are transforming decision-making in…