OpenAI Grappled with Reporting Suspected Shooter's Alarming ChatGPT Activity Before Tragedy
02/22/2026
Technology
It's unsettling to think that an 18-year-old, now accused of a horrific mass shooting in Canada, reportedly used OpenAI's ChatGPT in ways that triggered alarms within the company. The fact that her chats, filled with descriptions of gun violence, were flagged and her account banned back in June 2025 is a serious red flag. The Wall Street Journal reports that a debate ensued within OpenAI about whether to contact Canadian law enforcement. Ultimately, the company didn't, though after the tragic incident it did reach out to the authorities. An OpenAI spokesperson stated that the individual's activity didn't meet the company's criteria for reporting to law enforcement. I wonder, though: where do you draw the line?
More Than Just ChatGPT
It wasn't just the ChatGPT transcripts that were concerning. This individual also apparently created a game on Roblox that simulated a mass shooting at a mall. And, to make matters worse, she was posting about guns on Reddit. It paints a disturbing picture, doesn't it? Local police were also aware of her instability: they had been called to her family's home after she started a fire while under the influence of drugs.

It's a chilling reminder that these technologies, while powerful, can be exploited by individuals with harmful intentions. LLM chatbots, like those from OpenAI, have faced accusations of contributing to mental breakdowns in users. There have even been lawsuits citing chat transcripts that seemingly encouraged or assisted people in committing suicide. If you're struggling with difficult thoughts, please know that help is available. You can call or text 988 to reach the 988 Suicide and Crisis Lifeline.

According to the spokesperson, OpenAI has shared information with law enforcement on the individual and her use of ChatGPT, and will continue to support the investigation.

Source: TechCrunch