OpenAI Flagged Shooting Suspect Internally, Chose Not to Call Cops

OpenAI staff raised alarms about a Canadian mass shooting suspect months before the attack but never contacted police.

OpenAI employees internally flagged concerning activity from a Canadian mass shooting suspect months before the incident — but the company decided not to alert law enforcement.

According to the Wall Street Journal, ChatGPT maker OpenAI identified troubling interactions from Jesse Van Rootselaar as far back as last June. Staff raised concerns after the suspect described acts of violence in conversations on the platform.

OpenAI's call? The activity didn't clear the company's internal threshold for reporting to police.

The revelation raises sharp questions about where AI companies draw the line between user monitoring, privacy, and public safety. When your chatbot is absorbing descriptions of violence from someone who later carries out a mass shooting, "didn't meet the bar" is a brutal phrase to defend.

The case will almost certainly intensify pressure on AI firms to establish clearer safety reporting protocols.