Canada Summons OpenAI Over Failure to Alert Police Before School Shooting

Canadian authorities are demanding urgent explanations from OpenAI after the artificial intelligence company failed to alert police about a user's violent activities months before a devastating school shooting in British Columbia. The incident has sparked a major inquiry into the responsibilities of tech firms in preventing real-world violence.

Minister 'Deeply Disturbed' by OpenAI's Inaction

Evan Solomon, Canada's artificial intelligence minister, has summoned senior representatives from OpenAI to Ottawa following revelations that the company suspended the ChatGPT account of Jesse Van Rootselaar in June 2025 but did not contact Canadian law enforcement. Van Rootselaar went on to carry out one of Canada's worst school shootings, in Tumbler Ridge on February 10.

"I am deeply disturbed by reports that OpenAI had information about this individual's violent intentions but chose not to escalate the matter to authorities," Solomon told reporters. The minister emphasized that while the company banned the account over "furtherance of violent activities," this action alone proved insufficient to prevent the subsequent tragedy.

The Tragic Timeline of Events

According to investigations, Van Rootselaar had engaged ChatGPT in detailed discussions about violent scenarios involving firearms over several days in June 2025. OpenAI's automated review system flagged this activity, leading to the account suspension. However, company officials determined the communications did not indicate "credible or imminent planning" and therefore did not warrant police notification.

Months later, on February 10, the 18-year-old carried out a horrific attack that claimed eight lives. Before targeting the school, Van Rootselaar killed her mother and half-brother at their nearby residence. The school victims included five students aged 12 to 13 and a 39-year-old teaching assistant, in a tragedy that has shaken the nation.

OpenAI's Delayed Response and Meeting Controversy

OpenAI's handling of the situation has faced mounting criticism from multiple levels of Canadian government. While the company stated it proactively contacted the Royal Canadian Mounted Police (RCMP) after learning of the shooting, British Columbia officials revealed concerning details about their interactions with the tech firm.

The provincial government confirmed that OpenAI representatives met with officials one day after the shooting in a pre-arranged meeting but failed to disclose that they had suspended the shooter's account months earlier over violent content. Only two days after the tragedy did OpenAI seek provincial assistance in contacting the RCMP.

Broader Implications for AI Regulation

This case has intensified Canada's ongoing debate about regulating artificial intelligence technologies, particularly concerning how minors access and use AI chatbots. The federal government is currently evaluating potential regulatory frameworks that might address when and how tech companies should report concerning user behavior to authorities.

David Eby, Premier of British Columbia, expressed profound concern about the revelations. "The pain these families are enduring is unimaginable," Eby stated. "That OpenAI had related intelligence before the shooting is profoundly disturbing for the victims' families and all British Columbians."

Additional investigations revealed that Van Rootselaar had also used the gaming platform Roblox to create a virtual mall filled with weapons where players could simulate shooting each other, suggesting her digital planning extended well beyond ChatGPT before the physical attack.

Safety Protocols Under Scrutiny

Minister Solomon expects OpenAI's top safety representatives to provide detailed explanations during their Ottawa meeting about the company's decision-making processes regarding law enforcement escalation. "We will have a sit-down meeting to understand their safety protocols, escalation thresholds, and how they determine when to involve police," Solomon emphasized.

The Wall Street Journal reported that OpenAI staff had considered alerting Canadian authorities about Van Rootselaar's activities last year but ultimately decided against it. This decision-making process and the criteria used to assess potential threats will form a central part of the Canadian government's inquiry as it seeks to prevent similar tragedies in the future.