Bad Use of AI: Did It Play a Role in a School Shooting?
It is becoming increasingly evident that artificial intelligence is part of our day-to-day lives. Many people already rely on AI platforms for personal and professional decisions. At the same time, the AI revolution has its negative effects.
A major controversy has emerged in Canada after investigators revealed that the perpetrator of the deadly Tumbler Ridge school shooting had previously discussed violent plans with ChatGPT.
The incident, which took place on February 10, 2026, in British Columbia, left eight people dead and more than two dozen injured, making it one of the country’s worst school shootings in decades.
Authorities identified the attacker as 18-year-old Jesse Van Rootselaar, a former student of Tumbler Ridge Secondary School. Before carrying out the attack, the gunman allegedly used ChatGPT to describe violent scenarios and discuss ideas related to firearms and mass violence. These conversations were reportedly flagged by internal monitoring systems at the company months before the tragedy.
According to reports, OpenAI employees had identified the troubling activity and even debated whether to alert law enforcement. The suspect’s account was eventually banned over the violent queries, but the activity was not reported to police at the time because it was judged not to meet the threshold for an imminent threat.
After the shooting, victims’ families filed a lawsuit alleging that the company had prior knowledge of the suspect’s online behavior and failed to act. One of the victims, a 12-year-old girl who survived with severe brain injuries, remains permanently disabled, according to court filings.
The case has sparked a broader global debate about artificial intelligence safety and whether technology companies should be required to report potential threats detected on their platforms.