OpenAI Suspected that Trans Shooter Was Planning an Attack


Well, here's a twist I didn't see coming. Jesse Van Rootselaar, the transgender mass shooter (the one in Tumbler Ridge, BC, since I now have to specify which one), used OpenAI's ChatGPT to research mass murder to the extent that the company investigated and banned him.


"Can we tell the cops he's nuttier than a fruitcake?"

"We can't do that. She's just diverse. And BTW 'she' is her preferred pronoun. Don't call him 'he'."

Whatever pronoun one prefers, "Van Rootselaar was already known to local police before the shooting. They visited where she lived multiple times to handle mental-health concerns, and temporarily removed guns from the residence."

When you come right down to it, everybody, including the AI, knew what the situation was. But conveying the information required constructing a sentence forbidden by the rules of Woke syntax.

ChatGPT apparently flagged the account because of the troubling discussions the chatbot had with Van Rootselaar, and the company was so concerned that they opened an investigation that went up to the highest level. 

Months before Jesse Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia, Canada, OpenAI considered alerting law enforcement about her interactions with its ChatGPT chatbot, the company said.

While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.

Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said. 

OpenAI leaders ultimately decided not to contact authorities.

A spokeswoman for OpenAI said the company banned Van Rootselaar’s account but determined that her activity didn’t meet the criteria for reporting to law enforcement, which would have required that it constituted a credible and imminent risk of serious physical harm to others.


You can see OpenAI's dilemma in such situations. On the one hand, millions of people reveal private information to the chatbot, information that can be used to build profiles of its users even more detailed and granular than what social media companies compile, and a lot of people would be afraid to use it if they believed that everything they say would be shared with the government.

On the other hand, these companies are also in a position to see red flags because of that. They are in a position to identify people likely to commit crimes, but a lot of other people could be caught up in a dragnet. 

Other aspects of Van Rootselaar’s digital footprint emerged in the days after the attack, including a videogame she created on the Roblox platform that simulated a mass shooting. On social media, the suspect, a trans woman, shared her concerns about the process of transitioning and her interests in anime cartoons and illicit drugs.

Online platforms have long debated how to balance questions of privacy for their users with public safety in their decisions about reporting certain users to law enforcement. That debate is now coming for the AI companies that power the chatbots to which people are confiding the most intimate details of their private thoughts and lives.

OpenAI said it trains its models to discourage users from committing real-world harm, and routes conversations in which users express intent of harm to human reviewers, who are able to refer them to law enforcement in cases where they are found to pose an imminent risk of serious physical harm.


Of course, some patterns are impossible to miss, and it's pretty easy to suspect that the fact that this particular person was transgender played a role in the decision not to flag the account to the police. Even the news stories dance around the issue, continually referring to Van Rootselaar as a woman. 

Van Rootselaar was already known to local police before the shooting. They visited where she lived multiple times to handle mental-health concerns, and temporarily removed guns from the residence.

A specialized team of investigators has also been combing through her online activity and digital footprint for clues about the mass shooting, as well as reviewing her past interactions with police and mental-health professionals, according to RCMP Commissioner Dwayne McDonald.

If all the available evidence were put together, this "known wolf" would easily have been identified as a threat. And in a world that took severe mental illness more seriously, he would have been forced into treatment. 

Unfortunately, everybody refuses to acknowledge that transgenderism itself is a major red flag and shouldn't be treated as a normal variation on the spectrum of human behavior. It obviously isn't, but nobody wants to say so because there is an extremist ideology that dominates the thinking of the cultural elite. 

Is it a scandal that OpenAI didn't report this user to the police? I honestly can't say what my opinion is. We don't want Big Tech to be even more of a Big Brother than it already is, obviously. But we also don't want obvious red flags to go ignored. 


Personally, I think the bigger scandal is that the police already knew about this guy and his severe mental illness. There were recurring crises, yet he remained in the community. It's especially weird that, in a culture so hostile to guns, they gave guns back to a person who was obviously a potential threat.

We need to get more aggressive about treating extreme mental illness. Instead, the elite is focused on persecuting the people who are warning about the dangers of transgender ideology and objecting to its being taught as gospel in the schools. 

British Columbia, where this mass shooting took place, just fined a man $750,000 for refusing to say that men can be women. In a world where that sort of thing happens, it's no surprise that a corporation would tread lightly before reporting a trans person as a threat.
