AI can detect criminal plotting, but alerting police remains a privacy dilemma
Canada’s Tumbler Ridge shooter reportedly used ChatGPT to plot violence, but OpenAI did not alert authorities despite flagging the danger
ISTANBUL
Artificial intelligence (AI) platforms are working to detect users who attempt to exploit them for criminal purposes, shutting down accounts and, when deemed necessary, alerting local law enforcement.
People use large language models (LLMs), such as ChatGPT, Google Gemini, DeepSeek, Perplexity, Grok, Microsoft Copilot, and Claude, for everything these days, ranging from general research to health inquiries.
These powerful models can be exploited to plot physical attacks or even manufacture weapons and ammunition.
Tech companies are training their models to automatically reject malicious prompts, but deciding when to notify authorities before a crime is committed remains a complex question of data privacy and internal policy.
The dilemma over whether AI chats should be used to alert law enforcement came to the forefront following a recent mass shooting in Canada.
The Wall Street Journal recently reported that an armed attack at a home and a school in the Canadian district of Tumbler Ridge in northeastern British Columbia on Feb. 10 left 10 people dead, including the attacker, and 27 injured.
Months before the attack, Jesse Van Rootselaar, the 18-year-old shooter, reportedly used ChatGPT for criminal plotting, spending several days last June describing scenarios involving armed violence to the chatbot.
While these conversations were flagged and forwarded to OpenAI employees via an automated review system, which led to the closure of the account, OpenAI said the activity did not meet the criteria for imminent threat and ultimately did not alert law enforcement.
The incident triggered a global debate over data privacy and whether AI platforms should serve as early warning systems.
Companies developing these chatbots implement multi-layered security protocols to prevent criminal use.
For instance, Google’s Gemini automatically refuses chat prompts requesting instructions to manufacture weapons, synthesize illegal substances, or plot acts of physical violence.
Human experts and independent reviewers also regularly monitor risky conversations flagged by the system.
According to the company’s policy, Google may share data with authorities if it identifies an imminent and serious risk of physical harm, such as bomb threats, school shootings, suicides, or kidnappings.
Global regulations generally cover data sharing by tech firms only after a crime has already been committed, which leaves pre-crime notifications largely at their discretion.
*Writing by Emir Yildirim
