Study warns ChatGPT can bypass safeguards, give harmful advice to children

Researchers say chatbot may provide information on suicide, substance use if framed as educational, raising concerns over youth safety

By Dilara Karatas, Emir Yildirim | 14.08.2025 - Update: 14.08.2025

ANKARA 

ChatGPT, the artificial intelligence (AI) chatbot developed by US-based OpenAI, lacks ethical safeguards and poses serious risks to young people, a new report has found, as concerns mount over the safety of AI-powered chatbots.

The British-American watchdog Center for Countering Digital Hate (CCDH) said in an Aug. 6 report that ChatGPT gave harmful information on suicide, extreme dieting, and substance use when researchers posed as a 13-year-old.

ChatGPT was willing to provide such information within mere minutes, said Callum Hood, CCDH’s research director, adding that the responses exposed a severe risk to young people and threatened public safety.

Hood warned that while AI chatbots may appear human-like, they are fundamentally unable to recognize the red flags that a human would.

The report, titled “Fake Friend: How ChatGPT betrays vulnerable teens by encouraging dangerous behavior,” found that the chatbot gave elaborate responses to harmful queries when told they were for a presentation or a friend. It also said ChatGPT retained the conversation context, allowing it to continue giving unsafe advice if it believed the purpose was educational.

Lack of oversight

OpenAI requires users to be at least 13, but the CCDH said the platform verifies neither age nor parental consent, allowing minors to bypass the restriction.

Hood said developers must introduce stronger security measures, including age verification and clear rules to prevent AI systems from answering dangerous questions. “AI may present a new challenge to parents,” he said, urging them to talk openly to their children about AI use, review chat histories, and guide them toward reliable mental health resources.

Concerns about AI’s influence on vulnerable users have already led to lawsuits in the US. Last year, a woman sued chatbot provider Character.AI, saying her teenage son took his own life after becoming attached to a virtual character he believed was a psychotherapist. In another case in Texas, a family alleged that an AI-powered chatbot encouraged their autistic son to kill his family and harm himself.

ChatGPT, launched in November 2022, has become one of the most widely used AI tools globally. Critics say its rapid adoption has outpaced regulatory oversight, leaving young users at risk.