OpenAI introduces parental controls with mental health notifications

The company said it worked with mental health and teen experts to help ChatGPT recognize signs that a teen may be thinking of harming themselves.
By Jessica Hagen, Executive Editor
OpenAI announced that it has added parental controls with mental health notifications to its AI platform, as the company "knows some teens turn to ChatGPT during hard moments."

"We’ve added protections that help ChatGPT recognize potential signs that a teen might be thinking about harming themselves," the company said in a statement.

"If our systems detect potential harm, a small team of specially trained people reviews the situation. If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out." 

OpenAI said it worked with mental health and teen experts to design the protections, but clarified that no system is perfect. 

"We know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent," the company said. 

OpenAI is also working on processes to reach law enforcement or other emergency services in cases where ChatGPT detects an imminent threat to life and the company cannot reach a parent. 

As part of OpenAI's standard parental controls, the company said parents and teens can connect their accounts, and the teen will automatically receive content protections, including reduced exposure to viral challenges, graphic content, extreme beauty ideals, and sexual, romantic or violent roleplay. 

Parents will also be able to remove image generation capabilities, turn off memory so ChatGPT does not save memories when responding, set quiet hours, turn off voice mode and opt out of model training.

"Over the coming months, we’re building an age prediction system⁠ that will help us predict whether a user is under 18 so that ChatGPT can automatically apply teen-appropriate settings," OpenAI said. 

"In instances where we’re unsure of a user’s age, we’ll take the safer route and apply teen settings proactively. In the meantime, parental controls will be the most effective way for parents to ensure their teens are opted into our age-appropriate teen experience." 

THE LARGER TREND

According to a study published by Common Sense Media, 72% of teenagers have used AI companions, with 12% of those teens using the technology for emotional or mental health support.  

"Current research indicates that AI companions are designed to be particularly engaging through 'sycophancy,' meaning a tendency to agree with users and provide validation, rather than challenging their thinking," the study's authors wrote. 

"This design feature, combined with the lack of safeguards and meaningful age assurance, creates a concerning environment for adolescent users, who are still developing critical thinking skills and emotional regulation."

The non-profit said parents and caregivers should maintain ongoing conversations with teens about the fundamental differences between genuine human relationships and AI interactions. 

Another company working in the space is Aura, which offers AI-powered online protection for families and individuals, defending against identity theft, scams and online threats. 

The company said it partnered with child psychologists to create online tools that protect children from online bullying. Caregivers also gain insights into supporting healthy screen time and their children's overall well-being.

In March, Aura closed a Series G funding round, raising $140 million in equity and debt, bringing its valuation to $1.6 billion.