
California Gov. Gavin Newsom signs legislation on AI chatbots

The package of bills aims to bolster protections for children using online chatbots, including prohibiting chatbots from representing themselves as healthcare professionals.
By Jessica Hagen, Executive Editor
California state capitol building

Photo: Justin Sullivan/Staff/Getty Images

California Gov. Gavin Newsom has signed a package of bills aimed at creating protections for children online and regulating emerging technologies such as AI.

The bills establish requirements for "companion chatbot" platforms, including protocols "to identify and address users' suicidal ideation or expressions of self-harm."

Chatbots will also be prohibited from representing themselves as healthcare professionals. 

Companies must also disclose that conversations are artificially generated, and platforms must take steps to prevent minors from viewing sexually explicit images generated by the chatbot. Providing break reminders will likewise be mandatory.

Companies offering chatbots will also be required to share their protocols for dealing with self-harm and to report statistics to the Department of Public Health on how often they provided users with crisis center prevention notifications.

Additional requirements include age verification, social media warning labels, penalties for deepfake pornography and guidance to prevent cyberbullying.

The bills also aim to create "clear accountability for harm caused by AI technology by preventing those who develop, alter or use AI from escaping liability" by claiming the technology acted autonomously. 

"Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. 

"We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability," he said. "We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale."

THE LARGER TREND

Earlier this month, AI giant OpenAI announced it was rolling out new parental controls and mental health safeguards for ChatGPT, noting that some teens turn to the AI during difficult times.

The new system will flag signs of potential self-harm, prompting review by trained specialists who may alert parents by email, text or phone if distress is detected.

The company stated that the system was developed with input from mental health experts but acknowledged that it is not foolproof.

Through the new controls, parents can link accounts with their teens to allow for automatic content protections, including filters for graphic material and viral challenges. They can also disable image generation, voice mode and memory features. 

OpenAI said it plans to add an age prediction system to apply teen-appropriate settings automatically and is building processes to contact emergency services if parents can’t be reached.