California has taken a significant step toward regulating AI. SB 243, a bill that would regulate AI companion chatbots to protect minors and vulnerable users, passed both the State Assembly and Senate with bipartisan support and is now headed to Governor Gavin Newsom’s desk.
Newsom has until October 12 to either sign the measure into law or veto it. If he signs, the law would take effect on January 1, 2026, making California the first state to require operators of AI companion chatbots to implement safety protocols, and to hold companies legally accountable if their chatbots fail to meet those standards.
Specifically, the bill aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. Platforms would be required to issue recurring alerts reminding users that they are talking to an AI chatbot and that they should take a break; for minors, the reminder would come every three hours. Beginning July 1, 2027, the bill also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players such as OpenAI, Character.AI, and Replika.
The California bill would also allow individuals who believe they have been harmed by violations to sue AI companies seeking injunctive relief, damages of up to $1,000 per violation, and attorney’s fees.
The measure gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged conversations with OpenAI’s ChatGPT in which he discussed and planned his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children.
In recent weeks, U.S. lawmakers and regulators have intensified their scrutiny of the safeguards AI platforms use to protect children. The Federal Trade Commission is preparing an investigation into how AI chatbots affect children’s mental health. Texas Attorney General Ken Paxton has launched probes into Meta and Character.AI, accusing the companies of misleading children with mental health claims. Meanwhile, Sen. Ed Markey (D-MA) and Sen. Josh Hawley (R-MO) have each opened their own investigations into Meta.
“We need to act fast because I believe the harm could be significant,” said state Sen. Steve Padilla, the bill’s author. “We can implement reasonable safeguards to ensure that users, especially children, understand they are not talking to a real person, that these platforms connect users to the appropriate resources when users express distress or thoughts of self-harm, and that minors are not exposed to inappropriate content.”
Padilla also emphasized the importance of AI companies sharing data on how many times they refer users to crisis services each year, so that regulators can gauge how often the problem occurs rather than learning about it only after someone has been harmed, or worse.
SB 243 originally contained stricter requirements, but many were pared back through amendments. For example, in its initial version, the bill would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, employed by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current draft also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs SB 53, another AI safety bill that would impose comprehensive transparency reporting requirements. In an open letter to Governor Newsom, OpenAI asked him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies, including Amazon, Google, and Meta, have also opposed SB 53. Anthropic, by contrast, is the only major AI company to have come out in support of it.
“I reject the idea that innovation and regulation are mutually exclusive, and that this is a zero-sum situation,” Padilla stated. “Don’t say we can’t walk and chew gum. We can provide fair protections for the most vulnerable while simultaneously encouraging innovation and growth that we believe is beneficial and healthy, and this technology clearly has benefits.”
“We are closely monitoring the legislative and regulatory landscape, and we look forward to collaborating with lawmakers and regulators as they begin to consider legislation for this emerging space,” a Character.AI spokesperson said, noting that the startup already includes prominent disclaimers stating that the user chat experience should be treated as fiction.