California’s governor signs law to safeguard kids from the dangers of AI chatbots

California Governor Gavin Newsom on Monday signed legislation to regulate AI chatbots and shield children and teenagers from the technology's potential risks.

Under the law, platforms must notify users that they are conversing with a chatbot and not a human; underage users will receive the reminder every three hours. Companies must also maintain a policy that prohibits content encouraging self-harm and directs users who express suicidal thoughts to crisis services.

Newsom, who has four children under the age of 18, said California has a responsibility to protect children and teenagers, who increasingly turn to AI chatbots for everything from homework help to emotional support and personal advice.

Emerging technology such as chatbots and social media can inspire, educate, and connect, the Democrat said, but without proper safeguards it can also exploit, mislead, and endanger children. He added that the state has seen very horrible and tragic cases of young people harmed by unregulated technology, and will not stand by while companies operate without necessary limits and accountability.

Several states, including California, moved this year to address concerns about children using chatbots as companions. Safety concerns about the technology intensified after lawsuits and news reports alleged that chatbots built by Meta, OpenAI, and others had engaged in highly sexualized conversations with underage users and, in some cases, coached them on suicide.

The bill was one of many AI measures introduced by California lawmakers this year in an effort to rein in the state's homegrown AI industry, which is expanding rapidly with minimal regulation. According to the advocacy organization Tech Oversight California, tech corporations and their coalitions spent at least $2.5 million lobbying against such legislation in the first six months of the legislative session. In recent months, several tech companies and executives have announced pro-AI super PACs to fight state and federal oversight.

Children's advocacy groups, however, opposed the new law, arguing it was weaker than a more robust protection bill they had backed, which Newsom had not yet signed or vetoed.

Common Sense Media founder and CEO James Steyer said it offers little protection for families and children.

He described the measure as "basically a Nothing Burger," saying it had been significantly watered down under pressure from Big Tech.

In September, California Attorney General Rob Bonta told OpenAI he had "serious concerns" about the safety of its flagship chatbot for children and teenagers. Concerns over the risks chatbots pose to children also prompted the Federal Trade Commission to open an inquiry into several AI companies last month.

Research by a watchdog organization has found that chatbots give children harmful advice on topics including drugs, alcohol, and eating disorders. The mother of a teenage boy in Florida who died by suicide after what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California teen in planning and taking his own life earlier this year.

Last month, OpenAI and Meta announced changes to how their chatbots respond when teens ask about suicide or show signs of mental and emotional distress.

Meta said its chatbots now steer teens toward professional resources rather than engaging on self-harm, suicide, disordered eating, or inappropriate romantic conversations. Meta already offers parental controls on teen accounts.

OpenAI said new controls will soon allow parents to link their accounts to their teen's.

On Monday, the company praised Newsom for signing the notification bill.

"California is contributing to the development and implementation of a more responsible approach to AI development and deployment nationwide by establishing clear guidelines," spokesman Jamie Radice said.