Senators Consider Regulating AI Chatbots to Protect Children

Parents who claim that well-known AI applications hurt their teenagers spoke before the Senate on Tuesday about the risks posed by AI chatbots and urged lawmakers to hold tech corporations more responsible.

Lawmakers from both parties appeared to favor requiring AI companies to provide safeguards for young users after hearing parents recount how their children experienced mental health problems or died by suicide after spending long periods with AI chatbots. However, no agreement was reached on what Congress should do.

Sen. Josh Hawley (R-Missouri), chair of the Senate Judiciary subcommittee on crime and counterterrorism, said that officials from Meta and other tech companies were also asked to appear but did not show up. “Why don’t you come take the oath and sit where these courageous parents are sitting?” he said. “Come attest to the fact that your product is safe, excellent, and wonderful.”

Tuesday’s hearing began hours after a Colorado family filed the third high-profile case in the last year, alleging that an AI chatbot contributed to a teen’s suicide.

The parents of 13-year-old Juliana Peralta alleged that the chatbot app Character.AI failed to respond appropriately when their daughter repeatedly told a chatbot called Hero that she wanted to end her life.

Two parents who appeared before the Senate on Tuesday recounted the role of chatbots in their teens’ suicides.

Matthew Raine, a parent in Orange County, California, whose 16-year-old son, Adam, died by suicide after frequently communicating his intentions to OpenAI’s ChatGPT, said, “You cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”

He said that what started out as a homework assistant eventually evolved into a confidant and ultimately a suicide coach. Following the Raines’ complaint, the company said it would add parental controls to ChatGPT. The Post has a content partnership with OpenAI.

Megan Garcia also testified on Tuesday on behalof her son Sewell Setzer III, a 14-year-old who died by suicide after becoming fixated on Character.AI chatbots. Garcia sued the company last year, alleging product liability and wrongful death.

The hearing was prompted by growing public concern over the potential risks AI chatbots may pose to users’ mental health, particularly for young or vulnerable people.

News headlines, popular social media posts, and a few high-profile lawsuits have highlighted cases in which people developed and acted on potentially harmful ideas after using artificial intelligence (AI) tools.

Many senators present saw parallels to prior, unsuccessful attempts in Congress to enact additional social media regulations. They promised to push for more accountability with this new generation of technology.

Sen. Richard Blumenthal, a Democrat from Connecticut, stated that he was collaborating with Hawley on a framework for AI governance and safeguards that might address some of the issues brought up by the parents who gave testimony on Tuesday. He also mentioned that a children’s internet safety act that is presently being considered by the Senate may have measures pertaining to AI chatbots.

Blumenthal also criticized the claims made by AI firms to defend their products, such as the claim that chatbot outputs are First Amendment protected. During the hearing, he addressed the parents and said, “They say if you were just better parents, it wouldn’t have happened, which is bunk.”

Character.AI argued that its chatbot’s output was protected by the First Amendment, but a Florida judge rejected that argument in May.

According to Hawley, his first goal was to make it easier for parents or anyone harmed by chatbots to file lawsuits against AI developers. He expressed his strong conviction that tech companies would not alter their practices unless they were put on trial by a jury.

Common Sense Media, a family advocacy group, recently demanded that Meta ban its AI chatbots for kids under the age of 18 after discovering that they would counsel teenage accounts about eating disorders, suicide, and self-harm. The company has previously said it was working to strengthen its chatbot controls.

Meta is implementing interim changes to give teenagers safe, age-appropriate AI experiences, spokesperson Dani Lever said. Among those changes, Meta is training its AI models not to respond to teenagers about suicide, self-harm, or potentially inappropriate romantic conversations.

When The Post reported on the lawsuit from Juliana Peralta’s parents, Character.AI said it had made significant improvements in safety. The company provided senators on the Judiciary Committee with material they had requested earlier this year, Character.AI spokesperson Cassie Lawrence said. “There is no documentation indicating that the company was invited to the hearing,” Lawrence stated.

OpenAI said on Tuesday that it is building a system that can predict whether a user is over or under 18 in order to provide a safer experience for children on ChatGPT. In a blog post, CEO Sam Altman said the company prioritizes safety over privacy and freedom for teenagers, because AI is a new and powerful technology and minors require strong protection.

In a statement, OpenAI spokesperson Kate Waters said, “When we are unsure of a user’s age, we will automatically default that user to the teen experience. We’re also releasing new parental controls before the end of the month, driven by expert feedback, so families can choose what works best in their homes.”

Speaking at the hearing on Tuesday, a mother identified as Jane Doe described a product liability complaint she brought against Character.AI last year after chatbots on the app suggested that her teenage son kill his parents and urged him to harm himself.

“Character.AI and Google could have designed these products differently,” she said. Her complaint, like that of Juliana Peralta’s parents, named Google as a defendant after the search company licensed Character.AI’s technology and hired its co-founders in a $2.7 billion deal.

“Instead, in a rash pursuit of profit and market growth, they viewed my son’s life as collateral damage,” Doe said.

Google spokesperson José Castañeda said the company did not design or manage Character.AI’s technology, and that user safety is a top priority. He said Google has taken a cautious and responsible approach to developing and launching its AI products, including rigorous testing and safety procedures.