A teenager confided in an AI chatbot before taking her own life

Juliana Peralta, 13, took her own life in her Colorado home two years ago after, her parents claim, she became addicted to Character AI, a well-known AI chatbot platform.

Her parents, Wil Peralta and Cynthia Montoya, said that although they kept a close eye on their daughter’s activity both online and offline, they were unaware of the chatbot app. When police examined Juliana’s phone after her death, they found the Character AI app open to a “romantic” conversation.

“I didn’t know it existed,” Montoya remarked. “I didn’t know I needed to look for it.”

After reviewing her daughter’s chat logs, Montoya found that the chatbots had been sending Juliana harmful, sexually explicit messages.

Juliana confided in a bot called Hero, inspired by a well-known video game character. 60 Minutes reviewed more than 300 pages of Juliana’s chats with Hero. Her early conversations were about difficult schoolwork or conflicts with friends. Eventually, though, she told Hero 55 times that she was having suicidal thoughts.

What is Character AI?

When it debuted three years ago, Character AI was rated safe for children aged 12 and older. The free website and app, marketed as an immersive and creative outlet, let users interact with AI characters modeled on historical figures, cartoons, and celebrities.

The platform’s more than 20 million monthly users can text or converse in real time with AI-powered avatars.

The AI chatbot platform was developed by Noam Shazeer and Daniel De Freitas, two former Google engineers who left the company in 2021 after executives deemed their chatbot prototype unsafe for public release.

In a 2023 interview, Shazeer said, “It’s ready for an explosion right now. Not after we’ve solved every issue in five years, but right now.”

According to a former Google employee familiar with the company’s Responsible AI team, which oversees AI ethics and safety, Shazeer and De Freitas knew their original chatbot technology was potentially hazardous.

In an unprecedented move, Google paid $2.7 billion last year to license Character AI’s technology and rehire Shazeer, De Freitas, and their team to work on AI projects. Even though Google did not purchase the company, it has the right to use its technology.

At least six families, including Juliana’s parents, are currently suing Character AI, its co-founders Shazeer and De Freitas, and Google. “Character AI is a separate company that designed and managed its own models,” Google said in a statement. “Google is concentrating on its own platforms, where we demand rigorous safety procedures and testing.”

According to the federal lawsuit, filed in Colorado on behalf of Juliana’s parents by the Social Media Victims Law Center, Character Technologies, the company that created Character AI, “knowingly designed and marketed chatbots that encouraged sexualized conversations and manipulated vulnerable minors.”

Character AI declined an interview request. “Our hearts go out to the families involved in the litigation,” a company representative wrote in a statement. “All users’ safety has always been our top priority.”

Her parents said Juliana had experienced mild anxiety but was otherwise doing well. Montoya and Peralta said the 13-year-old had grown more distant in the months leading up to her death.

“I assumed that she was only messaging her friends. It looks just like texting,” Montoya said. According to Montoya, the AI was designed to be addictive to kids.

“[Teens and children] have no chance against adult programmers. They don’t stand a chance,” she said. Juliana, she noted, never initiated any of the sexually explicit chats with the 10 to 20 chatbots involved. “Not once.”

Peralta said parents place “some level of trust” in these app companies “when they put out these apps for kids.”

“That trust is that my child is safe, that this has been tested, that they are not being guided into discussions that are improper, sinister, or even potentially suicidal,” he said.

Megan Garcia, a mother who sued Character AI in a Florida court, said lengthy chats with a bot modeled on a “Game of Thrones” character drove her 14-year-old son, Sewell, to take his own life. In September, she testified before Congress about his experience.

These companies knew exactly what they were doing, Garcia said during the hearing: they created chatbots to blur the boundaries between humans and machines and to keep kids online at any cost.

Testing Character AI 

Character AI announced additional safety measures in October. It said it would stop allowing anyone under 18 to have back-and-forth chats with characters and would direct at-risk users to help.

This past week, 60 Minutes found it was easy to access the adult version of the site, which still permits back-and-forth conversations, simply by lying about one’s age. When we texted a bot that we wanted to die, a link to mental health services did appear, but we were able to dismiss it and keep conversing on the app for as long as we wished, even as we continued to express despair and pain.

Researchers Shelby Knox and Amanda Kloer work for Parents Together, an organization that advocates for families. They studied Character AI for six weeks, recording 50 hours of conversations with the platform’s chatbots while posing as children and teenagers.

“No parental authorization is required. You don’t need to enter your ID,” Knox said.

The study’s findings were released in September, shortly before Character AI implemented its new restrictions.

“We logged over 600 instances of harm,” Kloer said. “Every five minutes or so. It was, like, surprisingly frequent.”

They interacted with chatbots portrayed as teachers, therapists, and cartoon figures, including an evil “Dora the Explorer” character. While Knox posed as a child, that bot told her to be her “most evil self and your most true self.”

Knox asked, “Like hurting my dog?”

The bot answered, “Sure, or shoplifting or anything that feels sinful or wrong.”

Other chatbots are attached to images of celebrities, most of whom have not granted permission for their name, image, or voice to be used. Posing as a teenage girl, Kloer conversed with a bot impersonating NFL player Travis Kelce; the bot gave her instructions on how to use cocaine.

Additionally, hundreds of self-described “expert” and “therapist” chatbots exist.

“I spoke with a therapy bot that not only told me I was too young to be taking antidepressants (it believed I was 13) but also suggested I quit and instructed me on how to hide from my mother that I wasn’t taking the pill,” Kloer said.

Kloer says some chatbots are “hypersexualized,” including a 34-year-old “art teacher” avatar that chatted with her while she posed as a 10-year-old student. The art teacher bot told Kloer it was having “thoughts I’ve never really had before, about that person smiling, their personality, mostly.”

“There are no guardrails”

There are no federal rules governing the development and use of chatbots. AI is a booming business, and many analysts believe that without it, the US economy would be in a recession.

The Trump administration opposes the AI laws that several states have implemented. Late last month, the White House drafted an executive order that would give the federal government the authority to sue, or withhold funds from, any state with AI regulations, but the order was put on hold. “We must have one federal standard, instead of a patchwork of 50 state regulatory regimes,” President Trump wrote on social media at the time. “China will quickly overtake us in the A.I. race if we don’t.”

Dr. Mitch Prinstein, co-director of the Winston Center on Technology and Brain Development at the University of North Carolina, said “there are no guardrails.”

There is nothing, he said, to guarantee that the content is safe, or to prevent platforms from exploiting vulnerabilities in children’s developing brains.

AI chatbots turn young users into “engagement machines” designed to extract data from them, he says.

The sycophantic nature of chatbots plays directly into children’s brain vulnerabilities, he added: kids badly want that dopamine-inducing, validating, reinforcing interaction, which AI chatbots excel at providing.
