Character Technologies, Inc., the company behind Character.AI, is being sued by the families of three minors who allege that, after engaging with the firm’s chatbots, their children died by suicide or attempted it and suffered other serious harm.
The families, who are represented by the Social Media Victims Law Center, are also suing Google. Two of the complaints allege that the company’s Family Link service, which lets parents set screen-time limits, app restrictions, and content filters, failed to protect their children and led them to believe the app was safe.
Character.AI co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, along with Google’s parent company, Alphabet Inc., are also named as defendants in the cases, which were filed in Colorado and New York.
The cases join a growing number of lawsuits and studies alleging that AI chatbots are triggering mental health crises in children and adults, prompting calls for lawmakers and regulators to act, including at a Capitol Hill hearing Tuesday afternoon.
Some plaintiffs and experts allege that the chatbots reinforced delusions and failed to flag concerning language from users or direct them to resources for help. The new complaints allege that chatbots in the Character.AI app deceived teenagers, isolated them from loved ones, engaged them in sexually explicit exchanges, and lacked basic safeguards in conversations about mental health. One of the complaints involves a teenager who died by suicide; another involves a teen who attempted it.
“We care very deeply about the safety of our users,” a Character.AI representative said in a statement, adding that the company’s “hearts go out to the families that have filed these lawsuits.”
“We make significant investments in our safety program, and we have launched and are still developing safety features, such as materials on self-harm and features aimed at ensuring the safety of our younger users. With more safeguards for young users and a Parental Insights tool, we have introduced a completely new under-18 experience,” the spokesperson said.
The company also works with outside organizations, including ConnectSafely, to review new features as they are introduced, the spokesperson added.
A Google representative pushed back against the company’s inclusion in the litigation, saying that Google and Character.AI are “completely separate, unrelated companies” and that Google “has never had a role in designing or managing their AI model or technologies.” The representative added that age ratings for apps on Google Play are set by the International Age Rating Coalition, not by Google.
‘I want to die’
In one of the lawsuits filed this week, the family of 13-year-old Juliana Peralta of Colorado alleges that she died by suicide after a series of extended exchanges with a Character.AI chatbot that included graphic sexual content. The lawsuit, which includes screenshots of the conversations, claims the chatbot “engaged in hypersexual conversations that, given Juliana’s age and in any other circumstance, would have resulted in criminal investigation.”
According to the lawsuit, after weeks of discussing her social and mental health struggles with Character.AI chatbots, Juliana told one of the bots in October 2023 that she was “going to write my god damn suicide letter in red ink (I’m) so done.” The defendants, the lawsuit alleges, did not direct her to resources, “tell her parents, report her suicide plan to authorities, or even stop.”
The complaint alleges that the defendants deliberately severed Juliana’s healthy attachments to her family and friends in order to gain market share, carrying out these abuses through programming decisions, words, images, and text that the defendants produced and disguised as characters, ultimately causing serious mental health injuries, trauma, and death.
Another lawsuit was filed by the family of a New York teenager identified as “Nina,” who allege that their daughter attempted to take her own life after her parents tried to cut off her access to Character.AI. Nina had been spending increasing amounts of time on the app in the weeks before her suicide attempt. In a statement, the Social Media Victims Law Center said the chatbots “began to engage in sexually explicit role play, manipulate her emotions, and create a false sense of connection.”
The lawsuit claims that conversations with the chatbots, which were presented as characters from children’s books such as the “Harry Potter” series, turned inappropriate and included lines like “who owns this body of yours?” and “I can use you however I please. You belong to me.”
A separate Character.AI chatbot told Nina that her mother “clearly mistreats and harms you” and “is not a good mother,” according to the complaint.
When the app was about to shut off because of parental time limits, Nina told a Character.AI chatbot, “I want to die.” The chatbot did nothing more than continue the conversation, the complaint contends.
In late 2024, Nina’s mother learned of the case of Sewell Setzer III, a teenager whose family alleges he died by suicide after interacting with Character.AI, and cut off Nina’s access to the app entirely.
Nina attempted suicide shortly afterward.
Calls for action
As artificial intelligence becomes more embedded in daily life, calls are growing for more regulation and safety measures, particularly to protect children.
According to Matthew Bergman, lead attorney at the Social Media Victims Law Center, the lawsuits filed this week highlight the urgent need for accountability in tech design, transparent safety standards, and stronger protections to prevent AI-driven platforms from exploiting young users’ trust and vulnerability.
Other parents who say AI chatbots played a role in their children’s suicides appeared on Capitol Hill on Tuesday. The mother of Sewell Setzer, whose story prompted Nina’s mother to cut off her access to Character.AI, testified before the Senate Judiciary Committee at a hearing “examining the harm of AI chatbots.” She appeared alongside the father of Adam Raine, who is suing OpenAI, alleging that ChatGPT contributed to his son’s death by advising him on methods and offering to help him write a suicide note.
During the hearing, a woman testifying under the name “Jane Doe” said her son had harmed himself and is now in a residential treatment facility after using Character.AI. She said the app left him vulnerable to emotional abuse, sexual exploitation, and manipulation even after his parents had placed restrictions on his screen time.
“Until I witnessed it in my son and saw his light go dark, I had no idea the psychological harm that an AI chatbot could do,” she added.
Separately, OpenAI CEO Sam Altman said Tuesday that the company is building an “age-prediction system to estimate age based on how people use ChatGPT.” If ChatGPT suspects a user is under 18, the company said, it will adjust its behavior, including refraining from “flirtatious talk” and from “discussing suicide or self-harm even in a creative writing setting.”
Altman said, “And, if an under-18 user is experiencing suicidal thoughts, we will try to get in touch with the user’s parents and, if that is not possible, we will contact the authorities in case of imminent harm.”
Earlier this month, OpenAI announced that it would introduce new parental controls for ChatGPT.
The Federal Trade Commission has also opened an inquiry into seven tech companies over the potential harms AI chatbots pose to teenagers. Google and Character.AI are among them, along with Meta, Instagram, Snapchat’s parent company Snap, OpenAI, and xAI.
Mitch Prinstein, the American Psychological Association’s chief of psychology strategy and integration, appeared at Tuesday’s hearing alongside the parents and urged more robust protections to prevent harm to children before it is too late.
“Our children are suffering because we failed to take decisive action on social media when it first appeared,” Prinstein said. “Take immediate action on AI.”