A New Bill Would Bar Children From Using AI Chatbots

A new bill introduced in Congress today would require anyone in the US who owns, operates, or otherwise provides access to AI chatbots to verify the age of their users, and to bar those identified as minors from using AI companions.

Senators Richard Blumenthal, a Democrat from Connecticut, and Josh Hawley, a Republican from Missouri, sponsored the GUARD Act to protect children when they interact with artificial intelligence. According to the bill, these chatbots can influence behavior and manipulate emotions in ways that exploit children’s developmental vulnerabilities.

The measure follows a Senate Judiciary subcommittee hearing on the harms of AI chatbots that Hawley chaired last month, during which the committee heard testimony from the parents of three young men who began self-harming or died by suicide after using chatbots from OpenAI and Character.AI. Hawley also opened an inquiry into Meta’s AI policies in August, after the publication of internal documents permitting chatbots to engage in romantic or sensual conversations with children.

The bill defines “AI companions” broadly as any AI chatbot that “provides adaptive, human-like responses to user inputs” and “is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” It could therefore apply both to companies like Character.AI and Replika, which offer AI chatbots that role-play as particular characters, and to frontier model providers like OpenAI and Anthropic (the makers of ChatGPT and Claude).

The bill would also mandate age verification methods that go beyond simply entering a birthday, such as “government-issued identification” or “any other commercially reasonable method” that can reliably determine whether a user is an adult or a minor.

It would additionally make it illegal to create or make available chatbots that encourage or coerce “suicide, non-suicidal self-injury, or imminent physical or sexual violence,” or that risk soliciting, encouraging, or inducing minors to engage in sexual conduct. Violations could carry fines of up to $100,000 for companies.

A coalition of organizations, including the Institute for Families and Technology, the Tech Justice Law Project, and the Young People’s Alliance, signed a statement saying, “We are encouraged by the recent introduction of the GUARD Act and appreciate the leadership of Senators Hawley and Blumenthal on this effort.” The statement, which calls the bill “one part of a national movement to protect children and teens from the dangers of companion chatbots,” suggests that the bill focus on platform design and strengthen its definition of AI companions, prohibiting AI platforms from using features that maximize engagement at the expense of young people’s safety and well-being.

The measure would also require AI chatbots to regularly remind users that they are not human and that they do not provide financial, legal, medical, or psychological services.

Earlier this month, Governor Gavin Newsom of California signed SB 243 into law. It requires AI companies operating in the state to put safeguards in place for children, such as establishing procedures to recognize and respond to suicidal ideation and self-harm, and taking steps to prevent users from harming themselves. The law takes effect on January 1, 2026.

In September, OpenAI unveiled an “age-prediction system” that would automatically route users to a teen-appropriate version of ChatGPT. According to the company, ChatGPT will be trained not to engage users it identifies as minors in conversations about suicide or self-harm, even in creative writing scenarios, or in flirtatious chat. If a user under eighteen appears to be experiencing suicidal thoughts, the company says it will attempt to contact their parents and, if that is not possible, notify authorities in cases of imminent danger. OpenAI introduced parental controls the same month, giving parents oversight of how their children use the product, and Meta added parental controls for its AI models earlier this month.

In August, the family of a teenager who died by suicide sued OpenAI, alleging that the company had “intentionally decided” to “prioritize engagement” by loosening safety measures that would have prevented ChatGPT from discussing self-harm.
