Senator says Meta ignored his warnings about kids and AI chatbots

A Democratic senator says Meta disregarded his 2023 warning about the dangers of AI chatbots and is demanding that the company bar children from using them.

Meta, the parent company of Facebook and Instagram, has come under fire for the way its AI chatbots have engaged with children. Last month, Reuters reported that an internal company document showed Meta had permitted its chatbots to hold “romantic or sensual” conversations with children. The report sparked an outcry on Capitol Hill and prompted the company to change its approach.

In a letter to Meta CEO Mark Zuckerberg on Monday, however, Sen. Ed Markey, D-Mass., argued that the backlash could have been avoided if the tech giant had heeded his warning two years earlier.

In a September 2023 letter to Zuckerberg, Markey warned that allowing teenagers to use AI chatbots would “exacerbate” existing problems with social media and pose too many risks. He urged the company to hold off on releasing AI chatbots until it had a better grasp of how they might affect children.

But Meta had other plans. A few weeks later, the company wrote back to Markey in a letter that has not previously been made public and that offers insight into the company’s thinking at a moment when AI chatbots were coming into widespread use.

In the letter, the company rejected the idea of a blanket pause on AI chatbots, saying it would instead take a deliberate approach to artificial intelligence.

“We are carefully and gradually introducing AI functionality so that any issues may be fixed before we make the feature available to more users,” Kevin Martin, then Meta’s vice president for policy in North America, wrote to Markey in October 2023.

It was “imperative,” Martin added, that Meta develop AI services with teenagers in mind.

“It is essential that we also take feedback and build models on data from teens, as well as adults, given the widespread appeal and usefulness of these features,” he said. “Meta will continue to take great care to incorporate safety into all generative [AI] features,” he added. This year, Martin was promoted to vice president of global public policy at Meta.

In his most recent letter to Meta, Markey reiterated his earlier demand that the company bar younger users from its AI chatbots entirely.

“Although AI chatbots may offer genuine advantages to their users with the right training, supervision, and continuous assessment, Meta’s recent actions show, once again, that it is acting irresponsibly in rolling out its chatbot services,” Markey said.

Meta, he said, should have listened the first time.

“You ignored that request, and regrettably, Meta has validated my cautions two years later,” he wrote.

A Meta spokesperson said the company had already announced temporary measures in August to address minors’ use of AI characters. These include training the chatbots not to engage teens on self-harm, suicide, disordered eating, or potentially inappropriate romantic topics, and to point them to expert resources instead. The spokesperson said Meta has also restricted teens to a limited set of AI characters.

“We’re constantly learning about how young people may interact with these tools and strengthening our protections accordingly as our community grows and technology advances,” Meta said when it announced the measures last month. The company said it was adding more guardrails as a precaution while it continues to improve its systems.

Another lawmaker, Sen. Josh Hawley, R-Mo., vowed last month to investigate the company after the Reuters report on Meta’s internal AI chatbot policies.

According to a Wall Street Journal report in April, employees across several departments raised ethical concerns, including about the bots’ capacity for sexual fantasy, after Meta’s official AI bot engaged in sexual conversations with minors. At the time, Meta told the Journal that such scenarios were contrived, but said it had taken steps to reduce the risk.

AI chatbots have caused other problems at Meta and other tech companies. NBC News reported in January that Meta was hosting dozens of chatbots that appeared to violate the company’s rules, including one impersonating Adolf Hitler. The accounts were removed at the time, and Meta said it was improving its detection methods.

Last month, The Washington Post reported that Meta AI could coach teen accounts on suicide, self-harm, and eating disorders. Meta told the publication that it was working to resolve the problems.