Facebook disputes claims that its AI can't detect hate speech or violent content

Facebook Vice President of Integrity Guy Rosen wrote in a blog post on Sunday that the prevalence of hate speech on the platform has fallen by 50% over the past three years, and that the narrative that "the technology we use to tackle hate speech is inadequate and that we are deliberately distorting our progress" was wrong.

“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” Rosen wrote. “What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”

The post appeared to be a response to a Wall Street Journal article published Sunday, which reported that Facebook employees tasked with keeping abusive content off the platform don't believe the company can reliably screen for it.

The WSJ report states that internal documents show that two years ago, Facebook reduced the time human reviewers spent on hate speech complaints and made other changes that lowered the number of complaints. This, according to the WSJ, helped create the impression that Facebook's artificial intelligence had been more successful at enforcing the company's rules than it actually was.

A team of Facebook employees found in March that the company's automated systems were removing posts that generated only 3% to 5% of views of hate speech on the social platform, and less than 1% of all content that violated its rules against violence and incitement, the WSJ reported.

But Rosen argued that focusing solely on content removals was "the wrong way to look at how we fight hate speech." He said the technology for removing hate speech is just one of the methods Facebook uses to fight it. "We have to be sure something is hate speech before we delete it," Rosen said.

Instead, he said, the company believes that focusing on the prevalence of hate speech people actually see on the platform, and on how to reduce it using various tools, is a more meaningful measure. He claimed that for every 10,000 views of content on Facebook, there were five views of hate speech. "Prevalence tells us what violating content people see because we missed it," Rosen wrote. "It's the most objective way to assess our progress, as it provides the most complete picture."

But internal documents obtained by the WSJ showed that some significant content escaped Facebook's detection, including videos of car crashes showing people with graphic injuries and violent threats against trans children.

The WSJ has published a series of reports on Facebook based on internal documents provided by whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact its Instagram platform could have on teenagers. Facebook has disputed the reporting based on the internal documents.
