Facebook Vice President for Integrity Guy Rosen wrote in a blog post Sunday that the prevalence of hate speech on the platform has fallen by 50 percent over the past three years, and that “a narrative that the technology we use to combat hate speech is inadequate and that we are deliberately misrepresenting our progress” is wrong.
“We don’t want hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” Rosen wrote. “What these documents show is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”
The post appeared to be a response to a Sunday article in The Wall Street Journal reporting that the Facebook employees tasked with keeping objectionable content off the platform don’t believe the company can reliably screen for it.
The WSJ report says internal documents show that two years ago Facebook cut the time its human reviewers spent on hate speech complaints and made other adjustments that reduced the number of complaints. That, in turn, helped create the appearance that Facebook’s artificial intelligence had been more successful in enforcing the company’s rules than it actually was, according to the WSJ.
A team of Facebook employees found in March that the company’s automated systems were removing posts that generated between 3 and 5 percent of views of hate speech on the social platform, and less than 1 percent of all content that violated its rules against violence and incitement, the WSJ reported.
Rosen, however, argued that focusing only on content removals was “the wrong way to look at how we fight hate speech.” He said the technology for removing hate speech is just one method Facebook uses to fight it. “We have to be sure that something is hate speech before we remove it,” Rosen said.
Instead, he said, the company believes a more important measure is the prevalence of hate speech people actually see on the platform, and how it uses various tools to reduce it. He claimed that for every 10,000 views of a piece of content on Facebook, there were five views of hate speech. “Prevalence tells us what violating content people see because we missed it,” Rosen wrote. “It’s the most objective way to evaluate our progress, as it provides the most complete picture.”
But the internal documents reviewed by the WSJ showed that some significant content escaped Facebook’s detection, including videos of car crashes that showed people with graphic injuries, and violent threats against trans children.
The WSJ has produced a series of reports on Facebook based on internal documents provided by whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact its Instagram platform could have on teenagers. Facebook has disputed the reporting based on the internal documents.