- We think prevalence – the amount of hate speech people actually see on the platform, and how we are reducing it with all of our tools – is the most important measure.
- Our technology has a big impact on how much hate speech people see on Facebook. According to our latest Community Standards Enforcement Report, its prevalence is about 0.05% of content viewed, or about 5 views per 10,000, down nearly 50% over the past three quarters.
- We use technology to reduce the spread of hate speech in several ways: it helps us proactively detect it, route it to our reviewers, and remove it when it violates our policies. It also lets us reduce the distribution of content that likely violates our policies. All of these tools work together to lower prevalence.
- In 2016, our content moderation efforts relied mostly on user reports. That is why we developed technology to proactively identify violating content before people report it. Our proactive detection rate reflects this progress, and we report prevalence figures so we can gauge how much hate speech actually appears in our apps.
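The prevalence figure cited above is a simple ratio of views. As a rough sketch (the function name and the round counts are illustrative only, not Facebook's actual measurement pipeline, which is based on sampling content views):

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that were views of violating content
    (illustrative helper, not an official API)."""
    return violating_views / total_views

# About 5 views of hate speech per 10,000 content views, i.e. roughly 0.05%.
rate = prevalence(5, 10_000)
print(f"{rate:.2%}")  # 0.05%
```

The point of the ratio is that it is weighted by exposure: a violating post nobody sees barely moves prevalence, while one that is widely viewed does.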
Data from leaked documents is being used to construct a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. That is not true. We don’t want hate on our platform, and neither do our users or advertisers, and we are transparent about our work to remove it. What these documents show is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to improve our systems, identify issues, and develop solutions.
Recent reports portray our approach to tackling hate speech as much narrower than it actually is, ignoring the fact that the prevalence of hate speech on Facebook has dropped to 0.05%, or 5 views per 10,000. We believe prevalence is the most important metric because it shows how much hate speech is actually seen on Facebook.
Focusing only on content removals is the wrong way to look at how we tackle hate speech, because using technology to remove it is just one way we counter it. We need to be confident that something is hate speech before we remove it. If something could be hate speech but we’re not sure enough that it meets the bar for removal, our technology can reduce the distribution of that content or avoid recommending groups, pages, or people who regularly post content that is likely to violate our policies. We also use technology to flag content for further review.
We have a high threshold for automatically removing content. Otherwise, we would risk making even more mistakes with content that looks like hate speech but isn’t, such as posts that describe or condemn experiences of hate speech.
Another misunderstood metric is our proactive detection rate, which tells us how good our technology is at finding content before people report it to us: it measures how much of the content we remove was found by our systems first. In 2016, the vast majority of our content removals were based on user reports. We knew we had to do better, so we started developing technology to identify potentially violating content without anyone pointing it out.
When we first began reporting our hate speech metrics, only 23.6% of the content we removed was detected proactively by our systems; most of what we removed was found because people reported it. Today that number is over 97%. But our proactive rate doesn’t tell us what we’re missing, and it doesn’t account for the sum of our efforts, including what we do to reduce the distribution of problematic content. That is why we focus on prevalence and consistently refer to it as the most important metric. Prevalence tells us how much violating content people actually see because we missed it. It is the most objective way to evaluate our progress, as it provides the most complete picture. We report prevalence quarterly in our Community Standards Enforcement Report and describe it in our Transparency Center.
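The limitation of the proactive rate can be made concrete with a few lines of arithmetic. The function and the removal counts below are hypothetical, chosen only to reproduce the reported percentages:

```python
def proactive_detection_rate(proactive_removals: int, total_removals: int) -> float:
    # Share of *removed* content that systems found before any user report.
    # Note the denominator: this ratio covers only content that was removed.
    # It says nothing about violating content that was never found at all --
    # that gap is what the prevalence metric is meant to capture.
    return proactive_removals / total_removals

# Hypothetical counts chosen to match the reported percentages.
print(f"{proactive_detection_rate(236, 1_000):.1%}")  # early reporting: 23.6%
print(f"{proactive_detection_rate(970, 1_000):.1%}")  # today: 97.0%
```

Because both numerator and denominator count removals, the rate can be high even while significant violating content goes unseen by the systems, which is why the text treats it as secondary to prevalence.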
Prevalence is how we measure our work internally, so we share the same metric externally. Although we know our work in this area will never be done, the fact that prevalence has dropped by almost 50% in the last three quarters shows that our efforts, taken together, are having an impact. As noted in our Community Standards Enforcement Report, we can attribute a significant portion of that decline to our improved and expanded AI systems.
We worked with international experts to develop our metrics. We are also the only company that has voluntarily agreed to have them audited independently.
We include many metrics in our quarterly reports, which are the most comprehensive of their kind, to give people a more complete picture. We worked with international experts in measurement, statistics, and other fields on an independent, public assessment to make sure we’re measuring the right things. While they largely agreed with our approach, they also made recommendations on how we can improve. You can read their full report here. We have also committed to an independent audit by the global accounting firm EY to ensure we measure and report our metrics accurately.