How does Facebook review content?

Facebook relies on a combination of automated systems and human reviewers to monitor the vast amount of content posted to their platform. With more than 2 billion monthly active users, manually reviewing every post would be impossible, so Facebook uses the following methods to detect and remove harmful content:

Automated Systems

Facebook uses artificial intelligence and machine learning to automatically detect content that violates their Community Standards. They have developed complex algorithms that can identify nudity, hate speech, bullying, terrorist content, and other types of abusive material. These automated systems analyze text, images, and videos to find policy-violating content. Here are some of the automated systems Facebook uses:

  • Image matching technology – scans photos and videos for known examples of abusive content like child exploitation imagery.
  • Language understanding – analyzes textual content to determine whether the language attacks or threatens someone based on protected characteristics such as race or religion.
  • Repeat offender prevention – identifies accounts that have a history of posting abusive content and automatically disables their ability to post.
  • Spam detection – identifies and removes accounts engaged in spammy behavior like repeatedly posting clickbait content or fake engagement.

Facebook is constantly improving these automated systems using the latest advancements in artificial intelligence. The systems analyze billions of posts per day to proactively detect policy-violating content.
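
Facebook has not published the internals of these systems, but the general idea behind image matching can be sketched in a few lines: fingerprint each upload and compare the fingerprint against a database of known abusive content. The example below is a hypothetical illustration only; it uses a plain SHA-256 hash for brevity, whereas production systems rely on perceptual hashes (such as Facebook's open-sourced PDQ algorithm) that still match after resizing or re-encoding.

  import hashlib

  # Hypothetical database of fingerprints for known policy-violating images.
  # A real system would use a perceptual-hash index (e.g. PDQ), not SHA-256.
  KNOWN_VIOLATING_HASHES = {
      "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
  }

  def fingerprint(image_bytes: bytes) -> str:
      """Return a fingerprint for an uploaded image."""
      return hashlib.sha256(image_bytes).hexdigest()

  def matches_known_abuse(image_bytes: bytes) -> bool:
      """Check an upload against the database of known abusive content."""
      return fingerprint(image_bytes) in KNOWN_VIOLATING_HASHES

  # Block the upload automatically on a match; otherwise let it through,
  # where other classifiers or user reports may still flag it later.
  if matches_known_abuse(b"...raw image bytes..."):
      print("Block upload and queue the account for review")
  else:
      print("Allow upload")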

Human Reviewers

While AI plays a crucial role, human review remains essential to Facebook’s content moderation process. Facebook employs thousands of human content moderators who review reported content 24/7 in over 50 languages. Content is flagged for human review through the following channels:

  • User reports – Facebook users can report content that they feel violates policies.
  • Proactive detection – Facebook’s automated systems will flag suspicious content to be examined by human reviewers.
  • Legal requests – Law enforcement agencies can formally request the removal of content that violates local laws.
  • Partner escalations – Facebook partners like NGOs and security organizations can escalate content for priority review.

Human reviewers then examine the context and nuances of the reported content to judge whether it actually violates policies. The reviewers make the final decision on removing content and disabling accounts when warranted, and their decisions also feed back into improving Facebook’s automated detection systems. According to Facebook, around half of reported content turns out not to violate any policy and is left up, which is why human judgement is critical.
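
As a rough, hypothetical illustration of how content flagged through these channels might be routed to reviewers, the sketch below puts every report into a single priority queue, with partner escalations and legal requests ranked ahead of ordinary user reports. The channel names and their ordering are assumptions for illustration, not Facebook's actual tooling.

  import heapq
  from dataclasses import dataclass, field

  # Lower number = reviewed sooner. The ordering here is illustrative only.
  CHANNEL_PRIORITY = {
      "partner_escalation": 0,
      "legal_request": 1,
      "proactive_detection": 2,
      "user_report": 3,
  }

  @dataclass(order=True)
  class Report:
      priority: int
      content_id: str = field(compare=False)
      channel: str = field(compare=False)

  class ReviewQueue:
      """A single queue of flagged content awaiting human review."""

      def __init__(self) -> None:
          self._heap = []

      def flag(self, content_id: str, channel: str) -> None:
          """Add a flagged item from one of the reporting channels."""
          heapq.heappush(self._heap,
                         Report(CHANNEL_PRIORITY[channel], content_id, channel))

      def next_for_review(self) -> Report:
          """Return the highest-priority item for the next available reviewer."""
          return heapq.heappop(self._heap)

  queue = ReviewQueue()
  queue.flag("post:123", "user_report")
  queue.flag("post:456", "partner_escalation")
  print(queue.next_for_review().content_id)  # post:456 is reviewed first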

Moderation Teams and Processes

Facebook has assembled large teams of human moderators, organized in the following way:

  • In-House Moderators – Facebook directly employs over 15,000 content reviewers located across over 20 sites around the world. These reviewers work onsite and handle the most sensitive content violations.
  • Outsourced Moderators – Facebook contracts with companies like Accenture and Cognizant to provide thousands of additional moderators. These third-party reviewers primarily work remotely.
  • Localized Support – Facebook has over 150 partner operations providing localized language support for content reviews. For example, reviewers in Indonesia moderate Bahasa Indonesia content.

Each moderator goes through extensive training on Facebook’s Community Standards and how to apply them properly. They are given detailed guidance on what types of content are not allowed under each policy. Reviewers then examine reported content using Facebook’s custom-built moderation platforms. Each content decision is logged and subject to quality assurance checks; if a decision is found to be inaccurate, it is sent back for re-review. Facebook’s goal is for users in different parts of the world to receive as consistent an experience as possible.
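
To make the quality-assurance step concrete, here is a hypothetical sketch of how logged decisions might be sampled for audit and sent back for re-review when the auditor disagrees. The audit rate and log structure are invented for illustration.

  import random

  # Each reviewer decision is logged with enough context to audit it later.
  decision_log = [
      {"content_id": "post:1", "decision": "remove",   "reviewer": "r17"},
      {"content_id": "post:2", "decision": "leave_up", "reviewer": "r04"},
      {"content_id": "post:3", "decision": "remove",   "reviewer": "r17"},
  ]

  AUDIT_RATE = 0.10  # assumed: audit roughly 10% of decisions

  def sample_for_audit(log):
      """Randomly sample logged decisions for a quality-assurance check."""
      return [entry for entry in log if random.random() < AUDIT_RATE]

  def audit(entry, correct_decision):
      """If the auditor's judgement differs, the item goes back for re-review."""
      if entry["decision"] != correct_decision:
          return {"content_id": entry["content_id"], "status": "re-review"}
      return {"content_id": entry["content_id"], "status": "confirmed"}

  for entry in sample_for_audit(decision_log):
      print(audit(entry, correct_decision="remove"))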

Escalations and Appeals

For complex cases that require additional expertise, Facebook has specialized moderation teams focused on areas like child safety, counter-terrorism, and suicide and self-injury. These teams have subject matter experts who handle escalated reviews. Users can also appeal content decisions they feel were made in error. Appeals go to a separate dedicated team of human reviewers who take a fresh look at the content in question. This acts as another check to prevent mistaken removals. According to Facebook, of the 1 million appeals received each week, around 20% result in the content being restored.
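
As a small, hypothetical illustration of this appeal flow, the sketch below assigns an appeal to a reviewer who was not involved in the original decision, and then works out the weekly restoration volume implied by the figures above. The reviewer IDs are invented.

  def route_appeal(content_id: str, original_reviewer: str, appeal_team: list) -> dict:
      """Assign the appeal to a reviewer who did not make the original call."""
      independent = [r for r in appeal_team if r != original_reviewer]
      if not independent:
          raise ValueError("No independent reviewer available for this appeal")
      return {"content_id": content_id, "assigned_to": independent[0]}

  print(route_appeal("post:789", original_reviewer="r17",
                     appeal_team=["r17", "r22", "r31"]))
  # {'content_id': 'post:789', 'assigned_to': 'r22'}

  # The reported figures imply the rough weekly volume of restored content:
  appeals_per_week = 1_000_000
  restore_rate = 0.20
  print(f"Content restored per week: {appeals_per_week * restore_rate:,.0f}")  # 200,000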

Policy Development

Facebook’s policies that define what types of content are allowed and not allowed are developed through an extensive internal process. These policies directly guide both the automated systems and human reviewers. Policy development involves:

  • Research – Understanding cultural norms and contexts around content types. For example, how different countries view nudity.
  • External Consultation – Getting input from independent experts, advisory councils, and civil society groups on policy standards.
  • Multi-Disciplinary Teams – Cross-functional teams of lawyers, human rights experts, engineers, and regional leads collaborate on policy details.
  • Testing – Policies are piloted with actual content to gauge accuracy and impact.
  • Executive Review – Proposed policies are reviewed by leaders across the company including Facebook’s CEO.

This exhaustive process is aimed at developing nuanced content policies that balance safety, voice, and cultural differences globally. The resulting Community Standards document serves as the policy bible for Facebook’s entire moderation process.
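
The "Testing" step above lends itself to a small illustration: run a draft policy against a labelled pilot sample and measure how often decisions under the draft match the intended outcome. The sample and labels below are invented for illustration.

  # Hypothetical pilot of a draft policy: apply it to a small labelled sample
  # and measure how often the draft's outcome matches the intended outcome.
  pilot_sample = [
      {"content_id": "post:1", "draft_decision": "remove",   "intended": "remove"},
      {"content_id": "post:2", "draft_decision": "leave_up", "intended": "remove"},
      {"content_id": "post:3", "draft_decision": "leave_up", "intended": "leave_up"},
  ]

  matches = sum(1 for item in pilot_sample
                if item["draft_decision"] == item["intended"])
  print(f"Draft policy agreement on pilot sample: {matches / len(pilot_sample):.0%}")  # 67%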

Transparency

Facebook releases a Community Standards Enforcement Report every quarter with metrics on how much violative content is detected and removed across different policy areas. This includes data on:

  • Amount of content flagged by automated systems vs. by users
  • Action taken on content (left up, removed, account disabled, etc.)
  • Percentage of appeals where decisions were overturned
  • Amount of content detected proactively before being viewed

The report aims to provide insight into trends in policy violations and how well Facebook’s systems are performing in catching abuse. Facebook also selectively shares case studies with the media to highlight how certain complex content issues are handled. Overall, Facebook has tried to pull back the curtain on their content moderation operations as far as is feasible.
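
The headline metrics in the enforcement report are simple ratios. The hypothetical sketch below shows how two of them, the proactive detection rate and the appeal overturn rate, could be computed from raw counts; the numbers are made up for illustration and are not figures from an actual report.

  def proactive_rate(flagged_by_systems: int, flagged_by_users: int) -> float:
      """Share of actioned content that automated systems found before any user report."""
      total = flagged_by_systems + flagged_by_users
      return flagged_by_systems / total if total else 0.0

  def overturn_rate(appeals_received: int, content_restored: int) -> float:
      """Share of appealed decisions where the content was restored."""
      return content_restored / appeals_received if appeals_received else 0.0

  # Illustrative counts only, not figures from an actual enforcement report.
  print(f"Proactive detection rate: {proactive_rate(9_500_000, 500_000):.1%}")  # 95.0%
  print(f"Appeal overturn rate: {overturn_rate(1_000_000, 200_000):.1%}")       # 20.0%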

Oversight Board

In 2020, Facebook assembled an independent Oversight Board to make the final call on some of the most challenging content decisions. The board consists of experts in law, human rights, journalism, and other fields. They have the authority to make binding decisions regarding whether specific content should be allowed or removed under Facebook’s policies. This provides an additional layer of scrutiny and accountability around Facebook’s content rulings. Some key facts about the Oversight Board:

  • Launched with 20 initial members, with plans to grow to around 40 members globally
  • Funded by an independent trust with a $130 million commitment from Facebook
  • Makes decisions based on Facebook’s Community Standards and values
  • Selects cases to review based on significant real-world impact
  • Publishes details on cases they rule on

The Oversight Board is still in its infancy, but over time it will play a major role in interpreting how Facebook’s content policies should apply to complex situations. This independent body will influence how Facebook balances content moderation and free expression moving forward.

Conclusion

Facebook has a multifaceted approach to reviewing the massive amounts of content posted to their platform. Automated systems use artificial intelligence to detect policy violations at scale. Thousands of human reviewers examine reported content and make nuanced judgement calls. Specialized teams handle escalated cases, while the new Oversight Board acts as the supreme court for content decisions. Despite the challenges of moderating billions of users globally, Facebook continues to invest heavily in people, technology, and processes to keep harmful content off their platform while preserving free expression.