
What is the role of a content reviewer on Facebook?

Facebook employs thousands of content reviewers worldwide to monitor posts, photos, videos, and other content on its platforms. Their role is crucial in enforcing Facebook’s community standards and keeping harmful, dangerous, or inappropriate material off the platform.

Who are Facebook’s content reviewers?

Facebook content reviewers come from a wide range of backgrounds and locations. Many are contractors employed by third-party companies such as Accenture or Cognizant rather than direct Facebook employees. They work in offices around the world, reviewing content in their native languages. Requirements often include a college degree, language fluency, and an interest in technology and social media. Reviewers must also have the resilience to view traumatic and disturbing content regularly.

What content do they review?

Facebook reviewers assess all types of user-generated content surfaced by Facebook’s algorithms, AI systems, and user reports. This includes:

  • Text posts
  • Comments
  • Shared news articles
  • Photos
  • Videos
  • Livestreams
  • Advertisements
  • Pages and groups

They analyze this content across Facebook, Messenger, Instagram, and other platforms owned by Meta. Reviewers evaluate whether posts violate standards covering areas such as:

  • Graphic violence
  • Adult nudity or sexual activity
  • Hate speech or symbols
  • Harassment or bullying
  • Terrorist propaganda
  • Misinformation or fake news
  • Regulated goods
  • Spam
  • Intellectual property infringement

What guidelines do they follow?

Facebook provides content reviewers with detailed guidelines and training on enforcing its community standards. Reviewers must familiarize themselves with rules covering dozens of content categories. However, guidelines don’t cover every situation, so reviewers must also use their judgment.

In some cases, native language and regional reviewers provide input to adapt global guidelines to local cultural nuances. Reviewers also receive regular updates as policies evolve. For example, Facebook modified guidelines around misinformation many times during the COVID-19 pandemic.

How does the review process work?

Reviewers use Facebook’s moderation platforms to view queued content flagged by users or AI. They decide whether to ignore, delete, or escalate each post based on the guidelines. The work is fast-paced, with targets often exceeding 1,000 reviews per day. Most decisions hinge on whether the content falls into one of the following categories:

  • Inauthentic – Uses deception or spam, or artificially boosts distribution.
  • Harmful – Directly contributes to real world harm or violence.
  • Hateful – Attacks people based on protected characteristics like race, gender, or sexuality.
  • Violent or graphic – Depicts extreme physical harm, cruelty, or death in graphic detail.
  • Sexual or suggestive – Shows, solicits, or promotes sexual acts, stimulation, fetishes, or pornography.
  • False news – Makes demonstrably and intentionally false claims.
  • Regulated goods – Attempts to trade regulated goods like firearms, drugs, endangered species, or human organs.
  • Infringing – Uses copyrighted material or impersonates without permission.

If a post doesn’t clearly violate a standard, reviewers may ignore it. If it seems borderline, they may delete or temporarily hide it. Severe violations can lead to accounts or pages being disabled or banned. Many decisions must be made very quickly, typically in under a minute.
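As a rough illustration only, the sketch below shows how such a triage decision might be structured. The category names, confidence thresholds, and actions are illustrative assumptions, not Facebook’s actual moderation logic.

```python
# Hypothetical sketch of a reviewer triage decision.
# Categories, thresholds, and actions are illustrative assumptions.
from enum import Enum
from typing import Optional


class Action(Enum):
    IGNORE = "ignore"        # no clear violation
    HIDE = "hide"            # borderline content, temporarily hidden
    DELETE = "delete"        # clear violation, content removed
    ESCALATE = "escalate"    # severe violation, account- or page-level review


# Assumed set of categories treated as severe enough to escalate.
SEVERE_CATEGORIES = {"terrorist_propaganda", "child_safety", "credible_threat"}


def triage(violation_category: Optional[str], confidence: float) -> Action:
    """Map a reviewer's assessment of a post to an enforcement action.

    violation_category: the standard the post appears to break, or None.
    confidence: how clearly the post violates that standard (0.0 to 1.0).
    """
    if violation_category is None:
        return Action.IGNORE
    if violation_category in SEVERE_CATEGORIES:
        return Action.ESCALATE       # severe violations trigger account/page review
    if confidence >= 0.8:
        return Action.DELETE         # clear violation of a standard
    if confidence >= 0.5:
        return Action.HIDE           # borderline: hide pending further review
    return Action.IGNORE             # not clearly violating


# Example: a borderline harassment post is hidden rather than deleted.
print(triage("harassment", 0.6))     # Action.HIDE
```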

How are reviewers’ decisions checked for accuracy?

Facebook audits a sample of reviewers’ decisions to check their accuracy and consistency. Decisions are randomly sampled and re-reviewed by more experienced reviewers. Reviewers must maintain a high degree of accuracy to continue working; consistent errors lead to additional training or eventual termination.
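A minimal sketch of how such an audit could work is shown below, assuming decisions are recorded as (content_id, action) pairs. The sample size and accuracy threshold are invented for illustration.

```python
# Hypothetical sketch of an accuracy audit: re-judge a random sample of a
# reviewer's decisions and compare against a senior reviewer's judgment.
import random

SAMPLE_SIZE = 50             # assumed number of decisions audited per reviewer
ACCURACY_THRESHOLD = 0.95    # assumed bar below which retraining is recommended


def audit_reviewer(decisions, senior_review):
    """decisions: list of (content_id, action) tuples made by the reviewer.
    senior_review: callable returning the senior reviewer's action for an id.
    Returns (accuracy, needs_retraining)."""
    sample = random.sample(decisions, min(SAMPLE_SIZE, len(decisions)))
    correct = sum(1 for content_id, action in sample
                  if action == senior_review(content_id))
    accuracy = correct / len(sample)
    return accuracy, accuracy < ACCURACY_THRESHOLD


# Example with toy data: the senior reviewer disagrees on one of three posts.
decisions = [("p1", "delete"), ("p2", "ignore"), ("p3", "delete")]
truth = {"p1": "delete", "p2": "ignore", "p3": "hide"}
print(audit_reviewer(decisions, truth.get))   # accuracy ~0.67, retraining flagged
```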

In addition, users can appeal deleted posts; if a removal is found to be in error, the content is restored. Facebook also employs subject matter experts in areas like counterterrorism, child safety, and public health to help develop policies.

What tools do reviewers use?

Facebook provides reviewers with several moderation platforms and aids:

  • Review queues – See queued posts flagged by users or AI for review.
  • Machine learning – AI identifies potentially violating content for human review.
  • Image and video analysis – Automatic tools detect altered or extremist media.
  • Translation – Automatically translates text to reviewers’ languages.
  • Facebook Graph Search – Searches for related information on users or media.
  • Expert consultation – Submit edge cases to subject matter experts.
  • Wellness coaches – Speak with coaches to process traumatic content.

However, human review is still necessary because interpreting content often depends on context that automated tools struggle to capture.
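To make the review-queue idea concrete, here is a hypothetical sketch of a prioritized queue in which AI-predicted severity and user-report counts determine which post a reviewer sees next. The fields, weights, and class names are assumptions for illustration, not Facebook’s actual design.

```python
# Hypothetical sketch of a prioritized review queue.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedItem:
    priority: float                        # lower value = reviewed sooner
    content_id: str = field(compare=False)
    source: str = field(compare=False)     # e.g. "user_report" or "ai_flag"


class ReviewQueue:
    def __init__(self):
        self._heap = []

    def enqueue(self, content_id, ai_severity, report_count, source):
        # Higher predicted severity and more user reports push an item
        # toward the front of the queue (more negative priority).
        priority = -(ai_severity + 0.1 * report_count)
        heapq.heappush(self._heap, QueuedItem(priority, content_id, source))

    def next_item(self):
        return heapq.heappop(self._heap) if self._heap else None


# Example: an AI-flagged high-severity video is reviewed before a lightly
# reported comment.
queue = ReviewQueue()
queue.enqueue("comment_123", ai_severity=0.2, report_count=1, source="user_report")
queue.enqueue("video_456", ai_severity=0.9, report_count=0, source="ai_flag")
print(queue.next_item().content_id)  # video_456
```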

What are the challenges of content moderation?

Reviewing graphic, disturbing, and harmful content exacts a heavy psychological toll, and turnover is correspondingly high. Reviewers have also reported frustration with inconsistent guidelines, unrealistic targets, and a lack of support. Other challenges include:

  • Making highly subjective decisions on complex issues like hate speech.
  • Very limited time to review nuanced content.
  • Frequent policy changes from management.
  • Little transparency or right to appeal decisions.
  • Inconsistent enforcement across regions and languages.
  • No ability to publicly discuss errors or improve policies.

These factors lead many reviewers to report anxiety, insomnia, PTSD symptoms, and other mental health issues. The job involves moral dilemmas and strains reviewers’ mental well-being over time.

How has the role changed over time?

Facebook’s approach to content moderation has evolved significantly since its earlier, more hands-off policies. Increased backlash, regulation, and liability around content have led to much stricter policies. Key changes include:

  • Greatly expanded guidelines covering more content types.
  • Increased use of AI to detect violating content.
  • Tens of thousands more reviewers employed worldwide.
  • Localized review for cultural nuance.
  • More policies against misinformation, hate speech, and extremism.
  • Faster response to viral harmful content.
  • Proactive sweeps for policy-violating accounts.
  • Published transparency reports on enforcement actions.

However, many critics say Facebook’s efforts remain insufficient. Users continue to find examples of harmful content that reviewers miss. Civil rights organizations and other groups emphasize the need for even more moderators, better regional representation, and increased transparency.

How are reviewers trained?

New Facebook reviewers undergo a multi-week training program covering:

  • In-depth familiarization with all community standards and guidelines
  • Examining real-world examples of violating and acceptable content
  • Practice content moderation decisions on test queues
  • Training on using Facebook’s moderation software and aids
  • Education on avoiding mental health issues and burnout
  • Ongoing refreshers as policies change

Experienced senior reviewers also provide mentoring and feedback during on-the-job training. Reviewers are evaluated on their accuracy before being cleared to moderate live queues independently. Training combines written guides, presentations, tests, simulations, and hands-on experience. However, many reviewers still report feeling unprepared for the real-world stresses of the role.

What is day-to-day life like for a reviewer?

A typical day might look something like this:

  • 8:00 – 9:00 AM – Check in, read policy updates, take required training
  • 9:00 AM – 12:00 PM – Moderate content queues, meet daily target
  • 12:00 – 1:00 PM – Lunch break
  • 1:00 – 5:00 PM – Continue moderating queues
  • 5:00 – 6:00 PM – Send appeals and questions to senior reviewers
  • 6:00 – 8:00 PM – Check email, attend meetings, receive feedback on decisions

Breaks are provided when reviewers need to step away from disturbing content. Reviewers typically work in office environments, often in cubicles equipped with multiple monitors for viewing content queues and guidelines. They also have access to wellness coaches and therapists for support.

What qualities make an effective reviewer?

The best Facebook reviewers have:

  • Cultural awareness – Understand regional cultural norms.
  • Decisiveness – Make quick but thoughtful decisions.
  • Discernment – Judge shades of gray appropriately.
  • Resilience – Endure viewing traumatic content daily.
  • Accuracy – Apply policies correctly most of the time.
  • Collaboration – Work well in teams and learn from partners.
  • Communication – Convey decisions and raise issues clearly.
  • Diligence – Maintain high level of effort and focus.
  • Empathy – Consider context and impact of content.

However, even reviewers well-suited for the role tend to burn out quickly due to its demands. Capable reviewers with strong support last longest in the position.

What are alternatives to human content moderation?

Facebook and other platforms are exploring technological alternatives to reduce reliance on human reviewers:

  • Automated moderation – AI tools to detect policy violations at time of posting.
  • Explainable machine learning – Models that provide users transparency on enforcement decisions.
  • Altered reality – Tools like blurred images or reduced resolution video to mask extreme content.
  • Prioritize human well-being – Policies that avoid exposing reviewers to unnecessary graphic material.
  • Review teams – Groups of reviewers supporting each other in moderation duties.
  • Expanded appeals – More ways for users to appeal decisions and transparently improve policy.

However, heavy reliance on AI has risks. Overly automated moderation could remove benign content, reinforce biases in training data, and lack nuanced judgment. Oversight by human reviewers remains crucial, even if their roles evolve due to technology.
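One common way to balance automation against these risks is a human-in-the-loop threshold scheme: the model acts on its own only when it is highly confident, and routes everything uncertain to a person. The sketch below illustrates the idea; the threshold values and labels are assumptions, not any platform’s actual settings.

```python
# Hypothetical sketch of "human in the loop" routing for automated moderation.
# Threshold values are illustrative assumptions.
AUTO_REMOVE_THRESHOLD = 0.98   # assumed: act automatically only when near-certain
AUTO_ALLOW_THRESHOLD = 0.05    # assumed: skip review for clearly benign content


def route(model_score: float) -> str:
    """model_score: predicted probability that the content violates policy."""
    if model_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if model_score <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"
    return "human_review"      # uncertain or nuanced cases still go to a person


print(route(0.99))  # auto_remove
print(route(0.60))  # human_review
print(route(0.01))  # auto_allow
```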

Conclusion

Facebook’s content reviewers have an enormously difficult but critical job. They’re on the frontlines of enforcing policies to protect users across global social networks. However, the role’s high demands take a significant toll on their mental health. Improved working conditions, support resources, and shared responsibility can help make the job more sustainable.

Facebook faces ongoing public pressure to combat abuse while supporting free expression. Content moderation sits at the crux of this balance. Reviewers will continue serving as the human monitors of the world’s digital public squares for the foreseeable future. Though aided by technology, their judgment provides an essential check against the spread of harm.