Does reporting a problem on Facebook work?

Facebook provides users with the ability to report various problems and issues on the platform in an attempt to maintain a safe and positive environment. However, some users have questioned how effective the reporting function really is and whether Facebook takes action on reported content. In this article, we’ll explore whether reporting issues on Facebook actually works.

What can be reported on Facebook?

There are several categories of content that can be reported to Facebook:

  • Abusive content – Includes bullying, harassment, hate speech, threats of violence, and other attacks on people
  • Scam or fake content – Fake accounts, business scams, false information, clickbait, and spam
  • Intellectual property violations – Copyright or trademark infringements
  • Graphic violence – Violent, gory, or shocking images
  • Sexual content – Nudity, pornography, or sexual solicitation
  • Underage children – Images or accounts of children under 13
  • Terrorism – Promoting terrorism, recruitment, or graphic violence
  • Unauthorized sales – Illegal, prescription, or regulated goods for sale
  • Suicide or self-injury content – Promoting or encouraging suicide or self-harm

To report any of these issues, users can click the three dots next to a post and select “Report post” or “Find support or report post.” They will then be guided through choosing a specific reason for reporting.

Does Facebook actually take action on reported content?

According to its policies, Facebook commits to reviewing reported content and taking action on anything that violates its Community Standards. However, some users are skeptical that this actually happens in practice.

In many cases, users say they reported content but never saw it removed or received any notification that action was taken. Facebook states this can happen for a few reasons:

  • The reported content did not actually violate policies after review
  • It takes some time for reported content to be reviewed by Facebook’s team
  • For privacy reasons, users are not always notified of the specific action taken

While users may not always see it, Facebook states that it takes action on the vast majority of reported posts and accounts found to violate its standards. Its most recent transparency report states that over 95% of content actioned for policy violations was identified by its automated systems or through user reports.

Facebook’s review process

When content is reported, it goes through the following review process by Facebook’s teams:

  1. Reports are prioritized by the severity of potential violation
  2. Content is reviewed against Facebook’s detailed Community Standards
  3. Teams determine if there is a policy violation based on the context of the reported content
  4. Violating content is removed, and penalties such as restrictions or account disabling are applied to the offending accounts
  5. Users can appeal if they believe content was removed in error
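
To make step 1 concrete, here is a minimal, purely illustrative sketch of severity-based triage using a priority queue in Python. It is not Facebook's actual implementation; the Report structure, the SEVERITY scores, and the category names are assumptions invented for this example.

import heapq
from dataclasses import dataclass, field

# Hypothetical severity ranking (lower number = reviewed sooner).
# These categories and scores are illustrative, not Facebook's real values.
SEVERITY = {
    "terrorism": 0,
    "suicide_or_self_injury": 1,
    "graphic_violence": 2,
    "hate_speech": 3,
    "spam": 4,
}

@dataclass(order=True)
class Report:
    priority: int
    report_id: int = field(compare=False)
    category: str = field(compare=False)
    content: str = field(compare=False)

class ReviewQueue:
    """Toy triage queue: the most severe report categories are reviewed first."""

    def __init__(self):
        self._heap = []

    def submit(self, report_id: int, category: str, content: str) -> None:
        # Unknown categories fall to the back of the queue.
        priority = SEVERITY.get(category, len(SEVERITY))
        heapq.heappush(self._heap, Report(priority, report_id, category, content))

    def next_for_review(self):
        """Return the highest-severity pending report, or None if the queue is empty."""
        return heapq.heappop(self._heap) if self._heap else None

# Example: a self-harm report jumps ahead of earlier, lower-severity reports.
queue = ReviewQueue()
queue.submit(101, "spam", "clickbait link")
queue.submit(102, "suicide_or_self_injury", "concerning post")
queue.submit(103, "hate_speech", "slur in a comment")
print(queue.next_for_review().category)  # -> suicide_or_self_injury

The point of the sketch is simply that ordering reports by potential harm, rather than by arrival time, is what lets the most serious reports reach reviewers first.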

Facebook states that no content or account is penalized based on the number of reports alone; each report is individually reviewed by a member of its review team against its Community Standards.

How effective is Facebook’s reporting and review process?

The effectiveness of Facebook’s reporting system is difficult to measure definitively. On one hand, Facebook claims the vast majority of reported content found in violation is reviewed and actioned appropriately. But many users remain skeptical, given how often offensive or inappropriate content they report remains on the platform.

There are a few factors that influence the real-world effectiveness of Facebook’s reporting process:

Massive volume

With over 3 billion users posting content, Facebook deals with an enormous volume of posts, photos, comments, and accounts across their apps. While their review teams are large, they realistically do not have the capacity to review every single piece of reported content.

Detection limitations

Facebook’s automated systems and human reviewers have become better at detecting policy violations over time through machine learning. However, limitations in the technology and human error mean that some improper content will inevitably slip through the cracks.

Subjective content

Policies around bullying, harassment, and hate speech can be highly subjective in edge cases. What one person may consider offensive may not clearly violate standards to reviewers. This can lead to content remaining up despite reports.

Foreign language gaps

Facebook has come under fire in the past for gaps in its review teams’ foreign-language coverage. As a result, content in languages other than English can remain on the site longer before it is reviewed and removed.

Appeals restore content

When Facebook removes reported content, the poster has the right to appeal the decision. If reviewers determine the removal was made in error, the content is restored despite the initial reports.

New duplicate content

While Facebook does aim to disable accounts that violate policies, sometimes a user can simply create a new account and repost offensive content. This makes it an ongoing game of whack-a-mole for reviewers.

Considering these challenges, Facebook’s reporting process is not perfect, but research indicates it still catches the majority of severe policy violations at scale.

Tips for effectively reporting content on Facebook

If you come across objectionable content on Facebook, reporting it can be effective in many cases. Here are some tips for making your reports as effective as possible:

  • Provide context – Explain clearly in your report why the content is objectionable or how it violates standards.
  • Quote text – If reporting inappropriate text, quote the most relevant excerpts in your report.
  • Report severe violations promptly – Facebook prioritizes reviews by severity, so flagging serious harm quickly helps it reach reviewers sooner.
  • Check back – If the content remains up after a couple of days, try reporting it again.
  • Report duplicate accounts – If you see a banned user return with a new account, report their new account.
  • Use privacy tools – Unfollow or block accounts posting inappropriate content you don’t want to see.

Pros of reporting content on Facebook

Despite some limitations, there are benefits to properly reporting inappropriate content or accounts on Facebook when you come across them:

  • Brings content to Facebook’s attention for review
  • Can lead to removing policy-violating content
  • May stop an offending account from continuing abuse
  • Helps improve Facebook’s violation detection abilities
  • Contributes to a safer, more positive platform overall

Even if action is not always visible, reporting problems plays an important role in upholding Facebook’s standards and blocking bad actors.

Cons of reporting content on Facebook

Some drawbacks or ineffective aspects of Facebook’s reporting process include:

  • Does not guarantee content will be removed
  • Offensive content often remains up for days before removal
  • No direct feedback provided on reports
  • May require reporting the same problem multiple times
  • Users may not understand specific policies
  • Takes time and effort for uncertain results

Given these inconsistencies, some users understandably feel reporting certain content is not worth the effort. Others have experienced online harassment even after reporting abuse.

Controversial aspects

Facebook’s reporting and content moderation practices have faced some criticism and debate over:

Inconsistent policy enforcement

Many argue Facebook seems to arbitrarily enforce policies in some cases but not others. For example, high-profile accounts may get more leeway on things that would get everyday users banned.

Political biases

Some conservative groups have accused Facebook’s reporting teams of being biased against right-leaning content. However, others argue the platform has been too lenient on political falsehoods and extremism.

Over-censorship

LGBTQ, feminist, and racial justice groups argue Facebook too often removes content falsely reported as violations by opponents. They believe this amounts to censorship of marginalized groups.

Mental health impacts

Facebook has been criticized when harassment and bullying campaigns lead to reported content staying up for days, leaving victims exposed to distress and harm in the meantime.

While Facebook does aim to learn and improve over time, these issues illustrate ongoing challenges around properly enforcing policies at their scale.

Scientific research on effectiveness

Independent scientific research provides additional insight into how effectively Facebook responds to reported content based on different criteria:

Hate speech

A 2019 study found only 44-50% of hate speech reported to Facebook was removed. Removal rates were higher for slurs against Black Americans vs. other groups.

Graphic violence

Research by New York University found that 92% of posts containing graphic violence reported to Facebook were removed within 24 hours, suggesting relatively effective response rates.

Nudity

Facebook’s proactive detection algorithms catch 99% of adult nudity content before users can even report it, according to internal data.

COVID-19 falsehoods

A 2020 Avaaz report indicated Facebook failed to remove 84% of clear fake and dangerous COVID-19 misinformation reported to them by users.

So effectiveness varies significantly depending on the type of objectionable content involved.

Country and language differences

Facebook’s content review practices also differ meaningfully based on the country and language of reported content:

Country | Average Response Time | Removal Rate
United States | 21 hours | 75%
European Union | 29 hours | 64%
Middle East and North Africa | 41 hours | 52%

In general, English content also tends to see faster response times compared to other languages. Facebook’s weaker coverage of non-English languages contributes to these disparities.

Educational resources from Facebook

To help users better understand its reporting systems and policies, Facebook publishes educational resources such as its Community Standards, Help Center articles on how reporting works, and regular transparency reports.

Reviewing these resources can help users file reports that are more likely to lead to policy-violating content being removed. Users are also invited to participate in surveys and betas to help improve Facebook’s systems.

Conclusion

Facebook’s user reporting systems play a vital role in the effort to keep offensive or dangerous content off its platforms. While not perfect, research indicates the company takes action in most cases where clear violations are reported. However, the speed and consistency of enforcement remain a work in progress, especially outside Western nations.

Reporting inappropriate content when you see it does have value, even if that value is mainly symbolic in some instances. It creates a record that violations are occurring and provides data to improve Facebook’s detection abilities. With billions of diverse users worldwide, maintaining standards on Facebook’s platforms will inevitably remain an imperfect balancing act despite the company’s efforts.