
Why do bots comment on Facebook?

Introduction

In recent years, there has been a rise in bot accounts commenting on Facebook posts and pages. Bots are automated programs designed to carry out certain tasks; in this case, they are used to post comments and interact with users on the platform. The motivations vary, but the main ones include spreading spam or malicious links, artificially inflating engagement, and gathering data. Understanding why these bots target Facebook gives insight into the platform’s vulnerabilities and how to detect bot activity.

Spreading Spam and Malicious Links

One of the most common uses of Facebook commenting bots is to spread spam, advertising, or malicious links. Spammers set up networks of bot accounts that can rapidly post comments containing links across many different profiles and pages. The goal is to drive traffic to monetized or malicious sites by exploiting Facebook’s high visibility and reach. According to Facebook’s own estimates, fake accounts make up around 3-4% of monthly active users on the platform.

Bots let spammers automate commenting at large scale and at far lower cost than manual work. And it can be difficult for users to identify bot accounts, since many are designed to mimic natural commenting patterns. Facebook’s detection algorithms often play catch-up with spammers, who constantly modify their tactics.

Artificially Inflating Engagement

Engagement metrics like comments, likes, and shares are critical for visibility on Facebook. Pages want their posts to be widely seen to reach more users. Bots are sometimes leveraged to artificially inflate these metrics and make posts appear more popular than they really are.

By driving up comments and reactions, bots can trick Facebook’s algorithm into ranking a page or post higher in News Feeds. This benefits pages looking to gain influence, make money through advertising, or simply get their messages in front of more people. It’s a tactic commonly used by purveyors of misinformation.
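
To make the mechanics concrete, here is a minimal toy model of engagement-weighted ranking. This is not Facebook’s actual formula; the weights and age decay are invented purely to show why padding a post’s comment count with bots can lift its rank.

```python
# Toy ranking score, NOT Facebook's real algorithm: comments and shares
# are weighted more heavily than likes, and the score decays with age.
# All weights and the decay rule are invented for illustration.

def engagement_score(likes: int, comments: int, shares: int, age_hours: float) -> float:
    raw = 1.0 * likes + 4.0 * comments + 6.0 * shares
    return raw / (1.0 + age_hours)

organic = engagement_score(likes=120, comments=15, shares=5, age_hours=6.0)
botted = engagement_score(likes=120, comments=450, shares=5, age_hours=6.0)
print(f"organic: {organic:.1f}  botted: {botted:.1f}")  # 30.0 vs ~278.6
```

In this toy model, a few hundred bot comments lift the score nearly tenfold, which is precisely the effect operators are paying for.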

Gathering Data for Targeting

The data Facebook users provide through commenting and interacting with posts can be very valuable, especially for advertisers and marketers. Bots are deployed to gather this type of data through natural language interactions.

By analyzing user profiles, commenting on posts, and engaging directly with users, bots can collect valuable info on interests, habits, locations, and demographics. This data enables more targeted advertising and messaging campaigns. Bots are also used to extract email addresses, phone numbers, and other personal info from users.

Scale of the Problem

Estimating the full scale of bot activities on Facebook is difficult given bots’ elusive and constantly evolving nature. But Facebook’s own statistics give a sense of the scope:

  • As of Q4 2021, Facebook estimated that fake accounts made up approximately 5% of its worldwide monthly active users (MAUs). With around 2.9 billion MAUs, that works out to roughly 145 million fake accounts (see the quick check after this list).
  • Facebook reported that it “disabled” 1.3 billion fake accounts in Q3 2021 alone, which indicates the sheer volume of bot creation on the platform.
  • Up to 20% of comments on Facebook posts by large Pages come from fake accounts, according to one analysis from SparkToro.
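
The first bullet is simple arithmetic; here is a quick back-of-envelope check using the rounded numbers above:

```python
# Back-of-envelope check of the fake-account estimate cited above.
mau = 2.9e9          # ~2.9 billion monthly active users
fake_share = 0.05    # ~5% estimated to be fake
print(f"{mau * fake_share:,.0f} fake accounts")  # ~145,000,000
```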

And the issue may be even more pervasive than what Facebook’s safeguards detect and remove. Some outside estimates suggest there may be three to four times more bots spreading disinformation than official platform data indicates, and the prevalence appears to grow year over year as tactics become more advanced.

Sophisticated Tactics

Basic bots are relatively easy for Facebook to detect and remove through technical signals like duplicated text patterns or suspicious login locations (a sketch of a simple duplicate-text check follows the list below). So bot operators have invested heavily in making their activity more human-like. Tactics include:

  • Using mixes of bots and real accounts controlled centrally
  • Generating unique text with AI language models
  • Mimicking human comment timing and response patterns
  • Frequently creating new accounts to avoid detection
  • Targeting posts about divisive social or political topics to blend into heated discourse
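
To see why duplicated text is such an easy signal, and why uniquely generated text defeats it, here is a minimal sketch of near-duplicate detection across comments. The shingle size and similarity threshold are illustrative assumptions, not any platform’s real values.

```python
# Flag comment pairs whose word-bigram overlap (Jaccard similarity) is
# suspiciously high: the classic duplicated-text-pattern signal.

def shingles(text: str, n: int = 2) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

comments = [
    "Check out this amazing deal at my site now",
    "Check out this amazing deal at my page now",  # templated near-duplicate
    "I completely disagree with the premise here",
]

for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        sim = jaccard(shingles(comments[i]), shingles(comments[j]))
        if sim > 0.5:  # illustrative threshold
            print(f"possible bot template: comments {i} and {j} (similarity {sim:.2f})")
```

Templated spam like the first two comments trips this kind of check immediately; text rewritten uniquely by a language model would not, which is exactly why operators have adopted that tactic.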

These approaches make automated commenting activity harder to distinguish from real users expressing opinions online. And the capabilities continue to evolve rapidly with advancements in areas like natural language processing. Facebook is in an arms race with bot operators when it comes to detection.

Financial Incentives Driving Bot Usage

The core reason bots are so prevalent on Facebook is because they enable financially motivated manipulation and misuse of the platform. Key financial incentives include:

Clickbait Revenue

Bots are used to drive traffic to pages filled with ads or monetized content. More clicks mean more ad revenue for spammers and scammers.

Astroturf Political Campaigns

Political groups or foreign entities use bots to feign grassroots support for campaigns and push posts viral. This artificially amplifies their messaging and drives donations.

Illicit Sales

Bad actors leverage bots to find potential customers and direct them to ecommerce platforms selling illegal or regulated goods like drugs, firearms, or adult content.

Reselling Users’ Data

Data-farming bots build profiles on users that can be sold to companies for targeted advertising. The more complete and realistic the profiles, the more the data is worth.

Ad Fraud

Sophisticated bots mimic real users to generate fraudulent video views, engagements, and clicks on Facebook ads. This earns payouts while bypassing Facebook’s fraud checks.

While Facebook has rules against these activities, the massive reach of their platform makes it an irresistible target for exploitative schemes. And bots operate at such large scale that malicious actors only need a tiny conversion rate to profit. Even with Facebook identifying billions of fake accounts per quarter, plenty slip through the cracks.

Ongoing Challenges Combating Bots

Facebook employs over 35,000 people in trust and safety roles working to address abuses like bots and fake accounts across its platforms. Their approach includes the following (a toy sketch of combining such signals appears after the list):

  • Leveraging technical signals from accounts like duplicate posts or suspicious login locations
  • Analyzing behavioral patterns to identify coordinated inauthentic behavior
  • Receiving user reports of fake accounts
  • Employing AI tools like machine learning and natural language processing to flag unusual activity
  • Enforcing policies prohibiting fake accounts, spamming activity, misrepresentation, and foreign interference
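
As a rough illustration of how signals like these might be combined, here is a toy risk score. Every signal name, weight, and threshold below is an invented assumption; production systems rely on learned models over far richer features.

```python
# Toy account-risk score combining simple signals, for illustration only.
# Signals, weights, and thresholds are all invented assumptions.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int
    comments_per_hour: float
    duplicate_comment_ratio: float   # share of comments near-identical to others
    login_country_changes_7d: int    # distinct login-country switches this week

def risk_score(s: AccountSignals) -> float:
    score = 0.0
    if s.account_age_days < 7:
        score += 0.3                           # brand-new accounts are riskier
    if s.comments_per_hour > 20:
        score += 0.3                           # inhuman posting rate
    score += 0.3 * s.duplicate_comment_ratio   # repeated text patterns
    if s.login_country_changes_7d > 3:
        score += 0.1                           # suspicious login locations
    return min(score, 1.0)

suspect = AccountSignals(account_age_days=2, comments_per_hour=45,
                         duplicate_comment_ratio=0.8, login_country_changes_7d=5)
if risk_score(suspect) > 0.7:  # illustrative review threshold
    print("flag account for human review")
```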

Despite these efforts, bot operators have proven skillful at circumventing Facebook’s safeguards. Ongoing challenges include:

Volume of Activity

Billions of comments are posted to Facebook daily from all around the globe. Analyzing this massive flood of content in real time is extremely difficult, even with sophisticated AI.

Operator Adaptation

As Facebook cracks down on certain tactics, bot operators quickly tweak their methods to avoid detection. It becomes a game of cat and mouse.

Legitimate Mistakes

Seeking to balance safety and openness, Facebook wants to avoid mistakenly penalizing real users expressing their views. But bot operators try to hide within the grey areas.

Foreign Origins

Many misinformation campaigns and commercially motivated bots originate with overseas groups. Geographically dispersed operations in jurisdictions with lax oversight are harder to pin down.

Data Privacy

Closely analyzing account activities risks infringing on legitimate privacy. Facebook must tread carefully in leveraging account data signals for enforcement.

Combating the ongoing ingenuity of automated threats will require Facebook to further expand its security teams, improve coordination with law enforcement, tighten identity policies, and invest heavily in AI detection tools. The financial incentives driving bot usage will necessitate constant vigilance.

Impacts of the Bot Epidemic

The proliferation of bot accounts and activity on Facebook has tangible effects on the experience and integrity of the platform:

  • Decline in authentic interactions as more comments come from bots rather than real users
  • Increase in spam, scams, hate speech, and other toxic content spread through comments
  • Growth of misinformation as bots game News Feed ranking algorithms
  • Erosion of advertising value as metrics get inflated and filled with bot traffic
  • Facilitation of illicit sales and illegal activities using the platform

These outcomes create a degraded environment that undermines users’ trust. They also pose public harms, as bots promote extremist messaging and fraudulent schemes.

From a business standpoint, the bot epidemic also puts Facebook’s revenue at risk if advertisers lose faith in the legitimacy of the platform’s engagement and viewing data. Some impacts of the bot problem include:

Lower Social Value

The prevalence of bot profiles and activity on Facebook makes the experience less socially enriching for authentic users. People make fewer genuine connections, and feeds get overrun by spam.

Public Safety Threats

Bots spreading extremist propaganda or inciting real world violence endanger public safety. And scams initiated through bots can have devastating financial consequences on victims.

Undermined Democracy

Political bots that hijack discourse, spread false narratives, and polarize issues are a threat to open democratic deliberation.

Advertiser Losses

Businesses waste ad spending on inflated or fraudulent engagement metrics from bots. They also avoid certain topic areas flooded with bots spreading disinformation.

Effectively managing the impact of inauthentic accounts and activity on users and society will require diligent oversight moving forward.

Conclusion

The rise of bots on Facebook stems from financial incentives to exploit the platform’s massive reach. Bot operators use a mix of basic spam tactics and cutting-edge AI evasion techniques to spread misinformation, fake grassroots support, and run data-farming schemes. This degrades the user experience, causes public harm, and damages the value of Facebook’s ecosystem. Ongoing challenges like sheer volume, operator adaptation, and data privacy make bots difficult to combat fully. To preserve integrity, Facebook will need to keep escalating its efforts to detect and disable fake accounts and coordinated inauthentic behavior. Increased transparency around bot account removals could also help reassure users and businesses. Finding the right balance between safety and openness will be critical as bots become more advanced and harder to distinguish from real people.

Sources

  • Facebook – Community Standards Enforcement Report, Q3 2021
  • Facebook Newsroom – Combating Hate and Misinformation
  • SparkToro – Analysis of Large Facebook Pages
  • NPR – Facebook Says It Removed More Than 3 Billion Fake Accounts In 2021
  • Forbes – Facebook Disabled 1.3 Billion Fake Accounts In Last 3 Months
  • Wall Street Journal – Facebook Says 11,000 People Work on Community Safety
