What is the 20% rule on Facebook ads?

The 20% rule is a guideline for Facebook ad campaigns that recommends spending 20% of your ad budget on testing new ads and 80% on scaling ads that are already working. The idea is to continually test new creative, audiences, and messaging so you can find the best-performing ads, while still dedicating most of your budget to the ads that you know are effective at driving your desired outcome, like sales or lead generation.

What is the reason behind the 20% rule?

The 20% rule exists because Facebook advertising requires constant testing and optimization. With millions of active advertisers and frequent algorithm changes, what works one month may not work the next. If you only focus on scaling existing ads without testing anything new, your results will slowly decline as those assets lose effectiveness. Testing new ads ensures you always have fresh creative and messaging in market so you can adapt as the platform changes.

The specific ratio of 20% spent on testing and 80% on scaling is a rule of thumb based on best practices. The idea is to test frequently enough to generate learnings and find better-performing ads, but not so much that you take budget away from ads you know work. The 80/20 split balances these two needs.

How do you follow the 20% rule?

Here are some tips for implementing the 20% rule on Facebook ads:

  • Calculate 20% of your overall ad budget and dedicate that amount explicitly to testing new ads each month.
  • Create a separate ad set for testing new creative and messaging. Aim for at least 2-3 new ad variations per campaign per month.
  • Set the test ad set budget to 20% of the campaign’s overall budget when launching (see the budget sketch after this list).
  • Use narrow targeting and lower bid strategies for test ad sets so they don’t spend too much before you have data.
  • Let the test run for 1-2 weeks minimum so you can accurately judge performance.
  • Pause lower-performing ads in the test ad set and put additional budget toward better-performing variations.
  • Take the 1-2 best ads from the test ad set each month and move them into the scaling ad set.
  • Add the remaining 80% of monthly budget to the scaling ad set to maximize reach for the winners.
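
As a quick illustration of the tips above, here is a minimal Python sketch of the 80/20 split for a single campaign. The budget figure and ad names are placeholder assumptions, not values from any real account.

    # Sketch of the 80/20 split described in the tips above.
    # The monthly budget and ad names are placeholder examples.
    MONTHLY_BUDGET = 10_000      # total campaign budget ($), example value
    TEST_SHARE = 0.20            # 20% reserved for testing new ads

    test_budget = round(MONTHLY_BUDGET * TEST_SHARE)    # $2,000
    scale_budget = MONTHLY_BUDGET - test_budget         # $8,000

    campaign_plan = {
        "test_ad_set": {
            "budget": test_budget,
            "ads": ["video_hook_v1", "testimonial_v1", "carousel_v2"],  # 2-3 new variations
        },
        "scaling_ad_set": {
            "budget": scale_budget,
            "ads": ["proven_winner_a", "proven_winner_b"],
        },
    }

    print(campaign_plan)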

What are the benefits of the 20% rule?

Following the 20% rule offers several advantages for Facebook advertisers:

  • Better ad performance – Testing new ads continually yields higher-converting creative, audiences, and messaging over time.
  • Lower cost per result – The best-performing ads drive lower CPAs and CPLs, so more of your budget goes toward results.
  • Adaptability – You avoid optimization stagnation and can pivot easily when algorithm changes occur.
  • More learnings – Testing gives you broader insights into what resonates best with your audiences.
  • Fresh creative – New ads keep your campaigns from feeling stale to viewers.

Implementing the 20% rule requires more work upfront to plan tests, create new ads, and analyze data. But this effort pays off with higher converting ads over the long run.

Are there any downsides to the 20% rule?

The main potential downsides of the 20% rule include:

  • More work – You need extra time each month to plan and execute new ad tests.
  • Lower initial performance – New untested ads may start with lower results before optimization.
  • Data fluctuation – Frequent testing can cause more variance in your campaign metrics.
  • Less scaling potential – More budget goes to testing instead of purely scaling winners.

However, most experts agree the long-term benefits outweigh these factors. Poorly optimized campaigns eventually plateau or decline. Continual testing provides compounding gains over months and years.

How do you analyze results from testing?

To effectively apply learnings from the 20% test budget, follow these tips:

  • Compare cost per result (CPR) metrics between test and scale ad sets – lower is better.
  • Judge new creatives based on relevance score and engagement metrics like CTR and comments.
  • Review the conversion funnel to see where new ads drop off.
  • Look for statistically significant differences, e.g. with t-tests or a two-proportion test (see the sketch below).
  • Check results across both desktop and mobile.
  • Build test ad sets around specific hypotheses.
  • Align conclusions to your overall marketing goals.

Ideally, pull bi-weekly or monthly reports on test ad set performance, and move the top performers into the scaling ad set. Pause clear underperformers quickly so you don’t waste budget.
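
For the significance check above, here is a minimal Python sketch assuming you have exported clicks and conversions per ad from Ads Manager. It uses a two-proportion z-test from statsmodels as a common stand-in for the t-test mentioned in the list; all numbers are placeholders.

    # Compare conversion rates of two ad variations exported from Ads Manager.
    # Placeholder numbers; a two-proportion z-test stands in for a t-test here.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [120, 95]      # conversions for ad A and ad B
    clicks      = [4000, 4100]   # clicks (denominator) for ad A and ad B

    z_stat, p_value = proportions_ztest(count=conversions, nobs=clicks)

    print(f"Ad A CVR: {conversions[0] / clicks[0]:.2%}")
    print(f"Ad B CVR: {conversions[1] / clicks[1]:.2%}")
    print(f"p-value: {p_value:.3f}")
    if p_value < 0.05:
        print("Difference is statistically significant at the 95% level.")
    else:
        print("Not significant yet - keep the test running or gather more data.")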

How do you scale the winning variations?

Once you’ve identified the best ads from testing, here are some tips for scaling them efficiently:

  • Broaden targeting on the winners for greater reach.
  • Increase budgets or bids on the new scaled ads.
  • Replicate the ad copy and creative across multiple ads and audiences.
  • Ensure scaling ad sets have the majority of your budget allocated.
  • Set scaling ads to run continuously rather than on a limited schedule.
  • Consider expanding the winning concept across more ad sets.
  • Build lookalike audiences from your highest-converting custom audiences.

Be patient when scaling new ads – it takes time for Facebook’s machine learning optimization to find the right users. Monitor closely for at least 2 weeks and increase budgets on winners gradually.

How much should you diversify vs. scale your top ads?

This depends on your results, but here are some best practices:

  • Scale the 1-2 very top ads aggressively – dedicate 50%+ of budget here.
  • Put 20-30% toward the next 3-4 good performers.
  • Test 10-20% on new concepts, angles and audiences.
  • Cap investment at 5-10% for middle and low performers until they prove themselves.

Strike a balance between over-diversification (too many low-budget ads) and over-scaling (one big winner receiving all spend). Monitor results over the delivery period and adjust budgets to direct more funds toward success.
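
As a rough illustration, the split above can be expressed as a short allocation sketch. The percentages below are one point within the ranges given, and the monthly budget is a placeholder.

    # Allocate a monthly budget across performance tiers.
    # Example percentages chosen from the ranges above; adjust to your own results.
    MONTHLY_BUDGET = 10_000

    tiers = {
        "top_winners":     0.55,   # 1-2 best ads, scaled aggressively (50%+)
        "good_performers": 0.25,   # next 3-4 solid ads (20-30%)
        "new_tests":       0.15,   # fresh concepts, angles, audiences (10-20%)
        "low_performers":  0.05,   # capped until they prove themselves (5-10%)
    }

    allocation = {tier: round(MONTHLY_BUDGET * share) for tier, share in tiers.items()}
    print(allocation)  # {'top_winners': 5500, 'good_performers': 2500, 'new_tests': 1500, 'low_performers': 500}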

What are some key metrics to track?

Critical metrics to track when following the 20% rule include:

  • Cost Per Result (purchase, lead, etc.) – The lower the average CPR, the more efficient your spending.
  • Relevance Score – Helps assess ad creative quality and how well the ad matches its audience.
  • CTR – Higher CTR indicates an engaging ad that drives clicks.
  • CPM – Lower CPM means cheaper audience targeting and delivery.
  • Frequency – Watch for over-exposure as frequency rises.
  • Conversion Rate – Improves when creative resonates with the audience.
  • CPC/CPA – Optimize bids toward your KPI.

Analyze both new test ad sets and the scaling winners holistically. Compare metrics across creatives, placements and audiences to guide optimization.
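
These metrics are simple ratios of raw delivery data. The sketch below shows the standard formulas with placeholder numbers rather than a real Ads Manager export.

    # Standard formulas behind the metrics above, computed from raw delivery data.
    # All input numbers are placeholders.
    spend       = 2_000.00   # dollars spent
    impressions = 250_000
    reach       = 90_000     # unique people reached
    clicks      = 3_500
    results     = 140        # purchases, leads, etc.

    ctr             = clicks / impressions          # click-through rate
    cpm             = spend / impressions * 1_000   # cost per 1,000 impressions
    cpc             = spend / clicks                # cost per click
    cost_per_result = spend / results               # CPR / CPA / CPL
    conversion_rate = results / clicks              # results per click
    frequency       = impressions / reach           # average exposures per person

    print(f"CTR {ctr:.2%} | CPM ${cpm:.2f} | CPC ${cpc:.2f} | "
          f"CPR ${cost_per_result:.2f} | CVR {conversion_rate:.2%} | Freq {frequency:.1f}")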

What is a good cost per result benchmark?

Ideal CPR benchmarks vary significantly by industry, market and campaign objective. Here are some average ranges to aim for:

Campaign Objective – Good CPR Benchmark:

  • Lead Generation: $20 – $40
  • App Installs: $2 – $5
  • Ecommerce Sales: 2.0x – 3.5x AOV
  • Webinar Registration: $10 – $25
  • SaaS Free Trial Signup: $5 – $20

Factor in your profit margins, LTV and resources when defining your target CPRs. Use the 20% rule to drive those metrics lower over time.
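
One common heuristic for turning margin and LTV into a target CPR is to cap acquisition cost at a fraction of the gross profit each result generates. The sketch below illustrates that idea; treat the inputs and the 30% cap as assumptions to replace with your own economics.

    # One heuristic for setting a target cost per result from your own economics.
    # All numbers are placeholders - plug in your real LTV and margin.
    customer_ltv = 180.00   # expected revenue per customer over their lifetime
    gross_margin = 0.60     # 60% of revenue is gross profit
    max_ad_share = 0.30     # allow ads to consume up to 30% of gross profit

    target_cpr = customer_ltv * gross_margin * max_ad_share
    print(f"Target cost per result: ${target_cpr:.2f}")   # $32.40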

How frequently should you change your Facebook ads?

As a general rule, aim to test completely new ad variations at least every 2-4 weeks. The 20% methodology supports this cadence. Beyond that, it depends on the campaign:

  • Retargeting – Update creative every 2-4 weeks.
  • Lead gen/sales – Test new offers/messaging monthly.
  • Branding – Quarterly ad refresh is fine.
  • New campaign – Test multiple new concepts before scaling.

Avoid changing your ads too rapidly, as it takes time for Facebook’s algorithms to optimize and for data to stabilize. But don’t go more than 2 months without substantive testing, or your ads will stagnate.

How many ad variations should you test per campaign?

Experts recommend testing at least 3-4 new ad variations per campaign when allocating your 20% test budget. This gives each concept enough data to support meaningful comparisons. More is generally better, but avoid testing too many concepts at once. At 10+ variations, the learning value diminishes as data gets diluted across too many ads. Aim for 3-7 new ad tests per campaign as a sweet spot.

Should you pause ads that are performing poorly?

Yes, it’s recommended to pause any clearly underperforming ads from your test ad set once you have sufficient data (at least 7-10 days). This prevents wasting additional budget on ads that aren’t working. That said, don’t pause everything too quickly – some ads need a longer optimization period before you can fully judge performance. Ads with exceptionally high CPAs or very low engagement, however, should be paused within 1-2 weeks.
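
This pausing guidance can be captured as a simple rule: require a minimum run time first, then cut ads whose CPA or engagement falls far outside your target. Here is a minimal sketch; the 7-day minimum, 2x CPA multiple, and 0.5% CTR floor are assumptions to tune.

    # Simple pause rule reflecting the guidance above. Thresholds are assumptions.
    def should_pause(days_running: int, spend: float, conversions: int,
                     ctr: float, target_cpa: float) -> bool:
        if days_running < 7:                  # not enough data yet - keep running
            return False
        cpa = spend / conversions if conversions else float("inf")
        too_expensive  = cpa > 2.0 * target_cpa   # CPA far above target
        low_engagement = ctr < 0.005              # under 0.5% CTR
        return too_expensive or low_engagement

    # Example: 10 days in, $400 spent, 4 conversions, 0.4% CTR, $40 target CPA
    print(should_pause(10, 400.0, 4, 0.004, 40.0))   # True - pause this ad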

How do you calculate the 20% test budget per campaign?

Here is a simple formula to calculate the 20% test budget:

  1. Total Campaign Budget = $10,000/month
  2. 20% of Total Budget = 20% * $10,000 = $2,000
  3. So, $2,000 per month for testing new ad variations

When launching new campaigns, take historical performance data into account if available. Allow more budget for testing completely new campaign concepts vs. optimizing proven winners. Phase the test budget up slowly to avoid overspending while Facebook gathers data.
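
To make the “phase the test budget up slowly” advice concrete, here is a small sketch that ramps the test share over the first weeks of a new campaign before settling at 20%. The weekly schedule is an assumption, not a platform feature.

    # Ramp the test share up over the first weeks of a new campaign,
    # then hold at 20%. The weekly schedule here is an assumption.
    MONTHLY_BUDGET = 10_000
    WEEKLY_BUDGET = MONTHLY_BUDGET / 4

    ramp = {1: 0.10, 2: 0.15, 3: 0.20, 4: 0.20}   # week -> test share

    for week, share in ramp.items():
        test = round(WEEKLY_BUDGET * share)
        scale = round(WEEKLY_BUDGET - test)
        print(f"Week {week}: test ${test}, scale ${scale}")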

Should you ever exceed 20% test budget? When?

It can be smart to exceed the 20% test budget in certain situations, such as:

  • Launching an entirely new campaign where no historical data exists.
  • Testing Facebook vs Instagram ads for the first time.
  • A/B testing video vs image ads broadly.
  • Entering a new market or segment.
  • Current results are stagnating and need fresh ideas.

In these cases, consider allocating 30-40% of budget to testing upfront. But still have a plan to scale winners once patterns emerge. You can shift back toward 20% testing budget once learnings solidify.

What are some common mistakes when following the 20% rule?

Some frequent mistakes to avoid include:

  • Changing ads too frequently without giving time for data to stabilize.
  • Testing too many concepts at once, diluting insights.
  • Ending tests after just a few days, before reaching statistical significance.
  • Not tracking or analyzing test ad performance rigorously.
  • Failing to scale clearly winning ads aggressively enough.
  • Letting test budget creep far above 20% long-term.
  • Testing new ideas in the middle of campaigns instead of dedicated ad sets.

Structure your testing methodology carefully to maximize learnings. Don’t let great performers languish undiscovered. Stay disciplined to keep test budget around 20% long-term.

What are some examples of successful 20% rule ad tests?

Here are some real examples of successful ad experiments using the 20% rule:

  • An ecommerce site tested product videos vs. image ads. Video drove 2x higher CTR and lower CPAs.
  • A D2C brand tested customer testimonial vs. product benefit messaging. Testimonials won.
  • A mobile game developer tested Level 1 vs Level 10 creative. Level 10 doubled install rates.
  • A B2B SaaS company tested targeting by industry vs. role. Role-based ads performed best.
  • An agency tested emotional vs. rational creative appeals. Emotional won across demographics.

Leverage the 20% budget to answer core strategic questions like these examples. The insights will pay dividends.

Does the 20% rule apply outside of Facebook?

While the 20% rule originated with Facebook ads, its core principles can be applied to other platforms:

  • Google Ads – Test new keywords, ad copy, landing pages.
  • Amazon Ads – Test new selection, images, copy, targeting.
  • Pinterest Ads – Test creatives, placements, audiences.
  • Instagram Ads – Test Stories vs. Feed vs. Reels.
  • TikTok Ads – Test challenges, effects, sounds.

The ability to consistently test and optimize is key across channels. Apply a 20% framework tailored to each platform’s best practices.

Conclusion

The 20% rule for Facebook ad testing provides the right balance between discovering high-converting new ads and scaling proven winners. By dedicating 20% of your budget to structured tests each month, you’ll enjoy better performance, lower costs, and greater adaptability over the long term. Avoid over-testing or changing ads too rapidly, but don’t go stale by never evolving your creative. Target the 1-2 big winners and implement their lessons broadly. With a thoughtful 20% framework, your Facebook ad results will steadily improve.