Facebook split testing is the most reliable way to measure ad efficiency. It works by running multiple variations of the same ad against each other. The process can be tedious if you’re unfamiliar with the Facebook Ads platform, but we’ve got your back!
We’ve managed over $636 million in ad spend since AdEspresso started in 2011. Users created over 10 million ads (and counting) using our platform’s advanced split-testing capabilities.
In this complete guide to split-testing, we will go over some of our most insightful experiments. We will also cover testing all aspects of your Facebook campaigns and optimizing for peak ROI.
A split test (or A/B test) is a marketing strategy that pits two versions of a campaign element against each other to find out which delivers the best results. A good split test can increase ROI by as much as 10x.
Split testing applies to nearly every area of marketing: emails, landing pages, blog post titles, Facebook Ads, and more.
Every variable can be tested. You’d be surprised how even the smallest elements of your design or copy can drastically improve marketing performance.
Here are some common split test types:
- Creative: testing different ad images, fonts, copy, calls to action, etc.
- Audience: targeting different audiences and demographics
- Delivery optimization: running campaigns with and without campaign budget optimization
- Placements: testing placement types, e.g. automatic vs specific
- Product set: comparing the performance of different product sets
- Custom variables: testing anything else within your campaign
Here are some examples of how you can put these test types into action:
- Try different colors of key elements like the call to action button
- Test different types of media, e.g. images vs video
- Test different versions of copy, e.g. “Try AdEspresso” vs “Start optimizing your Facebook Ads with AdEspresso”
- Play around with your call to action, e.g. “Sign Up” vs “Count me in!”
- Rearrange the elements of your ad or page, e.g. place the signup form on the left or right side of the page
The best way to show off the value of Facebook split testing is by example.
This first example is from Strategyzer, who worked with the AOK Marketing team and our own marketing expert Sarah Sal to increase ticket sales for their event.
For their first Facebook ad campaign, they split tested audiences and different images. The ad looked like this:
This approach cost Strategyzer $4,433.53 in around 3 weeks, and they got only one sale in return.
To improve these results, Sarah started studying Strategyzer’s content, including case studies and business stories. She used storytelling to write ads that gave the audience a taste of what they’d learn by attending the event.
Here’s an example of the improved ad:
By changing the tone and length of the ad copy, Sarah was able to achieve some incredible results – and took the event from 1 purchase to 92, with an average CPA of $123.45.
For those of you doing the math, that’s a decrease of over 97% in cost per acquisition from changing the copy alone.
Here’s another example, where AdEspresso split tested two different Facebook Ads images:
As you can see, the ad on the right delivered less than half the cost per conversion of the ad on the left!
Remember that not every split test will improve performance. You might test a new design only to discover that the original was more effective. Don’t let this stop you from testing other variables.
The key to successful split testing for Facebook ads is strategy. You need to test the metrics and variables most relevant to your goals, as this will strongly affect your ROI.
Structure and budget will also heavily impact the results of your split test. If you want to run a successful split testing experiment on Facebook, here’s what you need to know:
“All elements” would be the ideal answer to this question, but that is unrealistic.
A reliable split test requires every ad you test to generate a good amount of data (this could be conversions, clicks, likes, etc.). Testing hundreds of images or demographic audiences at once will likely result in random, untrustworthy data.
Testing multiple elements can quickly get out of hand. Think about this: testing 5 pictures, 5 titles and 5 demographic targets will result in the creation of 125 (5 x 5 x 5) different ads.
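The combinatorial math is easy to verify. Here’s a quick sketch in Python (the element names are hypothetical placeholders):

```python
from itertools import product

# Hypothetical variations for each element under test
pictures = [f"picture_{i}" for i in range(1, 6)]    # 5 images
titles = [f"title_{i}" for i in range(1, 6)]        # 5 titles
audiences = [f"audience_{i}" for i in range(1, 6)]  # 5 demographic targets

# Every combination becomes a distinct ad you have to fund and measure
ads = list(product(pictures, titles, audiences))
print(len(ads))  # 125
```

Add a fourth element with 5 variations and you’re suddenly at 625 ads, which is why prioritizing is essential.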
The key to success is prioritizing your tests so that you get reliable results and can optimize your ads accordingly.
But how do you decide your priorities? Based on our analysis, these elements provide the biggest gains:
- Post copy
- Placement (where your ads are displayed)
- Landing page copy and design
- Custom audiences
- Relationship status
- Purchase behaviors
- Education level
- Ad type
- Bidding (lowest cost with or without cap)
- Optimization (clicks, conversions, engagement)
This list is general — some elements won’t apply to your business, and you may already know the best course of action for others. To streamline your Facebook split testing process, make a short list of about 4 experiments (relevant to your business) you’d like to start with.
With categories defined, it’s time to start. You should begin with broad experiments, refining as you get results for faster ad optimization.
Let’s look at an example. To promote our Facebook Ads Lead Generation eBook, we first tested two very different designs:
Wow, we did not see that coming. We were pretty sure the photographic ad would perform better, but the data didn’t confirm our guess, and numbers never lie.
Once we had this figured out, we started split testing smaller elements of the winning combination:
Had we tested 10 different variations of every ad design we had in mind from day one, collecting reliable data to optimize the campaign would have taken weeks. By testing broader variations first and then fine-tuning the winner, we boosted our performance by 143% in just 4 days, then improved it by another 13%.
This approach applies to almost any Facebook ad split test you can think of. Rather than testing 10 age ranges right off the bat, start by comparing younger users (13-40) to older users (40-65). Then refine further tests to compare narrower brackets within the winning range (e.g. 13-18, 18-25, 25-32, 32-40).
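The broad-then-narrow refinement can be sketched as a tiny helper (the `subdivide` function and its 10-year bracket width are illustrative assumptions, not a Facebook feature):

```python
def subdivide(lo, hi, step=10):
    """Split a winning broad age range into narrower brackets
    for the next round of testing."""
    edges = list(range(lo, hi, step)) + [hi]
    return list(zip(edges[:-1], edges[1:]))

# Round 1: broad comparison of (13, 40) vs (40, 65).
# Suppose the younger range wins; round 2 refines within it:
print(subdivide(13, 40))  # [(13, 23), (23, 33), (33, 40)]
```

Each round spends the budget on fewer, larger buckets, so every bucket gathers enough data to trust.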
Decide how you’ll define success and failure before creating your first split test.
The number of ad performance metrics Facebook offers might seem overwhelming at first. All of them can be used to measure the success of your campaigns.
Here are a few metrics paid social media specialists usually use to define success:
- Cost per result (CPR)
- Ad impressions
- Ad frequency
- Click-through rates
- Cost per click (CPC)
- Cost per thousand impressions (CPM)
- Cost per conversion (also known as cost per action/CPA)
If you aren’t an expert, it’s a good idea to start with a single metric. Testing multiple metrics at once may get overwhelming, and can sometimes produce confusing results.
For example, ads with a great click-through rate can have a high cost per click. Other ads can have a terrible click-through rate and a great cost per action.
Pick the metric that impacts your business growth the most. Cost per conversion is a good starting point, as it has the greatest impact on growth for most businesses.
Advanced advertisers can track revenue generated by each conversion and use the ROI as the key metric.
AdEspresso simplifies this process by highlighting the metric we predict will be most valuable to you:
Now that you have your key metric and a basic framework for testing it, it’s time to organize your split tests within the Facebook ad campaign structure.
As you probably know, Facebook advertising has a three-layer structure composed of campaigns, ad sets, and ads.
Let’s dig into how (and when) to use them.
Running split tests across multiple campaigns makes data hard to analyze and compare, so reserve this technique for testing two extremely different variations, like bidding type or ad type (e.g. a standard news feed ad vs. a multi-product ad).
The ad set is where you define budget and audience targeting. This makes ad sets the best place to create audience split tests.
Example: If you have a $10 budget and want to test a male vs female audience, you can create 2 ad sets, each with a $5 budget, one targeting men and the other targeting women.
Ads contain your designs. This is where you test images, texts, headlines, etc.
If you’d like to split test your Facebook Ads with 5 pictures and 2 genders, the best setup according to Facebook best practices is:
Ad set 1 – Target Men – $5 – contains all 5 picture ads
Ad set 2 – Target Women – $5 – contains all 5 picture ads
This setup does come with one drawback. At the ad set level, you can define a budget for every experiment, ensuring each one receives a relatively even number of impressions. This is not possible at the Ad level.
Uneven distribution of the budget often occurs as a result. Some experiments will receive a lot of impressions, consuming most of the budget and leaving other experiments with fewer impressions. Why does this happen? Facebook can be overly aggressive in determining which ad is better, spending the majority of the budget on its ad of choice.
In the example above, one of the images received 3 times more impressions and spent 3 times the budget of the other.
Here’s an alternative ad set level structure that avoids this issue by giving each picture its own ad set:
Ad set 1 – Target Men – $1
Ad set 2 – Target Men – $1
Ad set 3 – Target Men – $1
Ad set 4 – Target Men – $1
Ad set 5 – Target Men – $1
Ad set 6 – Target Women – $1
Ad set 7 – Target Women – $1
Ad set 8 – Target Women – $1
Ad set 9 – Target Women – $1
Ad set 10 – Target Women – $1
While this structure usually results in more reliable split tests, it can increase overall costs as multiple ad sets compete for the same audience.
It’s important to note here that Facebook now has features that make separate tests for gender and age unnecessary: after running an ad set, you can check the performance breakdown by age and gender. Split testing these criteria does give you more control over optimization, though, since you can allocate as little as 10% of your budget to a certain age or gender.
Set the right budget for your split test
A meaningful split test requires data, and running Facebook Ads does come at a cost.
How do you know when you’ve gathered enough data? Say you’re testing 5 different images. Before choosing a winner, each ad needs to generate 10-20 conversions.
Here’s a calculator for deciding when results are statistically significant.
The bigger the performance gap between variations, the sooner you’ll reach statistical significance. Small differences are harder to trust and require more data to validate.
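One common way such calculators judge significance is a two-proportion z-test: a z-score above roughly 1.96 means the difference is unlikely to be chance at the 95% confidence level. Here’s a minimal pure-Python sketch (the conversion counts below are made up for illustration):

```python
import math

def z_two_proportions(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# A wide performance gap is significant with modest data...
print(abs(z_two_proportions(40, 1000, 10, 1000)) > 1.96)  # True
# ...a narrow gap with the same sample size is not
print(abs(z_two_proportions(22, 1000, 20, 1000)) > 1.96)  # False
```

This is why close races need far more impressions before you can confidently call a winner.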
If your average cost per conversion is $1, you’ll need a budget of at least $50 ($1 x 5 images x 10 conversions). The higher the budget, the more reliable your results.
If your main metric is clicks, you won’t need such a high budget, since clicks tend to be cheaper. More expensive conversions will require a bigger budget.
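That back-of-the-envelope formula is easy to encode (the 10-conversions-per-variation threshold is the rule of thumb from above):

```python
def minimum_test_budget(cost_per_conversion, variations, conversions_each=10):
    """Rough floor for a split-test budget: every variation needs
    enough conversions before you can trust a winner."""
    return cost_per_conversion * variations * conversions_each

print(minimum_test_budget(1.00, 5))  # 50.0 -- the 5-image example above
print(minimum_test_budget(8.00, 3))  # 240.0 -- pricier conversions, bigger budget
```

Treat the result as a floor, not a target; anything below it risks calling a winner on noise.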
Remember, it’s important to make sure your budget is properly allocated before you set up your Facebook ads split test. Testing too many variations with too small a budget won’t provide you with reliable data.
Struggling to calculate your ad budget? Try using our handy campaign budget calculator.
Be prepared to lose some money while A/B testing; it’s all part of the process. Not every experiment will be successful, and it’s important to keep this in mind throughout.
Don’t stop a split test after a few hours because one variation seems especially expensive. Things can and will change quickly.
In the span of a few days, the clear loser could become the overwhelming winner. Accept that you might lose some money, and give every experiment time to yield results. It’s worth it.
Split testing is a long game and every experiment, successful or not, brings you closer to understanding your audience.
Keep an eye out for two additional risks:
This issue can occur when you test many demographic audiences (for example, 2 genders x 5 interests x 5 age ranges = 50 tests) and end up with many ad sets, each with minimal reach.
While these tests still yield useful information, allocating money to niche audiences can drive costs up as Facebook will try every way possible to reach, for example, the 2,000 users who live in San Francisco, are 18-19 years old, and are interested in Italian folk music.
This hypersegmentation gets expensive. To avoid the issue, make sure your audience size is large enough so that each variation will still target a sizable user base.
Hopefully, the content you promote for Facebook ad split testing will be engaging, but not “viral hit” material.
Split tests for post design can limit the organic impact of your ads. When users see a post with 1,000 likes, the high number can influence them to engage.
However, split testing the design with 10 variations spreads the engagement across 10 different Facebook ads (i.e. posts that get promoted). Rather than one post with very high engagement, you may end up with 10 posts with average engagement, diminishing the potential amplification.
If your content has high virality potential, try to adopt optimization strategy #3 from our tips below.
Now that you know how to split test Facebook ads, the work isn’t finished. Your next goal is to optimize your ad campaigns for maximum ROI.
Once your experiments yield reliable data, there are many strategies you can adopt to do so.
Here are our 3 favorites:
This is the most common option. Once you have reliable data, pause the underperforming ads and keep the best ones running.
If you’re using AdEspresso, knowing what’s working and what isn’t — and acting accordingly — is a piece of cake. We even provide daily tips you can action with a simple click:
This might be our favorite strategy. Instead of stopping your underperforming ads entirely, lower their budgets and shift the difference toward your winners.
This puts most of your money behind your top experiments while leaving the underperformers with a minimal ad set budget of $1 per day.
Why do this? To collect more data, of course. You will be able to continue monitoring your worst experiments to see if anything changes in the future (at a very low cost).
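A redistribution rule like this can be sketched as weighting each ad set by inverse cost per acquisition while enforcing a $1/day floor (this helper and its CPA figures are hypothetical, not AdEspresso’s actual rule engine):

```python
def redistribute(cpa_by_ad_set, total_budget, floor=1.0):
    """Shift daily budget toward low-CPA ad sets, keeping a floor
    on underperformers so they continue collecting data cheaply."""
    weights = {name: 1 / cpa for name, cpa in cpa_by_ad_set.items()}
    spendable = total_budget - floor * len(cpa_by_ad_set)
    total_w = sum(weights.values())
    return {name: floor + spendable * w / total_w for name, w in weights.items()}

# Hypothetical CPAs: ad_set_a converts at $2, ad_set_b at $8
budgets = redistribute({"ad_set_a": 2.0, "ad_set_b": 8.0}, total_budget=10.0)
print(budgets)  # ad_set_a (the cheaper performer) gets the larger share
```

The floor is the point: the losing ad set keeps spending a trickle, so you’ll notice if its performance ever turns around.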
AdEspresso makes this process easy as well. You can even create automated rules that automatically distribute your budget across ad sets based on their performance:
This strategy is used more for landing pages and emails than for social media advertising, but its effectiveness makes it perfect for Facebook ads split testing.
Run experiments with the lowest possible budget to get reliable data. Once you have a winner for each experiment, spend all your remaining budget creating a new campaign with the winning ads and demographic audiences.
No matter what strategy you choose, keep testing. Whenever you find a winner, try allocating a small portion of your budget to set up a new campaign and further split test your successful Facebook ads. Your target audience is always changing, and your ad strategy should remain flexible.
Want to see these strategies in action? Here are some examples from our own experiments.
Click the title of each experiment to view a complete breakdown of targeting, optimization, budget and results.
Split testing Facebook ads is one of the most effective ways to drastically improve ad spend ROI.
It can also help you understand who your customers are and what they need most, informing future content creation.
Already started split testing for Facebook Ads? Share your experience and ask questions in the comments!