The only way to truly know whether your ad is performing at peak efficiency is by testing multiple iterations of the same ad. Also known as split testing, this method of analysis can be tedious at times, especially if you’re unfamiliar with Facebook Ads.
But don’t worry, we’ve got your back!
Since the birth of AdEspresso in 2011, we’ve had over 10 million ads created right here on our platform with our advanced split-testing capabilities.
This means that we’ve seen an incredible variety of ads from nearly every type of business. Not only that, but we’ve also seen millions of images, headlines, and calls to action across all of those verticals.
In this article, we’ll cover some of the most insightful experiments we’ve conducted and give you a complete A-to-Z guide to split testing every aspect of your Facebook campaigns to dramatically improve your ROI.
What is a split test?
A split test (also known as an A/B test) is a marketing strategy where two elements of a marketing campaign are tested against each other to analyze which one will deliver the best results.
Split testing can be applied to nearly everything you can think of: emails, landing pages, blog post titles, and of course, Facebook Ads. A good split test can result in drastic ROI improvements, even as high as 10x!
Everything can be tested, and even the smaller elements can drastically improve your marketing performance. Here are some examples of the most common split tests:
- Colors of key elements like the Call To Action button
- Images or video
- Copy (Try AdEspresso vs Start optimizing your Facebook Ads with AdEspresso)
- Calls to action (Sign Up vs Count me in!)
- Element Position (Signup form on the left or right side of the page)
- Audience Targeting (Gender, age, location)
Does split testing really work?
Instead of going into a lengthy paragraph telling you that split testing works, we’ll use some of our favorite experiments to show you the results you can achieve.
Take Strategyzer’s event promotion as a first example. For their first Facebook ad campaign, they split tested only audiences and different images, and the ad looked like this:
That approach cost Strategyzer $4,433.53 in around 3 weeks, and they got only one sale in return.
To improve these results, Sarah started studying Strategyzer’s content, including case studies and business stories, and used storytelling to write ads that gave the audience a taste of what they’d learn by attending the event.
Here’s an example of the improved ad she ran:
By changing the tone and length of the ad copy, Sarah was able to achieve some incredible results – and took the event from 1 purchase to 92, with an average CPA of $123.45.
For those of you doing the math, that’s a 97.2% decrease in cost per acquisition from changing the copy alone.
Here’s another example, where AdEspresso split tested two different Facebook Ads images:
Cost per conversion: $2.673
Cost per conversion: $1.036
As you can see, the ad on the right cut the cost per conversion by more than half (roughly a 2.6x improvement)!
Of course, not every split test will result in performance improvement. Sometimes, you’ll test a new design just to discover the original one was working better. This is part of the game and should not dissuade you from testing other variables.
What elements of a Facebook Ad should you split test, and how?
Everything would be a great answer to this question. It’s also a very unrealistic one.
For a split test to be reliable, every ad you’re testing needs to generate a good amount of data (conversions, clicks, likes … you name it). Testing hundreds of images or demographic audiences at once will spread your budget too thin and produce noisy data you can’t trust.
When testing multiple elements, things can quickly get out of hand. Just consider that testing 5 pictures, 5 titles and 5 demographic targets would result in the creation of 125 (5*5*5) different ads!
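That combinatorial explosion is easy to verify with a few lines of Python; the variation names below are just placeholders:

```python
from itertools import product

# Hypothetical variations: 5 pictures, 5 titles, 5 demographic targets
pictures = [f"picture_{i}" for i in range(1, 6)]
titles = [f"title_{i}" for i in range(1, 6)]
audiences = [f"audience_{i}" for i in range(1, 6)]

# Every ad is one combination of (picture, title, audience)
ads = list(product(pictures, titles, audiences))
print(len(ads))  # 125 different ads to fund and monitor
```

Add just one more element with five variations and you’re at 625 ads, which is why prioritizing matters.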
It’s very important to prioritize so that you’ll quickly have reliable results to optimize against.
We analyzed millions of dollars in Facebook Ads split tests and here are the elements that provided the biggest gains:
- Post Text
- Placement (where your ads are displayed)
- Landing Page
- Custom Audiences
- Relationship Status
- Purchase Behaviors
- Education Level
- Ad Type
- Bidding (Lowest Cost with or without cap)
- Optimization (Clicks, Conversions, Engagement)
Of course, not all of them may apply to your business, or you may already have an answer for some of them. Start making a short list of the experiments you want to start with. I’d pick no more than 4.
Once you’ve defined the categories for your split tests it’s time to start. Begin with broad experiments and once you’ve got results you can refine them, allowing you to optimize your ads much faster.
Let’s do an example, to promote our Facebook Ads Lead Generation eBook, we first tested two very different designs:
Wow, we did not see that coming. We were pretty sure that the photographic ad would have performed better. However, the data didn’t agree and numbers never lie. Once we had this figured out, we started split testing smaller elements of the winning combination:
Had we tested 10 different variations of every ad design we had in mind from day 0 it would have taken us weeks before having reliable data to optimize the campaign. By testing broader variations first and then fine-tuning the winner, we were able to boost our performance by 143% in just 4 days and then improve by another 13%.
The same approach works for most of the Facebook Ads split tests you can think of. Instead of immediately testing 10 age ranges, first test whether younger users (13-40) perform better than older ones (40-65). Once you have an answer, refine the experiment within the winning range (e.g. 13-18, 18-25, 25-35, 35-45).
Pick the metrics that will define success or failure
Before you rush to create your first split test, you need to decide how you’ll determine which experiment won and which lost.
CTR, Clicks, Spent, CPC, Cost per Conversion, Conversion Rate… Facebook offers so many metrics it can be overwhelming. Also, sometimes, they can be contradictory. I often see ads with a great CTR but a high CPC, while other ads have a terrible CTR but deliver a great CPA.
Unless you’re an expert, pick one single metric that you’ll use to judge your split tests.
For most of the campaigns, we suggest using the Cost Per Conversion. It’s simple and is the one that impacts the growth of your business the most.
If you’re a more advanced advertiser, you could track the revenue generated by each conversion and use the ROI as your key metric.
To make things simple, AdEspresso immediately highlights the metric that we think will be the most useful for you:
AdSet & Ads: How to structure your test
Now that you have an idea on what to test and a basic framework for testing it, let’s dig further and see how to organize your split tests with the Facebook ad campaign structure.
As you know, Facebook Advertising has a three-layer structure: Campaigns, AdSets, and Ads.
Let’s see how and when to use them.
I’m not a big fan of running split tests across multiple campaigns. It becomes extremely hard to analyze and compare the data.
You may, however, want to do it from time to time when you’re testing two extremely different variations such as the bidding type or the type of ad (a standard Newsfeed Ad vs a Multi-product Ad).
The AdSet is where you define the budget and the audience targeting. Since this is where we define our audience, this is also the best place to create our audience split tests.
If you have a $10 budget and want to test Male vs Female, you can create 2 AdSets with a $5 budget each, one targeting Men and one Women.
Ads contain your designs and are usually used within an AdSet to test images, text, and headlines.
If you’d like to split test your Facebook Ads with 5 pictures and 2 genders, the best setup according to Facebook’s best practices is:
- Adset 1 – Target Men – $5
- Image 1
- Image 2
- Image 3
- Image 4
- Image 5
- Adset 2 – Target Women – $5
- Image 1
- Image 2
- Image 3
- Image 4
- Image 5
There’s just one drawback to this setup. At the AdSet level, you can define a budget for every experiment and thus make sure each one receives a fairly even share of impressions; this is not possible at the Ad level.
The result is often an uneven distribution of the budget, where some experiments receive a lot of impressions and consume most of the budget while others are left with very few. This happens because Facebook is over-aggressive in deciding which ad is better and drives most of the AdSet’s budget to it.
Here’s an example:
One of the images tested received nearly 5 times the impressions and 4 times the budget of the other.
The only viable alternative is to test everything at an AdSet level with a structure like this:
- Adset 1 – Target Men – $1
- Image 1
- Adset 2 – Target Men – $1
- Image 2
- Adset 3 – Target Men – $1
- Image 3
- Adset 4 – Target Women – $1
- Image 1
- Adset 5 – Target Women – $1
- Image 2
- Adset 6 – Target Women – $1
- Image 3
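If you script your campaign setup, this one-AdSet-per-experiment structure is easy to generate. The sketch below builds plain dictionaries for illustration; it does not call the Facebook Marketing API, and the names and budgets are placeholders:

```python
from itertools import product

audiences = ["Men", "Women"]
images = ["Image 1", "Image 2", "Image 3"]

# One AdSet per (audience, image) pair, so each experiment
# controls its own budget and gets an even share of impressions
adsets = [
    {
        "name": f"Adset {i} - Target {aud} - $1",
        "audience": aud,
        "image": img,
        "daily_budget": 1.0,
    }
    for i, (aud, img) in enumerate(product(audiences, images), start=1)
]
print(len(adsets))  # 6 AdSets, one per experiment
```

The same loop scales to any mix of audiences and creatives, which is handy once the grid grows beyond what you’d build by hand.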
This structure can result in more reliable split tests, however, it works better with higher budgets and may slightly increase your overall costs (due to multiple AdSets competing with each other for the same audience).
Set the right budget for your tests
Facebook Ads split testing has a cost. As we already said, in order to run a meaningful experiment you’ll need to gather enough data.
Say you’re testing 5 different images. Before picking a winner you’ll want each ad you’re testing to have generated at least 10 or 20 conversions. You can use this calculator to understand if your results are statistically relevant.
The broader the difference between each variation’s performance, the sooner you’ll reach statistical relevancy. Small differences are less accurate and require more validation.
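If you’d rather not rely on an online calculator, a standard two-proportion z-test captures the same idea. This is a generic statistical sketch, not AdEspresso’s internal method:

```python
import math

def split_test_p_value(conv_a, impressions_a, conv_b, impressions_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = conv_a / impressions_a
    p_b = conv_b / impressions_b
    # Pooled rate under the null hypothesis that both variations convert equally
    pooled = (conv_a + conv_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; a small p-value means the gap is unlikely to be noise
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 30 vs 50 conversions on 1,000 impressions each: likely a real difference
print(split_test_p_value(30, 1000, 50, 1000))
```

A common rule of thumb is to call a winner only when the p-value drops below 0.05; with small conversion counts it rarely does, which is exactly the article’s point about gathering enough data first.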
If your average cost per conversion is $1, to successfully split test your images you’ll need to set up a budget of at least $50 ($1 * 5 images * 10 conversions), $100 would be even better.
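That back-of-the-envelope budget math can be wrapped in a tiny helper; the 10-to-20-conversion threshold is the article’s rule of thumb, not a hard statistical guarantee:

```python
def minimum_test_budget(cost_per_conversion, variations, conversions_each=10):
    """Smallest budget that lets every variation reach the conversion threshold."""
    return cost_per_conversion * variations * conversions_each

print(minimum_test_budget(1.00, 5))      # 50.0 -- the bare minimum
print(minimum_test_budget(1.00, 5, 20))  # 100.0 -- the safer target
```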
Of course, if your main metric is clicks, they usually come much cheaper and you’ll need a lower budget. On the other hand, if you have very expensive conversions, you’ll need a much higher budget.
Remember: before setting up a Facebook Ads split test, make sure you have enough budget allocated. I often see new advertisers fail because they try to test 250 different variations with a budget of a few dollars.
If you need to calculate your ads budget but don’t know where to start, you can use our handy budget calculator here.
What can go wrong with split testing
First of all, let me remind you: Be prepared to lose money or don’t even bother starting a split test.
Not all the experiments will be successful and not all the data will be reliable after a couple of days. You need to be aware of this when you test. Do not stop a split test after a few hours just because one variation seems extremely expensive – things can and will change quickly.
After a few days, what seemed like a clear loser could become a stunning success. Take your time, accept that you may lose some money and give every experiment its time. It’s worth it, we’re playing a long term game and every experiment, successful or not, will increase your understanding of your audience and your long term success.
Now that we’ve cleared the air, here are two additional risks you’ll want to take into account.
#1) Over-testing your audience could increase your costs
If you’re testing many demographic audiences (i.e. 2 Genders * 5 Interests * 5 Age ranges = 50 tests) you may end up creating many AdSets, each with a very small reach.
While the information gathered from such tests is still useful, allocating money to very niche audiences can drive your costs up, as Facebook will try at all costs to spend your $100 AdSet budget to reach, for example, the 2,000 users who live in San Francisco, are 18-19 years old, and are interested in Italian folk music.
This hyper-segmentation can be very expensive. If you run many split tests on your Facebook Ads’ audience, be sure to have a large audience size so that each variation will still target a pretty big user base.
#2) Design Testing can impact Virality
Hopefully, most of the time you’re going to promote interesting and engaging posts, but not really ‘viral hits’ that can gather hundreds or thousands of likes and shares.
However, when you do, split tests on the design may limit the organic impact of your ads. It’s pretty simple: if someone sees a post with 1,000 likes they’ll assume it’s good, and like it or share it. This behavior can amplify the reach of your ad.
However, if you split test the design of the ad and test 10 variations, you’re going to spread your social proof across 10 different Facebook Ads (which are simply posts that get promoted). Instead of one incredibly successful post with 1,000 likes, you may end up with 10 posts with 50 likes each, diminishing your potential amplification.
If you fall into this category and promote highly viral content, my suggestion is to adopt optimization strategy #3 from our tips below.
How to optimize your Facebook Ads based on your split test results
Facebook Ads Split Testing should not be an end in itself. Our goal is to optimize our campaigns and thus get more results while spending less.
Once your experiments start generating reliable data, there are several strategies you can adopt. These are our three favorites:
Strategy #1: Stop underperforming ads
This is by far the most commonly used. As soon as you have reliable data, simply pause the under-performing ads and only keep your best ones running.
If you’re using AdEspresso, it’s extremely simple to spot what’s not working and stop it; we even provide daily tips you can act on with a single click:
Strategy #2: Redistribute budget
Right now this is my favorite strategy. When relying on AdSets for split testing, instead of completely stopping the underperforming ads, you can simply redistribute the budget, shifting most of it toward the winners.
Allocate most of the budget to your top experiments while leaving the worst performers with a small AdSet budget of $1 per day. That lets you keep monitoring them at very low cost and catch any future change in performance.
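One way to implement this reallocation is to weight each AdSet by the inverse of its cost per acquisition while keeping the $1/day floor. The weighting scheme below is my own illustrative assumption, not an AdEspresso feature:

```python
def redistribute_budget(total_daily_budget, cpa_by_adset, floor=1.0):
    """Split the daily budget inversely to each AdSet's cost per acquisition,
    guaranteeing every AdSet at least `floor` dollars/day for monitoring."""
    weights = {name: 1.0 / cpa for name, cpa in cpa_by_adset.items()}
    spendable = total_daily_budget - floor * len(cpa_by_adset)
    total_weight = sum(weights.values())
    return {
        name: round(floor + spendable * w / total_weight, 2)
        for name, w in weights.items()
    }

# The $1-CPA winner gets most of the money; the $4-CPA laggard stays live
print(redistribute_budget(10.0, {"AdSet A": 1.0, "AdSet B": 4.0}))
```

Because the floor keeps every AdSet delivering, you’ll notice if a “loser” starts performing after your audience or creative fatigue shifts.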
Strategy #3: Test and Scale
This strategy is often used for landing pages and emails but not that often in the advertising world. However, it’s very effective and there’s no reason not to use it.
Run your experiments with the minimum budget possible to get reliable data and once you have a winner for each experiment, create a new campaign with just the winning ads and demographic audiences and let it spend all your remaining budget.
No matter what strategy you pick, you should always be testing. When you’ve found a winner, try to allocate a small part of your budget to set up a new campaign to further split test your Facebook Ads.
Inspiration: split test examples from our own Facebook ads
Finally, here are some additional examples from our own experiments.
You can click the title of each experiment to view a complete breakdown of targeting, optimization, budget, and results.
There’s no doubt that Split Testing your Facebook Ads is one of the most effective ways to drastically improve your results (aka get more bang for your buck 🤑).
At AdEspresso, we spend thousands of dollars every month testing everything, even the smallest changes, to constantly improve our ads, and so far we’ve achieved improvements of up to 10x.
Testing also helps us to better understand who our customers are and what they need the most.
What about you? Have you already set in place a continuous split testing process for your Facebook Ads? What was your greatest success? What was your most surprising result? Share your experience and your questions in the comments!