How many times have you heard the quote “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”?
Probably very often, but what you may not know is that this quote, commonly attributed to retail pioneer John Wanamaker, is over a century old.
Nowadays this fascinating marketing gem is totally outdated. When dealing with online advertising, not only can you know which half of your advertising money is generating the best ROI, you can also test hundreds of different variations of your ads and allocate your budget in real time accordingly.
This is exactly what we’re talking about today: Facebook Ads split testing (aka A/B testing).
Keep reading and you’ll learn, from A to Z, how to split test every aspect of your Facebook advertising to understand what’s working and what’s not, and ultimately improve your campaigns’ ROI dramatically.
What is a split test?
A split test is a marketing strategy where two variations of a campaign element are tested against each other to analyze which one delivers the best results.
Split testing can be applied to everything: emails, landing pages, blog post titles and of course, Facebook Ads.
A good split test can result in huge ROI improvements that can easily be in the 10x range.
Everything can be tested, and even the smallest element can drastically improve your marketing performance. Here are some examples of elements usually tested:
- Colors of key elements like the Call To Action
- Texts (Try AdEspresso Vs. Start optimizing your Facebook Ads with AdEspresso)
- Calls to action (Sign Up Vs. Count me in!)
- Element Position (Signup form on the left or right side of the page)
- Audience Targeting (for advertising, Men Vs. Women)
Does split testing really work?
In both of his presidential campaigns, President Obama made intensive use of split tests for his donation landing pages and emails. These tests helped optimize every communication sent out to users and maximize the donations, which reached the stunning amount of $690 million. Check out these two subject lines that were tested:
- I will be outspent – $2,540,866
- The one thing the polls got right – $403,603
The first subject line performed 529% better than the other and generated $2,137,263 more funding for the campaign.
Obama’s team performed hundreds of split tests, with improvements ranging from a small 5% to an amazing 500%. Here are two examples of the different landing pages they tested:
Here’s another example, where we split tested two different Facebook Ads images:
Cost per conversion: $2.673
Cost per conversion: $1.036
As you can see, the ad on the right delivered a cost per conversion roughly 2.5 times better.
Of course, not every split test will result in a performance improvement. Sometimes, you’ll test a new design just to discover the original one was working better. This is part of the game and should not dissuade you from testing.
What elements of a Facebook Ad should you split test, and how?
Everything would be a great answer to this question. It’s also a very unrealistic one.
For a split test to be reliable, every ad you’re testing needs to generate a good amount of data (conversions, clicks, likes… you name it). Testing hundreds of images or demographic audiences at once will likely produce noisy data you cannot trust.
When testing multiple elements, things can quickly get out of hand. Just consider that testing 5 pictures, 5 titles and 5 demographic targets would result in the creation of 125 (5*5*5) different ads!
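To see how quickly combinations explode, here’s a quick sketch in plain Python (the variation names are made up) that enumerates every ad such a test would require:

```python
from itertools import product

# Hypothetical variation lists: 5 options for each of 3 elements
images = [f"image_{i}" for i in range(1, 6)]
titles = [f"title_{i}" for i in range(1, 6)]
audiences = [f"audience_{i}" for i in range(1, 6)]

# Every combination becomes one ad to create and test
ads = list(product(images, titles, audiences))
print(len(ads))  # 5 * 5 * 5 = 125
```

Adding just one more element with 5 options would multiply that to 625 ads, which is why prioritizing matters.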
It’s very important to prioritize so that you’ll quickly have reliable results to optimize upon.
We’ve analyzed millions of dollars in Facebook Ads split tests, and here are the elements that provided the biggest gains:
- Post Text
- Placement (where your ads are displayed)
- Landing Page
- Custom Audiences
- Relationship Status
- Purchase Behaviors
- Education Level
- Ad Type
- Bidding (oCPM, CPC, CPM)
- oCPM Optimization (Clicks, Conversions, Engagement)
Of course, not all of them may apply to your business, or you may already have an answer for some of them. Start making a short list of the experiments you want to start with. I’d pick no more than 4.
Once you’ve defined the categories for your split tests, it’s time to start. Begin with broad experiments; once you’ve got results, you can refine them, allowing you to optimize your ads much faster.
Let’s look at an example. To promote our Facebook Ads Lead Generation eBook, we first tested two very different designs:
Wow, we had not seen that coming. We were pretty sure the photographic ad would perform better. However, the data didn’t agree, and numbers never lie. Once we had that figured out, we started split testing smaller elements of the winning combination:
Had we tested 10 different variations of every ad design we had in mind from day 0, it would have taken weeks to gather reliable data to optimize the campaign. By testing broader variations first and then split testing smaller fine-tunes of the winner, we were able to boost our performance by 143% in just 4 days, and then improve it by another 13%.
The same approach can be used with most of the Facebook Ads split tests you can think of. Instead of immediately testing 10 age ranges, why not first test whether younger users (13-40) work better than older ones (40-65)? Once you have an answer, refine your experiment to test further within the winning range (i.e. 13-18, 18-25, 25-35, 35-45).
Pick the metrics that will define success or failure
Before you rush to create your first split test, you need to decide how you will define which experiment won and which lost.
CTR, clicks, spend, CPC, cost per conversion, conversion rate… Facebook offers so many metrics it can be overwhelming. Sometimes they can even be contradictory: I often see ads with a great CTR but a high CPC, while other ads with a terrible CTR deliver a great cost per conversion.
Unless you’re an expert, pick one single metric that you’ll use to judge your split tests.
For most campaigns I suggest using the cost per conversion. It’s simple, and it’s the metric with the most direct impact on the growth of your business.
If you’re a more advanced user, you could track the revenues generated by each conversion and use the ROI as your key metric.
To make things simple, in AdEspresso, we immediately highlight for you the metric that we think will be the most useful for you:
AdSet & Ads: How to structure your test
Now that you have an idea of what to test and a basic framework for testing it, let’s dig deeper and see how to organize your split tests within the Facebook ad campaign structure.
As you know, Facebook advertising has a 3-layer structure: Campaigns, AdSets, and Ads.
Let’s see how and when to use them.
I’m not a big fan of running split tests across multiple campaigns. It becomes extremely hard to analyze and compare the data.
You may however want to do it from time to time when you’re testing two extremely different variations such as the bidding type or the type of ad (a standard Newsfeed Ad vs a Multi-product Ad).
The AdSet is where you define the budget and the audience targeting, which makes it the natural level at which to test your demographic audiences.
If you have a $10 budget and want to test Male Vs. Female you can create 2 AdSets with a $5 budget each, one targeting Men and one Women.
Ads contain your designs and are usually used, within an AdSet, to test images, texts, and headlines.
If you’d like to split test your Facebook Ads with 5 pictures and gender, the best setup, according to Facebook’s best practices, is:
- Adset 1 – Target Men – $5
  - Image 1
  - Image 2
  - Image 3
  - Image 4
  - Image 5
- Adset 2 – Target Women – $5
  - Image 1
  - Image 2
  - Image 3
  - Image 4
  - Image 5
There’s just one drawback to this setup. While at the AdSet level you can define a budget for every experiment and thus make sure each one receives a fairly even number of impressions, this is not possible at the Ad level.
This often results in an uneven distribution of the budget, where some experiments receive a lot of impressions and consume most of the budget, leaving others under-tested. This happens because Facebook is overly aggressive in deciding which ad is better and drives most of the AdSet’s budget to it.
Here’s an example:
One of the images tested received nearly 5 times the impressions and 4 times the budget of the other.
The only viable alternative is to test everything at an AdSet level with a structure like this:
- Adset 1 – Target Men – $1
  - Image 1
- Adset 2 – Target Men – $1
  - Image 2
- Adset 3 – Target Men – $1
  - Image 3
- Adset 4 – Target Women – $1
  - Image 1
- Adset 5 – Target Women – $1
  - Image 2
- Adset 6 – Target Women – $1
  - Image 3
This structure can result in more reliable split tests. However, it works best with higher budgets and may slightly increase your overall costs (since multiple AdSets compete with each other for the same audience).
Set the right budget for your tests
Facebook Ads split testing has a cost. As we’ve already said, to run a meaningful experiment you’ll need to gather enough data.
Say you’re testing 5 different images. Before picking a winner, you’ll want each ad you’re testing to have generated at least 10 or 20 conversions. You can use an A/B test significance calculator to check whether your results are statistically significant.
The bigger the difference between each variation’s performance, the sooner you’ll reach statistical significance. Small differences are harder to detect and require more data to validate.
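As a rough sketch of what such a calculator does under the hood, here’s a two-proportion z-test in plain Python (the sample numbers are made up for illustration):

```python
from math import erf, sqrt

def split_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: p-value for the chance that the observed
    difference in conversion rate between variations A and B is just noise."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: 20 conversions out of 500 impressions vs. 10 out of 500
p = split_test_p_value(20, 500, 10, 500)
print(p < 0.05)  # still above the usual 5% threshold: keep the test running
```

Even though one variation converted twice as well in this example, 500 impressions each is not yet enough to call it with 95% confidence, which is exactly why small budgets produce unreliable tests.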
If your average cost per conversion is $1, to successfully split test your images you’ll need to setup a budget of at least $50 ($1 * 5 images * 10 conversions), $100 would ideally be even better.
Of course, if your main metric is clicks, they usually come much cheaper and you’ll need a lower budget. On the other hand, if your conversions are very expensive, you’ll need a much higher budget.
Remember: before setting up a Facebook Ads split test, be sure you have enough budget allocated. I often see new users fail because they try to test 250 different variations with a budget of a few dollars.
What can go wrong with Split Testing
First of all, let me remind you: be prepared to lose money, or don’t even bother starting a split test.
Not every experiment will be successful, and not all the data will be reliable after just a couple of days. You need to be aware of this when you test. Don’t stop a split test after a few hours just because one variation seems extremely expensive; things can change quickly.
After a good sample of data what seemed like a clear loser could become a stunning success. Take your time, accept that you may lose some money and give every experiment its time. It’s worth it, we’re playing a long term game and every experiment, successful or not, will increase your understanding of your audience and your long term success.
Now that we’ve cleared that up, here are two additional risks you should take into account.
Over testing your audience could increase your costs
If you’re testing many demographic audiences (i.e. 2 Genders * 5 Interests * 5 Age ranges = 50 tests) you may end up creating many AdSets, each with a very small reach.
While the information gathered from such tests is still useful, allocating money to very niche audiences can drive your costs up, as Facebook will try any way it can to spend your $100 AdSet budget to reach, for example, the 2,000 users who live in San Francisco, are 18-19 years old, and are interested in Italian folk music.
This hyper segmentation can be very expensive. If you run many split tests on your Facebook Ads’ audience, be sure to have a very big overall reach so that each variation will still target a pretty big user base.
Design Testing & Virality
Most of the time, hopefully, you’re going to promote interesting and engaging posts, but not truly viral hits that gather hundreds or thousands of likes and shares.
However, when you do, split tests on the design may limit the organic impact of your ads. It’s pretty simple: if someone sees a post with 1,000 likes they’ll assume it’s good, and like it or share it. This behavior can incredibly amplify the reach of your ad.
However, if you split test the design of the ad with, let’s say, 10 variations, you’ll spread the potential virality thin across 10 different Facebook Ads (which are simply posts that get promoted). Instead of one incredibly successful post with 1,000 likes, you may end up with 10 posts with 50 likes each, diminishing your potential amplification.
If you fall into this category and promote highly viral content, my suggestion is to adopt optimization strategy #3 from the next chapter.
How to optimize your Facebook Ads based on your split test results
Facebook Ads split testing should not be an end in itself. The goal is to optimize your campaigns and thus get more results while spending less.
Once your experiments start generating reliable data, there are several strategies you can adopt:
1) Stop underperforming ads
This is by far the most commonly used. As soon as you have reliable data, simply pause the under-performing ads and only keep your best ones running.
If you’re using AdEspresso, it’s extremely simple to understand what’s not working and stop it; we even provide you with daily tips:
2) Redistribute budget
Right now this is my favorite strategy. When relying on AdSets for split testing, instead of completely stopping the underperforming ones, you can simply redistribute the budget.
This way, most of the budget goes to your top experiments, while the worst ones keep a small AdSet budget of $1 per day, letting you monitor them at a very low cost and see if anything changes in the future.
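To illustrate the idea (this is a hypothetical sketch, not AdEspresso’s actual rules; the `redistribute_budget` helper and the numbers are made up), a proportional reallocation with a $1/day floor could look like this:

```python
def redistribute_budget(results, total_budget, floor=1.0):
    """Reallocate an overall daily budget across AdSets in proportion to
    conversions per dollar, keeping a floor so losers stay observable.
    `results` maps AdSet name -> (spend, conversions)."""
    scores = {name: (conv / spend if spend else 0.0)
              for name, (spend, conv) in results.items()}
    # Reserve the floor for every AdSet, then split the rest by performance
    remaining = total_budget - floor * len(results)
    total_score = sum(scores.values()) or 1.0
    return {name: round(floor + remaining * score / total_score, 2)
            for name, score in scores.items()}

budgets = redistribute_budget(
    {"men_img1": (10.0, 8), "men_img2": (10.0, 2), "women_img1": (10.0, 5)},
    total_budget=15.0,
)
print(budgets)  # the best performer gets most of the $15, the worst keeps ~$2.60
```

The winner ends up with the bulk of the daily spend while the laggards stay alive at a token budget, which matches the strategy described above.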
Again, AdEspresso makes this process very simple, and you can even create automated rules to distribute your budget across AdSets based on their performance:
3) Test and Scale
This strategy is often used for landing pages and emails but not that often in the advertising world. However, it’s very effective and there’s no reason not to use it.
Run your experiments with the minimum budget needed to get reliable data. Once you have a winner for each experiment, create a new campaign with just the winning ads and demographic audiences, and let it spend the rest of your budget.
No matter which strategy you pick, you should always be testing. When you’ve found a winner, while you enjoy your improved performance, try to allocate a small part of your budget to a new campaign that further split tests your Facebook Ads.
Inspiration: Split Test examples from real world Facebook Ads campaigns
Finally here are some examples of Facebook Ads Split tests performed by top companies in their space.
There’s no doubt that split testing is one of the most effective ways to drastically improve your Facebook Ads.
With AdEspresso we spend thousands of dollars every month testing everything, even the smallest changes, to constantly improve our Ads and so far we have achieved improvements up to 5x. And that’s not all. Testing also helps us understand better who our customers are and what they need the most.
What about you? Have you already set in place a continuous split testing process for your Facebook Ads? What was your greatest success? And the result you did not expect?
Share your experience and your questions in the comments!