Facebook Ads A/B Testing

I have to apologize. When you started reading this guide, you probably hoped we would reveal the secret sauce to Facebook advertising, zero effort required. I'm afraid there's no secret sauce or magic wand. While guides like this can be extremely helpful to get you started, the only way to discover what really works to promote your product is a lot of hard work and testing.
That’s why in this chapter, we’ll discuss how to successfully perform Facebook Ads split testing.
What is an A/B Test?

A/B testing, also called split testing, is a method for finding out which ad headlines, body copy, images, calls to action, or combination of the above works best for your target audience.
Unless you've already created a lot of Facebook Ad campaigns for your product, it'll be pretty hard to predict which ad design will perform best or which demographic will be more likely to buy your product. This is where a thorough Facebook Ads A/B test comes in handy: You can quickly test multiple ad designs and target audiences to uncover the most effective ones.
Let's look at an example, assuming we want to test two different designs for our eBook's lead-generation campaign:
| | Test 1 | Test 2 |
|---|---|---|
| Impressions | 10,000 | 10,000 |
| Clicks (CTR) | 237 (2.37%) | 187 (1.87%) |
| Sales (conversion rate) | 28 (11.81%) | 16 (8.55%) |
| Spent | $150 | $150 |
| Cost per sale | $5.35 | $9.37 (+75.14%) |
You can clearly see the benefit of this A/B test! Ad n. 2 performed much worse than Ad n. 1, resulting in a 75.14% increase in the advertising cost to generate a new sale. To put this in perspective: if you had published only a campaign with Ad n. 2 and let it run on a larger scale with a $2,000 budget, you'd have wasted $858, generating only 213 sales versus the 373 sales you'd have in your pocket using Ad n. 1! This is the beauty of split testing: It saves you big money!
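The arithmetic behind that comparison is worth making explicit. Here's a minimal sketch in Python using the example figures above (the numbers come from the sample test, not from any Facebook API):

```python
def cost_per_sale(spent, sales):
    """Average advertising cost to generate one sale."""
    return spent / sales

# Figures from the example test above.
ad_1 = cost_per_sale(spent=150, sales=28)   # roughly $5.36 per sale
ad_2 = cost_per_sale(spent=150, sales=16)   # roughly $9.38 per sale

# How much more expensive each sale is with the losing ad.
increase_pct = (ad_2 - ad_1) / ad_1 * 100   # ~75% more expensive

# Projecting a $2,000 budget on each ad.
budget = 2000
sales_with_ad_1 = budget / ad_1   # ~373 sales
sales_with_ad_2 = budget / ad_2   # ~213 sales
```

The 75% gap falls straight out of the sales counts (28 / 16 = 1.75), which is why a small, cheap test can tell you so much about how a big budget will perform.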
There are endless opportunities for testing Facebook Ads: Titles, texts, links, images, age, gender, interests, locations, and so on. The one downside? A test can quickly become huge! Let's assume you want to test five images, four ad titles, and five precise-interest targets. To test every possible combination, you'd have to create 5*4*5 ads — a total of 100!
Can you imagine manually creating 100 ads on the Facebook Ads Manager? It would take forever! Even if you did it, you’d still have to deal with an analytics interface not meant for analyzing all of this data!
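That combinatorial explosion is easy to see in code. A quick sketch (the dimension names are made up for illustration):

```python
from itertools import product

# Hypothetical test dimensions: 5 images, 4 titles, 5 precise interests.
images = [f"image_{i}" for i in range(1, 6)]
titles = [f"title_{i}" for i in range(1, 5)]
interests = [f"interest_{i}" for i in range(1, 6)]

# Every ad variation is one combination of the three dimensions.
all_ads = list(product(images, titles, interests))
print(len(all_ads))  # 100 ads to create by hand
```

Add just one more dimension with three variations and you're suddenly at 300 ads, which is why tools that generate the combinations for you exist.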
This leaves you with two options: Either create smaller experiments with four or five variations, OR use an external Facebook Ads manager specifically designed for A/B testing (shameless plug: AdEspresso could be a good solution — and we offer a free 14-day trial!).
Overall, once you start dealing with a budget higher than $2,000 per month, our suggestion is to go for an external Facebook Ads tool that will make your life easier by saving both time and money.
What to test in Facebook Ads and how?

When setting up a Facebook Ads split test, the most critical decision is what you want to test. It's a hard pick, as the options are virtually endless:
Possible ad design tests: Title, image, text, landing page, link description, call to action, placement.
Possible ad targeting tests: Country, city, language, age, gender, relationship status, precise interests, behaviors, advanced demographics (household income, family composition, and so on), custom audiences, mobile devices, school, work, job title, and many more.
Let's start by assuming you don't need to test everything — and you certainly don't need to test everything in the same campaign, as we'll discuss later in the budget section. Particularly if you're on a tight budget, it's critical to pick the right elements to test. Start with the experiments that are likely to have the biggest impact on your ROI.
Let’s define a framework to plan your first few rounds of experiments.
1. Exclude useless experiments: This depends strictly on your business. If you're selling beauty products for women, you don't need to test gender, since you only want to advertise to women. The same applies to country if your company doesn't sell across borders. Before starting a split test, take some time to think about what you already know about your customers and what doesn't need to be tested.
2. Define two macro experiments: Don't overdo it — start with only a couple of experiments. For example, test two or three different pictures and the gender. In your first round of A/B tests, you want to be disruptive: Test very different variations. It's pointless to test the same image with a white versus a grey background, since the impact will be marginal. Instead, test two totally different images, such as one photo of a person and one illustration. You could also highlight different value propositions. The same applies to most of the elements you can test. It's pointless to start by testing age ranges like 18-19 versus 20-21. Instead, try 18-40 versus 41-80.
3. Split test new elements and fine-tune existing ones: Once your first round of Facebook Ads split tests has generated results and you have a winner, there's no time to rest. Now it's time to test new factors, such as interests and titles, with the same disruptive approach, while running more granular experiments on the elements you've already tested. If you saw that people aged 18-40 performed much better, you can now refine the experiment by testing 18-30 versus 31-40.
4. Never stop testing: Once you've found the perfect combination of ad design and target audience… well, it's time to start testing again! Scale the budget on what's working but, at the same time, start testing everything from scratch: totally different images, texts, interests, and so on. Remember, in Facebook advertising the lifespan of a design or target is pretty short. At some point, people will get tired of your ads, and you'll have saturated the audience you're targeting (either because they all became customers or because they're not interested in your product). Always keep testing so that when the moment comes, you'll be ready to scale the budget on a brand-new campaign.
With this framework in mind it’s time to go to the drawing board and plan your first experiments — the sooner you start, the better edge you’ll have over your competitors.
To jumpstart your experiments, we ran a quick analysis of more than $3 million in Facebook Ads A/B tests managed through AdEspresso. Here are the factors whose testing had the biggest impact on customers' campaign performance:
- Precise Interests
- Mobile OS
- Age Ranges
- Relationship Status
- Landing Page
- Interested in
Have fun testing!
Using AdSets for a relevant split test

For an accurate split test, there is one fundamental building block: The distribution of impressions, or budget, must be even across all the experiments you're running.
It's hard to reliably determine which of two pictures works better for you if one received 100,000 impressions and the other 1,000. There would be no statistical relevance: Data about the second picture, with only 1,000 views, would be totally unreliable, and you could not declare a winner.

Unfortunately, this is exactly the behavior you can expect from Facebook Ads: Facebook quickly picks a favorite ad that receives most of the impressions and budget, while all the other ads get too little exposure to be seriously compared. Sounds familiar, doesn't it?
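To see why uneven exposure kills a test, here's a rough sketch of a standard two-proportion z-test (ordinary statistics, nothing Facebook-specific; the numbers are illustrative). With even impressions, a real CTR gap shows up clearly; starve one ad of impressions and the same CTRs no longer prove anything:

```python
from math import sqrt

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Z-score for comparing two click-through rates.
    |z| > 1.96 is roughly 95% confidence that the CTRs really differ."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Even spend: both ads get 10,000 impressions (the earlier example).
even = two_proportion_z(237, 10_000, 187, 10_000)    # |z| > 1.96: real winner

# Skewed spend: same CTRs, but ad B only got 1,000 impressions.
skewed = two_proportion_z(237, 10_000, 19, 1_000)    # |z| < 1.96: inconclusive
```

Same click-through rates, same apparent "winner" — but only the evenly funded test gives you evidence you can act on.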
Fear not! Facebook (partially) listened to the complaints and recently restructured campaigns to make it simpler to run a reliable Facebook Ads split test. With the introduction of AdSets, you can now allocate your advertising budget in a more granular way: Instead of one common budget shared by all the ads in a campaign, you can create groups of ads (called AdSets), each with a dedicated budget. Exactly what we needed!
The results are amazing. Let me show you some data from two identical AdEspresso campaigns, one running A/B tests with multiple AdSets and one without:
Wonderful! Every country had nearly the same budget spent for our test with multiple AdSets. Let’s see the AdEspresso reporting for the same campaign created without AdSets:
This is an awful, totally inconclusive A/B test. The United States alone got more budget than the other three countries combined! France and the UK received such a small share of the budget that they likely generated just a couple of conversions each. That's not enough information to find a winner for our test!
Facebook's best practice would be to create an AdSet for each demographic target you're testing (i.e. one for males 18-25, one for females 18-25, one for males 25-35, and one for females 25-35). While we generally agree, if your main focus is split testing ad designs, we'd suggest creating one AdSet with a dedicated budget for each design element you're testing.
The right budget for Facebook Ads split testing

Let's close this chapter with one of the toughest questions we come across: What's the right budget for running a split test?
The answer? It depends. While in theory you can create a split test of hundreds of ads with as little as $5 per day, this would mean having to wait months before getting results that are statistically relevant.
However, here are some basic suggestions for setting a good budget for your Facebook Ads split tests:
- Create a small campaign, either CPC or oCPM, with no tests, to determine your average Cost Per Conversion.
- Once you have a basic Cost Per Conversion, set up your split test. Ideally, you want to allocate a budget big enough to allow each ad to receive at least 20 conversions. So, if you’re testing five pictures and your cost per conversion is $2, you want to have a budget of $200 ($2*5*20).
- There are several budget constraints you'll need to respect. For example, when bidding on CPC, your AdSet must have a budget of at least 2x the highest ad's bid. In addition, each AdSet must have a budget of at least $1.
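The budget rule of thumb above is simple enough to sketch as a helper. This is a minimal illustration of the "20 conversions per variation" heuristic, with the constraint names made up for clarity:

```python
def split_test_budget(cost_per_conversion, n_variations, min_conversions=20):
    """Minimum total budget so each variation can collect ~20 conversions."""
    return cost_per_conversion * n_variations * min_conversions

# The example from the text: 5 pictures at $2 per conversion.
budget = split_test_budget(cost_per_conversion=2, n_variations=5)
print(budget)  # 200 -> a $200 total budget for the test
```

Double your variation count and the required budget doubles with it — another reason to keep early experiments small.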
Overall, the general suggestion is not to overdo it. It's pointless to create hundreds of experiments inside a single campaign unless you have thousands and thousands of dollars in budget. Start with micro experiments, testing only a few elements, and give them a reasonable budget.
You'll want to start optimizing your campaign fast: It's far more helpful to get quick, reliable results for a few important split tests than to run one long campaign waiting on results for hundreds of different tests all at once.