It’s wonderful just how easy it is to get started with Facebook Ads. In no time at all, you can create campaigns, set your goals, choose a budget, and get your ads in front of thousands of people.
And almost immediately, you’re going to find out what you did wrong.
Your ads aren’t going to be perfect from the start. It might take a while to even get customers to your website, and you might make some expensive mistakes along the way.
Try not to get discouraged. You can learn a lot from failed Facebook Ads if you know what to look for. Even unsuccessful campaigns will give you a ton of valuable information. The whole point is that you can find what’s not working and iterate quickly!
We started a t-shirt company, King’s County Threads, to see whether we could use Facebook Ads to find our perfect audience. No single campaign will tell us if we hit the mark, so we created a framework we could use repeatedly to test whether our idea about the audience for our t-shirt was right.
Here’s the process that we used to set up our first campaign and how we iterated to make the second campaign (and beyond) even better.
The Structure of Our Facebook Ads Experiment
In case you missed our earlier posts (and you don’t want to click on the links), let’s get you up to speed on King’s County Threads, our upstart t-shirt company.
We wanted to grow a business using only Facebook Ads, so we started with an idea that hit close to home for us. The L train, one of the busiest and most iconic subways in New York City, might be shut down for repairs for several years. As a regular L train rider, I’m pretty upset about switching from public transport to Uber for the next few years.
But I’m not the only one. There’s been a steady stream of stories about freaked-out New Yorkers who’d prefer to keep their trains running. People don’t want to picket, but they want some way to express their feelings.
With the L train shutdown looming, we had our idea—a t-shirt combining a familiar slogan with the cause of the moment—and the start of our business. It’s perfect for our experiment, because it’s incredibly easy to set up an e-commerce t-shirt site with minimal overhead and sync it to Facebook. And we had a clear idea of our audience—young New Yorkers in Manhattan and Brooklyn with a sense of humor and a desire to keep the L train running.
We named our company King’s County Threads, after Brooklyn’s official county name, set up a website using Shopify, and designed our shirt.
Measuring Our Goals
Every Facebook Ad needs a clear goal. Without one, you’re sure to waste money. In this case, our goal was to sell shirts. We didn’t want to just rack up impressions and traffic; we wanted to ship product.
Luckily, there’s an easy way to measure your goals in Facebook Ads. Facebook conversion pixels let you track whether people who view your ads take a specified action on your website. Whether you want to track newsletter signups or product purchases, it’s an absolutely necessary tool.
You can set up multiple pixels on different pages of your website that will let you track different goals for different campaigns. In our case, we wanted to measure how many people who clicked on our ads actually bought a shirt.
We created a “purchase” conversion pixel in Facebook, which generated a custom snippet of tracking code for our website.
On most websites, you might need to go into the backend to install your pixel, but e-commerce platforms like Shopify make it easy: it’s pretty much as simple as pasting the conversion pixel into the checkout settings on your dashboard.
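If you’re curious what the pixel event itself looks like, here’s a minimal sketch using the current fbq-based Meta pixel (the snippet Facebook generated for us at the time looked a bit different, and the price shown is hypothetical):

```typescript
// Minimal sketch of a purchase-conversion event with the modern Meta pixel.
// The base pixel snippet must already be installed on the page; the price
// below is hypothetical.
declare const fbq: (
  command: "track",
  event: string,
  params?: Record<string, unknown>
) => void;

// Fire this on the order-confirmation page only, so each sale is counted
// exactly once.
fbq("track", "Purchase", {
  value: 24.0, // hypothetical shirt price
  currency: "USD",
});
```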
A clear goal will answer the most important advertising question: “Am I making more money than I’m spending?” But the point of running multiple campaign iterations isn’t just to measure success or failure. It’s to understand why and how they’re performing the way they are.
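To make the profitability question concrete, here’s the back-of-the-envelope version with entirely hypothetical numbers (only the structure matters, not the figures):

```typescript
// Hypothetical break-even math: none of these numbers come from our real
// margins; they just illustrate the "am I profitable?" question.
const shirtPrice = 24.0;                 // revenue per shirt sold
const costOfGoods = 12.0;                // printing + shipping per shirt
const margin = shirtPrice - costOfGoods; // $12 profit per shirt

const dailyAdSpend = 20.0; // hypothetical daily budget
const salesPerDayToBreakEven = Math.ceil(dailyAdSpend / margin); // 2

console.log(`We'd need ${salesPerDayToBreakEven} sales a day to break even.`);
```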
Choosing the Right Targets and Budget
Our overall goal stayed the same over each campaign, but we’d test changes to different variables based on how our ads performed. Those variables included:
- Ad placement
- Headline and text
- Geographic and interest targeting
- Age group
- Bidding
We could compare variables across different campaigns, but within each individual campaign we could also compare ads with different variables using split tests. We’d use a combination of both approaches in each campaign.
We set a limit of $20/day for our ads to keep our budget from running away from us. Advertising on Facebook is based on a bidding system in which companies compete for the chance to show ads to users, although you never see the process. You can set your bids as high or low as you want, but we initially went with automatic bidding. That meant Facebook would choose bids to optimally get our ads in front of people while staying within our budget.
The “Science” Behind the Plan
Our process isn’t technically scientific. We had a rough idea of our target audience, and we weren’t expecting to get it right the first time, or even the second. It’s all about collecting more data.
In a proper science experiment, or a fair test, you only test one variable at a time to properly measure cause and effect. We changed multiple variables between our first and second campaigns because we wanted broad answers about whether we’d identified the right market for our t-shirts.
Though we were changing multiple variables, we wanted a framework for improving each campaign. We came up with a process that kept the focus on our goal and gave us clear guidelines for our next steps, whether the campaign succeeded or failed.
Everything revolved around our goal: whether we could sell t-shirts profitably using Facebook Ads. If we could, we would stick with the parameters of our campaign, tweaking and optimizing it or testing it with a new audience, for instance a custom audience of people who visited our website.
If our campaign didn’t work, we had a two-step process:
- Analyze. We’d look at the numbers: how our ads performed in different placements, whether metrics like cost per click (CPC) and relevance score were healthy, and how we were performing with different demographics.
- Identify major problems. We might have a few issues, but we’d focus on the most significant ones to determine how we’d revise our next campaign.
Our plan was to run campaigns for a week at a time. With our process in place, if things were working well, we could tweak and refine. If not, we could go back to the drawing board.
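Here’s that weekly loop expressed as a sketch; the types and return strings are just our own framing of the process, not anything from Facebook’s tooling:

```typescript
// A sketch of our weekly decision loop. The shape of the data is our own
// framing of the process above, not a Facebook API.
interface CampaignResults {
  spend: number;
  revenue: number;
  metrics: Record<string, number>; // CPC, CTR, relevance score, frequency...
}

function nextStep(results: CampaignResults): string {
  if (results.revenue > results.spend) {
    // Success: keep the parameters; tweak, optimize, or test a new audience.
    return "iterate: optimize the current campaign or try a custom audience";
  }
  // Failure, step 1: analyze placements, metrics, and demographics.
  // Failure, step 2: identify the biggest problems and revise around them.
  return "revise: analyze the numbers, then fix the most significant problem";
}
```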
When you’re planning your first Facebook campaigns, keep these points in mind:
- Don’t plan to run a single campaign. Especially if you’re testing a product for a new audience, or trying out a new product altogether, treat your campaigns as a series of tests to improve over time. Choose a limited budget and timeframe so you can get quick results that let you iterate often and get closer to your goal.
- Set up a process for when you succeed, as well as when you fail. Keep a clear goal in mind to evaluate success or failure, but make sure that either way you have clear steps for how you’ll proceed at the end of each campaign.
Our First Campaign
We had a lot of questions to answer, so we cast a wide net to get our ads in front of a lot of people, in a lot of places. Our parameters for the ads were:
- Placement: desktop, right column, mobile, Audience Network, and Instagram.
- Geography: every zip code surrounding the L train in Brooklyn and Manhattan
- Interests: Internet humor sites like Lolcat, Fail Book, and The Oatmeal, and the phrase or sign “Keep Calm and Carry On”
- Picture: a single image across all ads
- Bidding: Automatic
We also decided to A/B test two variables, so we had four ads in total (there’s a quick sketch of the 2x2 grid after this list):
- Headline: What would get people’s attention: “The L Train is Shutting Down!” or “Freaking Out About the L Train?”
- Age group: Would our product be more popular with high school and college-age students (16-23) or young professionals (23-29)?
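Two variables with two values each multiply out to a 2x2 grid. A quick sketch, with the data shapes invented for illustration:

```typescript
// Our two A/B variables produce four ad variants (2 x 2).
const headlines = [
  "The L Train is Shutting Down!",
  "Freaking Out About the L Train?",
];
const ageGroups = [
  { label: "students", min: 16, max: 23 },
  { label: "young professionals", min: 23, max: 29 },
];

const ads = headlines.flatMap((headline) =>
  ageGroups.map((age) => ({ headline, age }))
);
console.log(ads.length); // 4
```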
It was our first campaign, so we were hoping to get some quick information on how users in our audience were engaging with our ads that we could build on in subsequent campaigns.
What Were The Results?
Here’s how the numbers for our first campaign looked on the AdEspresso Dashboard.
At first blush, there were some positives. Facebook told us we had an almost 2.6% click-through rate (CTR) and a low cost per click (CPC) of about 10 cents. But we quickly realized there was something wrong.
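As a sanity check, you can roughly work backward from those headline numbers (approximate, since Facebook rounds what it displays):

```typescript
// Roughly reverse-engineering our first campaign's reported metrics.
const spend = 47.03; // total first-campaign spend
const cpc = 0.1;     // ~10 cents per click
const ctr = 0.026;   // ~2.6% click-through rate

// Definitions: CPC = spend / clicks, CTR = clicks / impressions.
const clicks = spend / cpc;       // ~470 clicks
const impressions = clicks / ctr; // ~18,000 impressions
```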
First, we didn’t achieve any conversions; in other words, we didn’t sell any t-shirts. That’s not that surprising for a first campaign, and it wouldn’t have been terrible as long as we had some useful information on how our ads performed in different placements and with different audiences.
Our first campaign failed the first test of our process: it didn’t meet our goal.
That meant we’d start on the first phase of our revision process: analyze the numbers.
Here’s where we bumped into a major problem. Our metrics looked okay, but when we checked on how our ad performed across different placements, we noticed that 99% of our ads went to the Audience Network.
The Audience Network is a series of Facebook ad placements on mobile apps. That means people can see your ads when they’re listening to music, or playing a game on their phone. It opens up all sorts of new mobile venues for your ads with the same audience targeting and results tracking you’d get on Facebook itself.
It gives advertisers a powerful way to expand the reach of their ads on mobile devices, and there’s nothing inherently wrong with getting a lot of our ads placed there. But we wanted to see how our ads would do on different platforms within Facebook! Without anything to compare against, we couldn’t tell whether our ads would be more or less successful depending on where they were placed.
Worse than that, it looked like all our metrics were suspect. Facebook was telling us that we had 298 conversions from our ads, but as we said, there were no sales.
Our A/B tests showed a clear advantage for the younger demographic, but since they were based on the same odd conversion numbers, it seemed like they, too, were suspect.
So, what happened? Why did all of our ads gravitate toward the Audience Network, and why did Facebook report inaccurate conversion numbers?
The first question is easier to answer. Facebook optimizes ad campaigns toward the most successful placement. That can ultimately save you money, or get your ads to a better audience.
But it can also start a feedback loop where Facebook notices one placement is doing better than others, so it devotes more resources to that placement, until your ads are only shown in one place. It’s happened before, and it appears to be what happened to us.
There are two ways to avoid this problem:
- Choose fewer placement options—this is what we tried in our next campaign.
- Use Ad Sets to divide your ads by different placement options (see the sketch after this list). If each Ad Set has the same budget, you’ll get a neutral comparison of how your ad performs across different placements.
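Here’s a sketch of that second option; the placement names and data shape are illustrative, not the Marketing API:

```typescript
// One Ad Set per placement, each with an equal share of the budget, so
// Facebook's optimizer can't funnel everything into a single placement.
const placements = [
  "desktop_newsfeed",
  "right_column",
  "mobile_newsfeed",
  "audience_network",
  "instagram",
];
const totalDailyBudget = 20; // our $20/day cap

const adSets = placements.map((placement) => ({
  placement,
  dailyBudget: totalDailyBudget / placements.length, // $4/day each
}));
```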
As for why we were getting inaccurate conversion numbers, it’s not totally clear what happened. We know our conversion pixel was in the right place, because Shopify made it so easy to put on the checkout page, and we know it tracked accurately in other placements during our second campaign (see below). So we suspect the problem has to do with the Audience Network and how Facebook tracked people who came from mobile apps to our landing page.
We spent $47.03 on our first campaign. It wasn’t a success, and unfortunately it didn’t give us useful numbers we could bring to our next campaign, but it did identify a major problem area. We would incorporate that into our second campaign to see whether we would get more useful takeaways.
If you want to get useful information out of your first campaign like we did, follow these tips:
- Look at the big picture. There are lots of stats you can use to judge the success of a campaign, but if you’re not achieving your primary goal, or the numbers seem wildly off, something’s wrong. Stop the campaign and figure out where the problem is.
- Facebook ad optimization is a double-edged sword. Facebook will try to put your ads in the most effective place for your audience, but that could mean they’ll devote all your resources to one placement or demographic. The best way to avoid this is to divide up variables by Ad Set and assign a budget to each set.
Our Second Campaign
We made some significant changes between our first and second campaigns. We needed to address our major problems from the first campaign, but we also used the time in between campaigns to think more about our ideal audience.
First, to deal with the major problem in our first campaign we changed our placements, limiting our ads to the desktop newsfeed and Facebook’s right column. Ads tend to do better on the newsfeed, where people spend most of their time, but are usually cheaper to place on the right column.
Second, we limited our geographic range. We realized that every zip code along the L train was overly broad, and that our theoretical buyer personas were probably people living in a smaller area of North Brooklyn and Manhattan. We decided to move west along the line and target the Union Square area, the East Village, Williamsburg, and East Williamsburg.
For similar reasons, we changed up our interest targeting as well. We weren’t just looking for people who appreciate a good laugh and live in New York; we wanted people who care about current events and take an active interest in their community. So we shifted our interests to people who liked news sites like the Huffington Post, or who went to graduate schools in New York City.
Facebook’s automatic median bid optimization initially suggested bids for our ads, but our narrower demographic was substantially more expensive: it would have cost us $90 a day to get our original four ads in front of our new targets, which was too much for an early experiment. So instead of going with automatic optimization, we lowered our bid to the minimum amount.
We also decided our headlines weren’t different enough to warrant an A/B test, so we only split-tested our ads based on age. That would also stretch our budget further without sacrificing useful information.
With our more limited targeting range, and smaller number of placements, we felt confident that we’d get more useful information from our second campaign.
What Were The Results?
Here’s how the raw numbers for our first and second campaigns looked on the AdEspresso Dashboard.
First, the good news. We got our first sale! That’s a really modest success, but Facebook correctly reported our single conversion, so we felt a lot more confident that the numbers for our second campaign were accurate.
Looking at our placement information, we also saw the more even distribution across placements that we were expecting.
Our A/B test showed a nearly even distribution of clicks across both age groups, and a higher CTR for the older group. Our sale also came from the older demographic, so all in all it looks like we were right to question the results of our first campaign’s A/B test.
That’s the good news. But what did it cost us to get that one new customer?
We spent $88.24 in total on our second week-long campaign, and even though we sold a shirt, we didn’t hit our goal of selling at a profit. That meant we had to take a deeper look at our numbers and identify what went wrong.
Once we analyzed our metrics more closely, we quickly realized that our ads weren’t reaching the right audience.
Not only did our spending almost double from the first campaign to the second, but our CPC shot up to over a dollar per click, and our relevance score, which measures how relevant an ad is to its target audience, was a mediocre 5.5.
The biggest problem metric was frequency. If someone sees an ad and it doesn’t interest them, seeing it two, three, or more times won’t change their mind. In fact, they might get annoyed, which makes any subsequent ads they see even less effective. It’s the potentially awful downside of the familiarity principle.
Our ad frequency was 7, which means that on average the same people were seeing our ad seven times. That’s not good: in a study of 500 Facebook ad campaigns, ads with a frequency of 7 had a 120% higher CPC and a 40% lower CTR than they did on the first impression.
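Frequency itself is a simple ratio: impressions divided by reach. With hypothetical numbers:

```typescript
// Frequency = impressions / reach (average views per unique person).
// These numbers are hypothetical; a frequency of 7 means the average
// person in our audience saw the ad seven times.
const impressions = 14_000; // total times the ad was shown
const reach = 2_000;        // unique people who saw it
const frequency = impressions / reach; // 7
```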
When a campaign fails like this one did, keep these lessons in mind:
- Certain metrics, like frequency, will tell you very quickly that your ad is not working. If your ads are consistently being shown to the same people over and over, you have a problem with your targeting. At that point, stop the campaign, save your money, and start planning the next one.
- Finding the right audience is a process. When you don’t have any existing customers, it’s going to take time to test your ideas about whether there’s a proper market for your product. Be prepared to regularly iterate on campaigns, and don’t get discouraged by initially poor results.
The Next Steps
After two campaigns, we could have continued to build on the data we learned, especially from the second campaign. There were a lot of variables, like the picture and ad text, that we could test. But it was clear that we had a lot of work to do on our audience targeting.
That’s why we have our process. As long as we stick to it consistently, every new campaign builds on the previous ones, getting us closer to the mark. We know where we want to go.
However, we are changing things up a bit.
Our original t-shirt idea was based on a major local event that affected us, from which we built out an idealized audience. But while these campaigns were running, and New Yorkers were fretting about the L train, a much bigger event was captivating the nation: the presidential primaries.
Unlike our L train shirt, where we were testing the ground for a potential audience, there’s a ton of evidence of a large, preexisting audience that is definitely feeling the Bern.
Since it’s the height of campaign season, we’re refocusing our campaigns around our new shirt to see if we can build on a pre-existing audience and quickly build a successful campaign.
That’s All (for now), Folks! Stay tuned!