Facebook Clicks are all Fake! Twitter is full of bots! This woman is where all of your clicks are coming from!
You’ve probably read claims like these all around the web, usually from some disgruntled advertiser who spent hundreds of thousands of dollars on an ad campaign only to find it wasn’t the panacea for all their problems.
We’ve read all these complaints as well, and they just didn’t ring true. We know that Facebook ads can work really well, so whenever we read these articles we’re always skeptical and want to see the numbers behind these campaigns. Of course, advertisers that are failing don’t want to be scrutinized so don’t release their data.
Therefore we went out and got our own.
This week we did what we do best here at AdEspresso—we ran an experiment to test in a data-driven way if Facebook, Twitter, and Linkedin are really all useless and only driving fake clicks.
The results surprised us as much as they’ll surprise you!
Our Experimental Setup
The experiment itself was as simple as it could get. We decided to do the public a service and promote our old and very successful post detailing why you should never buy Facebook Likes.
Our experiment looked like this:
- Channels: 3 social networks—Facebook, Twitter, and Linkedin
- Budget: A maximum of $200 on each social network
- Time Frame: 1 week
- Locations: 2 countries—United States and Canada
- Targeting: Users with a strong interest in social media
- Bidding terms: Default for each network (letting them work their magic)
Each ad used the same image on every channel. Though each network has different text limits, we also aimed to keep the copy as consistent as possible across the channels.
These were the ads we used:
Each ad linked to a Bit.ly URL that redirected to our website, with UTM tags set in the destination URL so that Google Analytics knew where all the traffic was coming from. This way we tracked the clicks through multiple sources and could compare and contrast the metrics at each stage.
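For illustration, building a UTM-tagged destination URL per channel might look like the sketch below. The base URL and campaign name here are placeholders, not the ones we actually used:

```python
from urllib.parse import urlencode

# Placeholder destination -- the real post URL isn't shown here.
BASE_URL = "https://adespresso.com/blog/fake-likes-post/"

def build_utm_url(source, medium="cpc", campaign="fake-clicks-test"):
    """Append UTM tags so Google Analytics can attribute each visit."""
    params = urlencode({
        "utm_source": source,      # e.g. facebook, twitter, linkedin
        "utm_medium": medium,      # paid social traffic
        "utm_campaign": campaign,  # one campaign name shared across channels
    })
    return f"{BASE_URL}?{params}"

for channel in ("facebook", "twitter", "linkedin"):
    print(build_utm_url(channel))
```

Each tagged URL then sits behind its own channel-specific Bit.ly short link, so Bit.ly and Google Analytics each count the clicks independently.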
We ended up with 4 data sources:
- The channel’s metrics—The number of clicks the internal metrics from Facebook, Twitter, and Linkedin said that we received on each ad.
- Bit.ly—The number of times each Facebook, Twitter, or Linkedin specific bit.ly URL was called.
- Google Analytics—The number of visits we had from each of Facebook, Twitter, or Linkedin according to our internal Google dashboard.
- Our Webserver’s logs—The raw data showing each individual visit to our site from Facebook, Twitter, and Linkedin.
This is a small experiment. Across the 3 channels, we only spent $589, with all channels keeping within the $200 maximum. You need to keep in mind 2 things:
- Don’t consider these numbers as an indicator of how well each channel works. This is about fake clicks, not about which channel you should use. We’ll be running plenty of experiments into that in the future.
- Based on the kind of content you’re promoting and your audience, you’ll see very different results. This is a small scale experiment. If we were doing this properly to look into the efficacy of different channels, we would be testing the ad images, copy, and targeting, and we’d put more money in to get more traffic.
OK, enough words, now numbers!
We were looking for the answer to this simple question: Were we flooded by worthless bots and pouring our money down the drain?
No! In fact, we found the opposite.
That’s the simple answer. Though this was a small scale experiment and the results weren’t astronomical, we actually ended up receiving more traffic than we paid for! This is without optimizing our campaign, going deep into targeting, or any further testing.
So let’s start looking at the results.
Impressions & Clickthrough Rates
Let’s start right at the beginning. Approximately the same amount was spent in each channel, so how many impressions and clicks did our ads generate, according to the metrics from each individual channel?
| Source | Impressions | Clicks (Channel Metrics) | CTR | Social Actions |
|---|---|---|---|---|
| Facebook | 27,333 | 213 | 0.78% | 16 Likes, 4 Shares, 11 Comments |
| Twitter | 38,294 | 99 | 0.26% | 10 Retweets, 2 Replies |
| Linkedin | 5,674 | 35 | 0.62% | 7 Social Actions |
We got by far the most impressions from Twitter, accounting for 53.7% of all the impressions our ads received. Facebook received the second most with 38.3% of impressions, and Linkedin was a distant third, with just 8% of all impressions.
However, our Facebook ad received over double the clicks our Twitter ad did, and over 6X the clicks the Linkedin ad got. Facebook therefore had the highest clickthrough rate (CTR) at 0.78%. With its low click count matching its low impressions, Linkedin came second with a CTR of 0.62%, and Twitter was the distant third this time with a CTR of 0.26%.
Though Facebook seemed to have performed well with the highest clickthrough rate, that wasn’t the objective of the experiment. We wanted to see whether those clicks were genuine or fake clicks.
So are all those clicks coming from bots or click farms?
Webserver Traffic & Bots
To find out about all these fake clicks we need to start looking at our own webserver logs and at bit.ly.
| Source | Channel Metrics | Bit.ly Clicks | Webserver Clicks |
|---|---|---|---|
If we chart that data, we can start to see what is going on.
In the case of both Facebook and Twitter, Bit.ly reported more clicks than the internal channel metrics did. What’s more, our own server logs reported even more clicks still! In each case, the server logs showed over 2.5X as many clicks as the internal metrics did.
Is all that extra traffic just bots? Taking a closer look at the server logs, we can see that there are indeed some bots mixed into the normal traffic:
```
22.214.171.124 - - [06/Nov/2015:04:51:25 +0100] "GET /r/LikesFb HTTP/1.1" 200 1156 "-" "bitlybot/3.0 (+http://bit.ly/)"
126.96.36.199 - - [06/Nov/2015:04:51:25 +0100] "GET /r/LikesFb HTTP/1.1" 200 1136 "-" "bitlybot"
188.8.131.52 - - [06/Nov/2015:23:37:54 +0100] "GET /r/LikesFb HTTP/1.1" 302 797 "-" "Mozilla/5.0 (TweetmemeBot/4.0; +http://datasift.com/bot.html) Gecko/20100101 Firefox/31.0"
184.108.40.206 - - [06/Nov/2015:23:37:54 +0100] "GET /r/LikesFb HTTP/1.1" 302 797 "-" "Mozilla/5.0 (TweetmemeBot/4.0; +http://datasift.com/bot.html) Gecko/20100101 Firefox/31.0"
220.127.116.11 - - [07/Nov/2015:00:16:30 +0100] "GET /r/LikesFb HTTP/1.1" 302 841 "-" "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, firstname.lastname@example.org)"
```
OK, bots do exist. We can see Datasift’s TweetmemeBot (which checked out our page 330 times!), Bit.ly’s bitlybot, and rogerbot from our friends at Moz.
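Spotting these is mostly a matter of reading the user-agent field at the end of each log line. As a rough sketch (assuming an Apache-style combined log and a simple signature list, not our actual tooling), a filter could look like this:

```python
import re

# Substrings seen in the user agents of self-identified bots
# (bitlybot, TweetmemeBot, rogerbot all match "bot"); extend as needed.
BOT_SIGNATURES = ("bot", "crawler", "spider")

# In the Apache "combined" format, the user agent is the last quoted field.
LOG_PATTERN = re.compile(r'^(\S+) .* "([^"]*)"$')

def split_bot_traffic(log_lines):
    """Return (human_lines, bot_lines) based on self-identified user agents."""
    humans, bots = [], []
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        user_agent = match.group(2).lower() if match else ""
        if any(sig in user_agent for sig in BOT_SIGNATURES):
            bots.append(line)
        else:
            humans.append(line)
    return humans, bots
```

A filter like this only catches bots polite enough to identify themselves; a malicious bot sending a browser-like user agent would sail straight through, which is why we also looked at IP addresses below.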
We manually checked every single entry in the server logs to identify bots, or at least those bots that are good enough to identify themselves as bots. In total there were 12 individual bots that were crawling our page. It could be that there were more that didn’t identify themselves, malicious bots that were disguising themselves as regular browsers. Once we removed the bots, these were our internal traffic numbers:
| Channel Metrics | Bit.ly | Webserver Clicks | Webserver Clicks (no bots) | Unique IPs | Unique Class C |
|---|---|---|---|---|---|
This brings the traffic down to a more reasonable level. The traffic numbers sans bots closely resemble the bit.ly clicks, and it seems that all the non-bot traffic was coming from unique IP addresses.
This is incredibly important. When we see that the vast majority of our traffic was coming from unique IP addresses it means we can start to be confident that it is real traffic. At least 90% of our traffic is coming from these unique IPs.
We also looked into the unique Class C addresses.
An IP address is made up of 4 numbers separated by dots, for example 192.168.0.10. You can find out your own IP address by simply googling “what’s my IP?” The Class C is the first 3 numbers of that address (192.168.0).
Why do we care about this?
It’s because devices connected to the same router or server, and therefore in the same room or office, are likely to share those first 3 numbers; only the 4th number changes. The first computer will be 192.168.0.1, the second device 192.168.0.2, and so on. These sequential IP addresses are exactly what you’d expect to see from bots or a click farm where numerous computers are connected to the same local network.
This is exactly what we don’t see.
Because most of our traffic was coming from different IPs on different Class C networks, we can be extremely confident that this was natural traffic. The chance that those were fake clicks coming from bots or click farms is incredibly thin.
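The whole check can be sketched in a few lines (a simplified illustration, not the script we actually used):

```python
def class_c(ip):
    """First three octets of an IPv4 address, e.g. 192.168.0.10 -> 192.168.0."""
    return ".".join(ip.split(".")[:3])

def uniqueness_summary(ips):
    """Compare unique IPs against unique Class C networks.

    A click farm on one local network shares a Class C, so its unique-IP
    count stays high while its Class C count collapses."""
    unique_ips = set(ips)
    unique_class_c = {class_c(ip) for ip in unique_ips}
    return len(unique_ips), len(unique_class_c)

# Organic-looking traffic: visitors on entirely different networks.
print(uniqueness_summary(["10.1.1.5", "172.16.4.9", "203.0.113.7"]))      # (3, 3)
# Click-farm-looking traffic: sequential addresses on one network.
print(uniqueness_summary(["192.168.0.1", "192.168.0.2", "192.168.0.3"]))  # (3, 1)
```

Our logs looked like the first case: the Class C count tracked the unique-IP count closely, not the collapsed pattern a click farm would produce.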
A constant complaint we hear is that the numbers in Google Analytics are much lower than what Facebook reports. But that isn’t what we found in our experiment.
Here are those numbers again:
| Channel Metrics | GA Sessions | GA Bounce Rate | GA Time on Site |
|---|---|---|---|
With both Facebook and Twitter we actually saw more traffic in Google Analytics than in the internal channel metrics. With Facebook we saw 60% more traffic in Google Analytics than on Facebook itself. Only with Linkedin did we see fewer Google Analytics sessions than clicks reported in the internal metrics.
With the sole exception of LinkedIn, Google Analytics is reporting more visitors than the advertising platforms are. This is the exact opposite of what everyone thinks happens. There is supposed to be some great fraud going on, but as this data shows, if anything Facebook and the other social advertising channels are under-reporting the amount of genuine traffic they are sending your way.
The bounce rates aren’t quite as low as we’re used to, but they’re not terrible either, and they’re definitely good for an unoptimized ad pointing at a cold blog post.
What’s more, these visitors aren’t leaving in just a few seconds as you’d expect if they were fake clicks. The average time on site from Facebook and Twitter was over a minute (1:26), suggesting that people were actually reading the post.
So why are all our numbers—bit.ly, our webserver, Google Analytics—so much higher than those reported by the internal channel metrics?
Because these channels are already doing a good job of filtering out the bots. All the bots we found crawling through our webserver logs had already been discounted by the social advertising metrics. They’re not perfect, but they understand that people don’t want to count bots and fake clicks as genuine traffic, so they strip them from their numbers.
You, as the advertiser, are not being charged for what Facebook, Twitter, and Linkedin identify as fake clicks and bots.
However, that’s not the end of the story. Bots are usually not counted by Google Analytics, yet that system is reporting more sessions than Facebook.
Amazing things happen when you share great content
The numbers you see in Facebook, Twitter, and Linkedin count only the clicks generated directly by the ads: the clicks you’re actually paying for.
But once you post content to a social network suddenly its reach grows far beyond that initial ad. If the ad is good, and as importantly, if the content is good, people start to like the ad, comment on it, and share it.
The story we promoted, a study on fake likes, is one of our top-performing blog posts, and users on each of the social networks engaged with it, sharing it with their friends and around their networks, generating organic, free traffic.
How do we know this? From this graph:
Even after the campaign was over, when there were no more ads out there on any of the social networks, Facebook and Twitter still sent 75 more visitors to the page. These are users that saw it shared earlier, bookmarked it and wanted to read it later.
You Don’t Want Clicks, You Want Conversions
All of these people complaining about fake clicks are missing the point. You shouldn’t care about clicks, fake or otherwise. Even the most genuine of clicks are useless if they don’t convert.
You should be using social advertising to grow your business, and you can only do that if the people who visit your site through these sites convert to customers.
So did these social ads lead to conversions?
| Source | Channel Metrics | Newsletter Signups | eBook Downloads | AdEspresso Trials |
|---|---|---|---|---|
Facebook was the most effective channel, with over 15% of all visitors converting: signing up to the newsletter, downloading an eBook, or, in 4 instances, starting a trial with AdEspresso. That’s 34 people moved further down our funnel from a simple, unoptimized Facebook ad campaign.
Remember, we weren’t promoting AdEspresso, or driving traffic to a landing page—the visitors were going simply to a blog post. From there, a total of 10 visitors went further and downloaded one of our eBooks, and a total of 28 ended up as newsletter subscribers.
This is what is important. Not fake clicks, not any clicks—just conversions.
This experiment shows 1 thing conclusively:
Fake clicks are not the dramatic problem people claim.
Yes, when you use social advertising, some bots or fake clicks will get through the net and you’ll end up paying for some of these. But it is nowhere near as big an issue as all the conspiracy theorists will have you believe.
So why do they perpetuate this myth? In some cases it’s because they don’t really understand how this advertising works, have made mistakes, and want to blame anyone but themselves. Other commentators know exactly what they are doing and are using the issue to drive traffic to their own posts telling you how terrible the problem is.
You shouldn’t care about fake clicks. For one, they are not the problem people think they are. For another, what you should really care about is conversions and return on investment (ROI). If the visitors coming through a channel are converting and you’re making money from that channel, then it’s working. If not, whether that’s because of fake clicks, bad ads, or inefficient targeting, you need to either improve your campaign or leave the channel.
If it works, pump more money into that channel; if not, find another channel that will work for you. But don’t blame it on fake clicks!
The AdEspresso University
This is the end of our first AdEspresso University study. Each month at AdEspresso University we will run experiments looking at all aspects of social marketing, from whether your clicks are fake, to how often you should run your campaigns, to what bid to use.
The results will be published at AdEspresso University 1 month before they appear on this blog, giving University members advance access to the best online resources for improving all their social marketing.
Every month we’ll spend $1,000 to run these tests and find answers. We’ll partner with one of our University members, running the tests with them, giving advice, and promoting their business.
For only $19 per month, AdEspresso University members will get early access to all these experiments, along with a plethora of courses, tools, and examples to boost their social marketing and grow their business.