Seasoned advertisers understand the importance of being able to test and implement different variations of their ads. Simple A/B tests can be crucial in building performance comparisons and revealing what is and isn’t working for an advertising campaign.
The methodology is no different on Facebook. Advertising on the platform is an intricate affair that can produce a compelling return when strong strategies are implemented. As such, conducting A/B tests is a common practice among many Facebook advertisers.
But until recently, running tests on Facebook was a complicated and convoluted process. It was possible to run A/B tests, but doing so came with major limitations, the biggest being the inability to prevent audience overlap.
This overlap meant that test results might not be conclusive. Now, with the Facebook Split Testing feature, advertisers have a much more refined and robust means of experimenting with different components of their ads.
What is Facebook Split Testing?
Facebook Split Testing helps advertisers better understand how various facets of an ad affect overall campaign performance.
Similar to A/B testing, Facebook Split Testing gives advertisers on the platform the ability to implement different versions of their ads to see what works best and how to optimize campaigns for better results moving forward.
The feature is still relatively new and is going through a gradual roll-out, meaning it's not available to everyone just yet. Additionally, not all objectives are supported. Facebook Split Testing can currently be used with the following business objectives:
- Website conversions
- Mobile app installs
- Lead generation
If your campaigns do not use any of these objectives, or you simply do not have the feature available yet, you'll need to wait until testing is accessible to all accounts.
The feature’s main appeal
Facebook Split Testing has the potential to be a really great tool for advertisers. While it’s similar to A/B testing many are already accustomed to, the value Split Testing offers shouldn’t be overlooked.
“Previously, if you wanted to test anything, you had to either create different ad sets that had the same audience in them or create entirely separate ads, and there was no way of being certain of any existing overlap,” said Sarah Rogers, Senior Social Strategist at CPC Strategy.
This overlap was a major flaw in A/B testing on the platform. There was no way to ensure a single user didn't appear in both test audiences, and it was difficult to confirm that the different ads were shown in equal amounts. That, in turn, could significantly skew your results. Facebook Split Testing fixes this.
“Now, instead of having to test multiple groups against different sets of creative or something along those lines, you're literally splitting an audience in half. The users in your groups are completely different people and there's no overlap whatsoever. It's a true A/B test,” Rogers continued.
The ability to definitively split audiences in such a way is a capability the platform has sorely been lacking, and one advertisers should be welcoming with open arms.
How Facebook Split Testing Works
After creating multiple ad sets, the Facebook Split Testing feature allows you to run them simultaneously to see what strategies produce the best performance results.
The testing process is carried out by doing the following:
- Your audience will be divided into random, non-overlapping groups who are shown ad sets with identical creative. This randomization ensures the test is conducted fairly, gives each ad set an equal chance in the auction, and doesn’t allow other factors to skew the results of the group comparison.
- Each ad set tested has one distinct variable differentiating it from the others. These variables can be different audience types, placements, or delivery optimizations. Facebook then duplicates the ads and changes only the specified variable. To ensure the most accurate results, only one variable can be tested at a time.
- Split Testing is based on people, not cookies, and gathers results across multiple devices.
- Performance of each ad set is measured according to your campaign objective. After the test is complete, the results are recorded, compared, and sent to you for review.
- Advertisers can split test up to 3 different ad sets (an A/B/C test) within a single campaign.
- The minimum budget varies with the number and size of splits (generally around $1,500), and campaign length can be anywhere from 3 to 14 days.
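Facebook doesn't publish the exact mechanism it uses to divide audiences, but the key property described above, random, non-overlapping, people-based groups, can be illustrated with a simple hash-based assignment. The function and user IDs below are hypothetical; this is only a sketch of how deterministic bucketing keeps every person in exactly one group across all of their devices.

```python
import hashlib

def assign_split(user_id: str, num_splits: int = 2, salt: str = "study-1") -> int:
    """Deterministically assign a user to one split group.

    Hashing a salted user ID means the same person always lands in the
    same group (people-based, not cookie-based), and because each ID maps
    to exactly one bucket, the groups can never overlap.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_splits

# Partition a hypothetical audience into three non-overlapping groups (A/B/C).
audience = [f"user-{i}" for i in range(9000)]
groups = {g: [] for g in range(3)}
for user in audience:
    groups[assign_split(user, num_splits=3)].append(user)

# Every user appears in exactly one group, so the groups cover the
# audience with no overlap.
assert sum(len(g) for g in groups.values()) == len(audience)
```

Because assignment depends only on the hashed ID, re-running the split (or looking the same user up on a different device) always yields the same group, which is what makes the comparison between ad sets fair.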
“The new testing process is going to be very helpful in driving meaningful results. Now, you can thoroughly test a lot of things out. The parts that are going to affect your CPCs, your return on spend, everything that's important to you – it's all going to be tied to concrete results now,” Rogers added.
Another look at testing variables
As previously mentioned, target audience, delivery optimization, and placements are currently the only variables that can be tested against each other. Within a single campaign, only one variable can be tested at a time.
Audiences: Facebook Split Testing is only available for saved audiences. If you do not currently have any saved audiences, you will need to set those up before you can begin split testing.
- Example: testing saved audience (A) versus saved audience (B). This can include the likes of different custom audiences, different lookalike audiences, and different interest-based audiences.
Delivery optimizations: with a single ad, your optimization choice specifies what Facebook should value most when delivering it. You could, for example, optimize for link clicks in one ad set and conversions in another.
- Example: optimizing for conversions with a 1-day conversion window (A) versus optimizing for conversions with a 7-day conversion window (B) versus optimizing for link clicks (C). Optimization is going to be one of the biggest variables you can test. Whether it's website conversion versus link click optimization, or any other combination, Split Testing is going to make finding out which optimizations work best much easier.
Placements: lastly, you can run split tests on placements. You can select automatic placements or customize placements to define where your ads will appear. You can test two custom placements against one another, but Facebook advises against doing so.
- Example: automatic placement (A) versus custom placement (B). The results of this variable are going to be pretty straightforward, but enormously important nonetheless.
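Facebook reports split-test results for you, but advertisers often want to sanity-check whether the gap between two ad sets is statistically meaningful. A standard way to do that with exported conversion counts is a two-proportion z-test. The numbers below are hypothetical, and this is a generic statistical sketch rather than anything Facebook provides.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test comparing conversion rates
    from two non-overlapping split groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split results: 1-day window (A) vs. 7-day window (B),
# each shown to 4,500 people.
z, p = two_proportion_z(conv_a=180, n_a=4500, conv_b=240, n_b=4500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value is below your chosen threshold (0.05 is conventional), the difference between the two ad sets is unlikely to be noise, and the winning variable is worth rolling out.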
“A big test we currently do across a few different clients is seeing what they're optimizing for,” Rogers explained.
“Let's say we want to look at optimizing for a one-day conversion window versus a seven-day conversion window and see which has the better return.”
“We can run A/B tests similar to this, but they may not be as conclusive as they could be because there’s no way of definitively knowing what the overlap is. We can gather the results from these tests and put together a pretty good idea of how well they’re performing, but there is room for inaccuracies.”
“With Facebook Split Testing, I think it’s going to provide more conclusive results. I think it will be very valuable for us and our clients in helping to guide what we can learn about existing campaigns.”
For more information on Facebook Split Testing, please contact email@example.com