A/B testing, or split testing, is a method for determining which ad creatives are most effective for your business. By comparing two different creatives and tracking user interactions, you can statistically determine which one is best.
Unfortunately, many businesses use A/B testing incorrectly. A poorly run test is, at best, throwing money away, and could do significant harm if it leads you to inadvertently select a poorer performing creative.
We've collected five of our most essential A/B testing best practices to ensure you get maximum value from every test you run:
1. Test for Conversion Rate Rather Than Clicks Where Possible
Many businesses make the mistake of testing for clicks, believing that the creative that attracts the most attention must be the best. This is often not the case.
In a rigorous study of more than 263 million display impressions, comScore found that clicks had a very low correlation (0.01) with conversion. This result means that there is almost no association between the two variables.
One creative might attract a high number of people to click on it, but if the headline and message are out of sync with the offer, it could make few sales. Another creative that accurately communicates the value proposition may attract fewer clicks, but those who do click will convert at a much higher rate.
A great CTR is good, but sales are more important. In many cases, you'd be better off testing conversion rate over click-through rate.
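To see whether a difference in conversion rate between two creatives is real or just noise, you can run a standard two-proportion z-test. Here is a minimal sketch with made-up numbers (the figures and the 95% threshold are illustrative, not from any real campaign):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of two creatives.

    Returns the z-statistic; |z| > 1.96 indicates the difference is
    significant at roughly the 95% confidence level.
    """
    p_a = conv_a / n_a                          # conversion rate, creative A
    p_b = conv_b / n_b                          # conversion rate, creative B
    p_pool = (conv_a + conv_b) / (n_a + n_b)    # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: creative B converts 4.5% vs. A's 3.0% on 4,000
# impressions each.
z = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=180, n_b=4000)
print(f"z = {z:.2f}")  # |z| > 1.96, so B's higher rate is significant
```

The same test works for CTR, of course; the point is simply to run it on the metric that matters to your bottom line.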
2. Have a Clear Hypothesis for Your Test
It's tempting to design several wildly different ads and then test them against each other to see which works best. This is great in the initial stages when you're searching for ideas - but only up to a point. When you test two completely different creatives against each other, you can see which works best, but you won't know why - and it's the why that you really want. Understanding why one ad beats another allows you to apply those lessons to other ads aimed at the same audience.
For example, the conclusion 'creative B converts 35 percent better than creative A' is useful, but the conclusion 'creative B converts 35 percent better than creative A because we added interactivity' is even better. With the second statement, you've now got significant evidence that the audience you're targeting prefers interactive ads and you can use that information with other creatives.
To achieve this type of conclusion, you need to start with a hypothesis that tests one factor. For example, in the previous example, our hypothesis would be 'adding interactivity will increase conversion rate.'
This is far more powerful than the hypothesis 'adding interactivity, changing the image and revising the copy will increase conversion rate.' If it works, so what? You've made an improvement, but you haven't really learned anything because you don't know which change contributed to the increased conversion rate.
3. Test the Most Important Elements First
Google might have tested 41 different shades of blue for its links, but that doesn't mean you should focus on such small details. Unless you're a billion-dollar corporation, it's likely you have a restricted budget, which means you need to test the changes that have the potential to make the biggest difference first.
In a typical ad, these will be:
• Colors and Images - Does your ad perform better with a photo, graphic or just a plain background? Do your customers react better to seeing the image of a man or a woman?
• Value Proposition - How does varying the copy in your text affect your CTR or conversion rate?
• Call to Action - Would customers rather 'Sign Up,' 'Take Action' or 'Buy Now'?
Only once you've optimized these areas should you move on to smaller details.
4. Test Your Customers, Not Just Your Creatives
If you're advertising on a platform like Facebook, you have the option to narrow down the audience for your ad. This is almost always a good idea, as it ensures your ad spend goes on individuals who are your target market.
However, even within your target market, there will be variations, and individual ads will perform better for some customers than for others. You can test different countries, interests, age ranges, relationship statuses, genders and much more.
There's probably no point showing an ad that only converts 20-something mothers to women aged 60 or over - you're just throwing away money. Even if they click, they're unlikely to make a purchase.
For example, quirky brand Cath Kidston sells fashion, beauty and lifestyle products to women of all ages. But a creative advertising their range of changing bags is likely to do best when targeted at just a section of their market - mothers and women who are expecting. Of course, this is just an assumption - which an effective A/B test will prove (or disprove).
Remember: test the same creative against two different demographics, or two different creatives against the same demographic, but never vary both at once. Changing only one variable per test makes the result quick and easy to interpret.
5. Can't Test Everything? Artificial Intelligence Can Help
The biggest problem with A/B testing is that you need a lot of samples. This is prohibitive on both your time and budget and encourages marketers to take shortcuts. New advances in artificial intelligence mean predictive tools can be used to determine the likely success of an ad before you've spent a single dollar on advertising, allowing you to test more creatives, faster.
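To illustrate why traditional A/B testing demands so many samples, here is a sketch of a common sample-size approximation for a two-proportion test (the baseline rate and lift below are purely illustrative):

```python
import math

def samples_per_variant(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate impressions needed per variant to detect a relative
    lift in conversion rate at 95% confidence and 80% power."""
    p_new = p_base * (1 + lift)
    delta = p_new - p_base
    # Standard two-proportion sample-size approximation
    se_term = (z_alpha * math.sqrt(2 * p_base * (1 - p_base))
               + z_beta * math.sqrt(p_base * (1 - p_base)
                                    + p_new * (1 - p_new)))
    return math.ceil(se_term ** 2 / delta ** 2)

# Detecting a 10% relative lift on a 2% baseline conversion rate
# requires tens of thousands of impressions per variant.
n = samples_per_variant(p_base=0.02, lift=0.10)
print(n)
```

Multiply that figure by the number of creatives you want to compare and the cost of exhaustive testing becomes clear, which is why predictive ranking before spend is attractive.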
ReFUEL4, in particular, enables your business to pick the right creative and launch an ad campaign quickly. Quality creatives are produced by a global network of thousands of designers, analyzed by our advanced AI and ranked according to predicted ad performance. This spares you the time and cost of testing every creative, so you can focus on the ones most likely to yield a higher ROI based on your campaign's unique objectives and targeting.