No matter how well you know your audience, they'll always surprise you. However much research you conduct, however well you know your people, there are always places you haven’t looked. It’s a science built on nuance, and there’s room for improvement on almost any campaign. This is where A/B and split testing comes in.
Being able to test elements in real time is like having a heart monitor. It lets you see exactly how you’re doing on a moment-to-moment basis, giving you a heads up when it’s time to stop doing X and start doing Y. Do your subject lines perform better with or without emojis? Are your landing pages too copy-heavy or just right? What generates more engagement on your social media channels: videos or static posts?
Let the numbers be your guide. By relying on A/B testing to let your audience answer these questions, you can create immensely impactful, data-led ad creative.
A/B testing is often referred to as "split" testing; its close cousin, multivariate testing, compares several variables at once rather than two versions of one. A/B testing is the method of comparing two versions of a marketing element, like a landing page or app layout, to see which performs better.
Your audience is split at random. One segment sees one design, while the other sees a variation. Based on the metrics you’re using to measure success (click-through rate, bounce rate, time spent on site, etc.), you can determine which option is better suited to your audience.
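As a minimal illustration, a stable 50/50 split can be implemented by hashing each user ID, so the same visitor always sees the same variant. This Python sketch is hypothetical (the experiment name, function name and IDs are invented, not from any particular testing tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically bucket a user into variant "A" or "B".

    Hashing the user ID, salted with the experiment name, yields a
    stable, roughly even split: the same user always gets the same
    variant, and different experiments split the audience differently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket:
assert assign_variant("user-42") == assign_variant("user-42")
```

In practice, most testing tools handle assignment for you; the point is that the split should be random with respect to anything that matters, yet consistent per user.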
It’s important to note that while everything can be A/B tested, not everything needs to be A/B tested. A/B tests are best used when you feel there are two approaches that could work, and you’d like data to show you which is better.
Note that A/B and split testing shouldn’t be used to settle matters of pure aesthetic preference, such as a background color or font choice you don’t expect to move a measurable metric.
A/B tests and split tests are essentially the same thing, and the terms are often used interchangeably. ‘A/B’ refers to the elements being tested, while ‘split’ refers to the audience.
Once your test has reached the required sample size or predetermined time frame, the results garnered from the audience split will indicate whether element A or element B was preferred.
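Before launching, you can estimate the sample size you’ll need to detect a meaningful difference. Here’s a rough Python sketch using the standard two-proportion power calculation; the baseline rate and minimum detectable lift below are invented numbers for illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per variant to detect an absolute lift of `mde`
    over the baseline conversion rate, with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84
    p1, p2 = p_baseline, p_baseline + mde
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return ceil(n)

# E.g., to detect a lift from a 20% to a 25% click-through rate:
print(sample_size_per_variant(0.20, 0.05))  # → about 1,100 users per variant
```

The takeaway: the smaller the difference you want to detect, the larger the audience (or the longer the test) you need before calling a winner.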
A/B and split tests are easy to conduct. They require little to no investment, and offer real-time results. Apart from leveraging data to guide business decisions, A/B tests provide a number of advantages, including:
You can A/B test virtually anything in your marketing ad creative catalog, from your homepage’s look and feel to the color of your CTA. While access to this kind of granular data is helpful, try not to A/B test for the sake of testing.
The key is to ensure that:
A/B testing is something that should be used on an ongoing basis. Provided you know how to interpret the data, the insights can be invaluable.
Here’s a simple example. Suppose you want to run a promotion, and you’re planning to send a number of emails to your existing customers. You want this promo to be seen by as many people as possible, so a strong subject line is critical. You decide to A/B test two possible approaches:
Subject line A: → Don’t miss these MASSIVE SAVINGS!! ←
Subject line B: We’re running a promotion, and we think you’ll love it.
In this instance, the test is based on sound reasoning. You want to use the most effective subject line formula for your promotional campaign, and the two options have clear differences.
By splitting your audience and sending each segment a different subject line, you can measure open rates in real time. You can see which subject line has a better open rate after one hour (telling you which drives more urgency), and check back in over two or three days (telling you which was more successful in general). Based on these results, you'll know which of the two lines resonates with your audience, and can use that subject line approach for the balance of the emails.
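To check that an observed gap in open rates is more than noise, a simple two-proportion z-test does the job. This Python sketch is hypothetical (the function name and counts are invented for illustration):

```python
from math import sqrt, erf

def open_rate_significance(opens_a: int, sent_a: int,
                           opens_b: int, sent_b: int):
    """Two-proportion z-test: is the difference in open rates between
    subject lines A and B likely to be real?
    Returns (difference in rates, two-sided p-value)."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical results: A opened 230 of 1,000 sends, B 180 of 1,000.
diff, p = open_rate_significance(230, 1000, 180, 1000)
# p lands well below 0.05, so the 5-point gap is unlikely to be chance.
```

If the p-value is large, treat the test as inconclusive rather than crowning a winner.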
You can’t put your brand to the test without creative to test with. And since creative quality is the most important variable in marketing performance, marketers simply cannot fall short in design. The digital market is too saturated, and powerful businesses are too easy to scroll past, to put anything less than a full tank into your creative engine.
This is where even the most ambitious brands often fall short. They know their brand, and they know how to read the results of each test. But proper testing calls for different creative in each iteration—twice as much creative must be developed to get a proper read on performance. Without variation in the assets, you’re essentially A/B testing the same thing.
Dedicated, always-on design teams are the silver bullet here. Veteran design stewards like those at Superside have a storied history of getting brands the creative they need without adding heads or exploding budgets, leaving creative marketers to focus on strategy.
PointCard, a fintech with an eye on becoming the standard payment platform for everyday purchasing, found this out firsthand: They watched their CTR more than triple on social ads.
Within a few months of their partnership, they used their design support team to create nearly 30 creative ad concepts and turn them into 441 static and motion social media ads, resulting in a 240% increase in CTR and click-to-install conversion rate growth of 275%—nearly 90% above their campaign targets.
That’s the power of bountiful, brand-forward creative. When the creative marketer’s pocket is never empty, they can finally find their brand’s sweet spot.
Most A/B tests follow the steps below. However, you can modify any of these steps to suit your business' needs. The length of the test or the audience segments are common variables.
An effective A/B test should produce usable results. This requires knowing where missed opportunities or customer pain points may exist. A good example of this is a product page: How big should the image be? Where's the pricing located, and what does the CTA look like? A product page built to fit its audience makes for a smoother shopping experience, and that makes for higher sales.
When designing or building your test elements, see that both have their own potential appeal. Creating a variation simply because you need something to test your original design against will give you unhelpful, uninformative data. In the product page example, this might look like a layout with a hero image versus three smaller images of equal weight.
Don’t forget to run the A/B test with your site designs optimized for mobile as well as desktop.
How long do you want to run your test, and how will you measure the results? You may also decide to test using only a percentage of your total audience. For example, you could use 20% of your audience split into two groups, allowing you to serve the remaining 80% of your audience the content that performed better.
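That 20/80 approach can be sketched in a few lines of Python; the audience, fraction and function name here are placeholders:

```python
import random

def holdout_split(audience, test_fraction: float = 0.2, seed: int = 7):
    """Reserve `test_fraction` of the audience for the A/B test,
    split evenly between variants; the rest wait for the winner."""
    users = list(audience)
    random.Random(seed).shuffle(users)  # seeded for reproducibility
    n_test = int(len(users) * test_fraction)
    test_users, rollout_users = users[:n_test], users[n_test:]
    half = n_test // 2
    return test_users[:half], test_users[half:], rollout_users

group_a, group_b, remainder = holdout_split(range(10_000))
# 1,000 users per variant; 8,000 held back for the winning version.
```

Once the test concludes, the `remainder` segment gets whichever variant performed better.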
For a product page A/B test, the measurable metrics could be bounce rate, abandoned carts and completed transactions, monitored over a one- or two-week period.
Depending on which testing tool you use (more on those below), you'll be able to view the data and results as they happen. Along with whatever metrics you already have in place, this data will also include the number of customer engagements, the drop-off rate, and the percentage of respondents still to act.
This is the exciting part. Compare like-for-like results for both variations and see how they stack up. Sometimes the results might be very similar, and in these cases, it's often worth repeating the test rather than trying to uncover insights where there may be none.
Rounding off the product page example, it could be possible that Design A ranked higher in completed transactions and had lower bounce rates, but was outperformed by Design B in your abandoned cart metric. Use this information to analyze your designs, and make an educated guess as to why that might be.
For truly excellent ad creative based on informative, involved split tests, let dedicated design partners fulfill your design requirements. Get the assets you need when you need them, providing any ad creative feedback and input with ease.
The point of conducting an A/B test is, in part, to determine which element or feature better serves the brand. But there's also significant insight gleaned from understanding why your customers behaved the way they did. Think about how to use that data to make informed decisions going forward. Without getting too myopic (which can lead to manifesting "insights" that don't exist), spend time on the results and make sure you can use as much of the feedback as possible.
There are a number of A/B testing tools available. Google offers a free marketing platform called Optimize, designed for small and medium-sized businesses. Among a host of website assistance features, Optimize enables businesses to test site and app variations.
Full-featured paid tools include VWO, a leading A/B and split testing platform. VWO allows businesses to A/B test almost anything and measure the impact across all platforms.
Optimizely (no relation) is another excellent tool, able to test and optimize every single customer touchpoint across any customer journey.
Despite its fairly simple structure, there are a few mistakes you can make when A/B testing. To help you avoid the most common ones, we’ve listed them below.
An email A/B test is a great illustration of this mistake. If both the subject line and the email content are A/B tested concurrently, the results will be muddled, completely invalidating the test and making it impossible to ascertain which element influenced which decision.
Keep your tests simple.
Testing for the wrong amount of time or with the wrong audience is equally destructive. Ensure that you’re measuring your A/B elements using relevant metrics. Don’t measure a landing page by revenue generated if that page doesn’t feature a sales option, for instance.
This seems obvious, but occasionally a test will indicate something that contradicts what was, in your mind, a certainty. You might be tempted to back yourself and go against the data, but this is rarely the right call. Data is unbiased. Cherry-picking when to ignore data limits the success of any A/B test to your own perception.
Before running an A/B test, especially a long one, assess the effect that any external factors could have (holidays, competitor movements, etc.). Unless significant external factors are accounted for, results can easily be skewed or misinterpreted.
Often the insights gained from an A/B test can create the need for further testing. Full optimization is a long-term goal, and requires combining multiple tests with a well thought-out strategy.
To ensure you have testable social media content that resonates with your audience, let Superside handle your social media design and ad creative.
The streaming service regularly runs a massive number of A/B tests with a sample size of about 100,000 users. Personalization is the name of the game, and this rigorous and highly structured series of tests allows Netflix to present a platform that looks completely different to each user.
This non-profit organization wanted to increase donations from first-time users, and believed that adding more information to their landing page would do the trick. The A/B test proved that hypothesis correct, with the more information-rich landing page seeing an 11.5% increase in donations.
When this European apparel brand wanted to increase the average revenue per user, it A/B tested different homepage designs. The test showed that the version that followed best practice and kept product images to a uniform size increased the average revenue per user by over 17%.
Before you start using A/B and split testing to start making better calls, familiarize yourself with the following best practices:
David is a Senior Content Marketing Specialist at Superside. A former journalist with bylines too numerous to enumerate, he brings his love of storytelling and semantics to the marketing world. Recognizing the sizable gaps in the creative-as-a-service (CaaS) sector, he jumped at the chance to fill the creative void for ambitious brands. In his off hours, he enjoys loud music, making vegan meals and being made fun of for making vegan meals. He’ll gladly talk to you about any of the above on LinkedIn.