13 Email A/B Testing Mistakes that Limit Your Success

“You have to test, otherwise you are just making an educated guess,” said Stuart Clark of Red C at Litmus Live London 2017. Email A/B testing was a recurring topic at all three Litmus Live events last year, as well as this year. That’s no surprise because it’s one of the most powerful opportunities for email marketers to iterate and improve their campaigns.

A/B testing can help you uncover high-impact changes in your subject lines, email design, landing pages, and more. How big can the gains be?

“Ecommerce brands that A/B test their emails generate 20% more revenue on average.”
—Alex Kelly (MailChimp) at Litmus Live 2017


To A/B test your emails, all you need to do is present two variations to two different groups of your subscribers and then listen. The metrics will tell you their preference, which you can then act on either in the short term by sending the winning variation to more of your audience, or in the long term by applying that preference to future campaigns.
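If it helps to see the mechanics, here's a minimal Python sketch of that split, assuming nothing more than a plain list of subscriber email addresses. The function name, the 20% test fraction, and the 50/50 split are illustrative, not tied to any particular email service provider.

```python
import random

def split_ab_groups(subscribers, test_fraction=0.2, seed=42):
    """Randomly split part of the list into two equal test groups.

    `subscribers` is assumed to be a list of email addresses. The rest of
    the list is held back to receive the winning variation later.
    """
    rng = random.Random(seed)        # fixed seed makes the split reproducible
    shuffled = list(subscribers)
    rng.shuffle(shuffled)

    half = int(len(shuffled) * test_fraction) // 2
    group_a = shuffled[:half]              # receives variation A
    group_b = shuffled[half:2 * half]      # receives variation B
    remainder = shuffled[2 * half:]        # receives the winner afterward
    return group_a, group_b, remainder

# Toy example with a made-up list of subscribers.
subscribers = [f"user{i}@example.com" for i in range(10000)]
a, b, rest = split_ab_groups(subscribers)
print(len(a), len(b), len(rest))   # 1000 1000 8000
```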

That sounds simple—and to a degree, it is, thanks to email service providers and testing software providers rolling out functionality that makes email A/B testing much easier to execute than in the past. However, it’s also rather simple to mess up your email A/B testing and either come to the wrong conclusion or undermine your results completely.

To avoid that fate, here are 13 tips that will ensure you’re getting the most out of your email A/B testing:

1. Test your automated and transactional emails, not just your broadcast and segmented emails.

Nearly 39% of brands never or rarely A/B test their broadcast and segmented emails, according to Litmus’ 2018 State of Email Survey.

[Chart: Fewer than One-Third of Brands Include an A/B Test]

That’s a missed opportunity, but there’s even more money left on the table when you look at automated and transactional emails. More than 65% of brands never or rarely A/B test their automated emails, and 76% never or rarely A/B test their transactional emails.

[Chart: Vast Majority of Brands Rarely or Never A/B Test Their Triggered Emails]

That’s a shame because while A/B testing your broadcast and segmented emails gives you a competitive edge, A/B testing your automated and transactional emails gives you a huge edge. Successful email programs are 58% more likely than less-successful programs to A/B test their triggered emails at least once a year. And successful programs are 53% more likely to A/B test their transactional emails at least once a year.

Both of those actions are in the top 10 of 20 Things Successful Email Marketing Programs Do.

While triggered emails are often lauded as “set it and forget it” programs, remember that Nothing in Email Marketing Is ‘Set It and Forget It.’ These emails account for the majority of email marketing revenue at more than 13% of brands, according to Litmus’ 2018 State of Email Survey. And that percentage is only going to grow in the years ahead.

2. Focus your email A/B testing efforts on campaign elements that are most likely to move the needle on performance.

Sometimes tiny tweaks to minor elements can move the needle for you, but it’s generally wise to focus your testing on key email elements, such as subject lines, calls-to-action, and images.

According to our survey of 3,000 marketers, this is where most brands currently focus their email A/B testing efforts. You should, too.

[Chart: Subject Lines and CTAs Are the Most Tested Email Elements]

In addition to those, your automated emails have several other elements worth testing. In my book, Email Marketing Rules, I recommend testing:

  • Different trigger logic for automated emails
  • How quickly to send the message after it’s triggered
  • Whether to send a series of automated emails
  • The delay between automated emails in a series
  • Under what conditions an automated email in a series is skipped
  • Under what conditions an automated email series ends
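To make those levers concrete, here's one way they could be expressed as a testable configuration. This is a hypothetical sketch; the field names aren't from any particular ESP, and the two variants below differ only in how quickly the first email goes out.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SeriesVariant:
    """One testable configuration of an automated email series (hypothetical fields)."""
    trigger_event: str                  # e.g. "cart_abandoned" vs. "browse_abandoned"
    first_send_delay_minutes: int       # how quickly to send after the trigger fires
    followup_delays_minutes: List[int]  # delays between emails in the series
    skip_if_purchased: bool             # condition under which a message is skipped
    end_after_conversion: bool          # condition under which the series ends

# Variant A sends the first email after 30 minutes, variant B after 4 hours.
variant_a = SeriesVariant("cart_abandoned", 30, [1440, 4320], True, True)
variant_b = SeriesVariant("cart_abandoned", 240, [1440, 4320], True, True)
print(variant_a, variant_b, sep="\n")
```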

3. Understand whether your testing will get you closer to a local maximum or a global maximum.

Are you testing incremental tweaks or radical changes to your email? It’s important to understand which because, as Indeed Product Manager Janie Clarke pointed out at Litmus Live San Francisco, small tweaks can help you get to the top of a local maximum, but won’t help you discover a new global maximum. To find that, you’ll need to try an entirely different approach.

For example, testing the color of a button, or a text link versus a button, will only get you closer to optimizing that particular email design (a local maximum). Testing a copy-heavy design versus an image-heavy design, or an interactive email versus a text-only email, might instead reveal the best way to communicate your message (a global maximum).

[Figure: Local Versus Global Maximum]

4. Limit your A/B tests to one thing at a time.

Unless you’re doing multivariate testing, you’re going to want to keep your A/B tests limited to one change per test. For example, you could test…

  • A green button versus a blue button
  • Email copy with social proof included versus no social proof
  • A lifestyle hero image versus a product hero image
  • A percent-off discount versus a dollars-off discount

Having more than one difference between versions A and B makes it difficult to clearly determine which element led to the difference in performance.
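If your variants live in any kind of structured format, a tiny sanity check like the sketch below can confirm that versions A and B really differ in only one element before the test goes out. The dictionary keys here are just examples.

```python
def differing_elements(variant_a: dict, variant_b: dict) -> list:
    """Return the keys whose values differ between two variant definitions."""
    keys = set(variant_a) | set(variant_b)
    return sorted(k for k in keys if variant_a.get(k) != variant_b.get(k))

variant_a = {"button_color": "green", "hero_image": "lifestyle", "offer": "20% off"}
variant_b = {"button_color": "blue", "hero_image": "lifestyle", "offer": "20% off"}

diffs = differing_elements(variant_a, variant_b)
assert len(diffs) == 1, f"This A/B test changes more than one element: {diffs}"
print(diffs)   # ['button_color']
```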

5. Have a clear hypothesis.

Don’t make random changes just to see what might work. Know what you are trying to achieve and have a solid rationale for why what you’re testing will help you achieve the desired goal.

For instance, if you’re trying to boost conversions, you might create one version of your email copy where the call-to-action is above the fold to make it more visible and another version where content precedes the CTA and tries to generate strong interest in the CTA. Also, if you wanted to see if addressing your new subscriber by name in the subject line of your welcome email boosted conversions, you would test one subject line with the personalization and another without.

Sites like Behave.org can give you plenty of ideas for tests to try based on what other brands have done.

6. Choose a testing victory metric that is aligned with your campaign goal.

We can’t stress this enough: Be sure to test far enough down the funnel—which in most cases means testing as far down the funnel as you can. Most email campaigns try to generate either email conversions or sales conversions, so your email A/B tests should focus on moving those metrics, too.

Where some marketers go wrong is that they think subject lines can only affect opens, email content can only affect clicks, and landing page content can only affect conversions. Not true! The different stages of an email interaction don’t operate in isolation. They all work together, because subscribers experience them all together.

When you embrace that, you realize that the goal of a subject line isn’t to generate opens. It’s to generate openers who are likely to convert. And similarly, the goal of email content isn’t to generate clicks. It’s to generate clickers who are likely to convert.

Not convinced? It’s easy to confirm for yourself. Just run a few subject line A/B tests and look at how the different subject lines affect activity all the way down the email interaction funnel.

Also, who cares if subject line A generates more opens than subject line B if the latter generates more conversions? And who cares if email content A generates more clicks than email content B if the latter produces more conversions? We guarantee that your boss will prefer more conversions.
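Here's a minimal sketch of what choosing the winner on conversions (rather than opens or clicks) looks like in practice, assuming you can pull sends, opens, clicks, and conversions per variant out of your analytics. The counts are made up purely for illustration.

```python
def funnel_rates(sends, opens, clicks, conversions):
    """Compute funnel metrics for one variant from raw counts."""
    return {
        "open_rate": opens / sends,
        "click_rate": clicks / sends,
        "conversion_rate": conversions / sends,   # the victory metric
    }

# Illustrative counts: A wins on opens and clicks, B wins where it matters.
variant_a = funnel_rates(sends=10000, opens=2500, clicks=400, conversions=60)
variant_b = funnel_rates(sends=10000, opens=2100, clicks=380, conversions=85)

winner = "A" if variant_a["conversion_rate"] > variant_b["conversion_rate"] else "B"
print("A:", variant_a)
print("B:", variant_b)
print("Winner on conversions:", winner)
```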

7. Use test audience segments of similar subscribers.

Just like you control the changes that you make to versions A and B of your email, you also need to control who gets each version. You want your two test groups to be composed of the same kinds of subscribers, whether that’s new subscribers, subscribers who are customers, or subscribers in a particular geography, for instance.

8. Use test audience segments of active subscribers.

Similarly, you’re going to want to ensure that both groups of test recipients contain active subscribers who are regularly engaging with your emails. Otherwise, if version A goes to a group of subscribers who are much more active than the group that got version B, then version A is more likely to “win” for reasons that potentially have nothing to do with what’s in version A.

The one caveat is if you’re testing reengagement emails, in which case you want to target inactive subscribers, of course.
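For the typical case, though, a pre-filter like the sketch below could limit both test groups to recently engaged subscribers before you randomly split them. The dictionary fields and the 90-day window are assumptions for the example, not a recommendation.

```python
from datetime import date, timedelta

def active_subscribers(subscribers, today, window_days=90):
    """Keep only subscribers who engaged within the last `window_days` days.

    Each subscriber is assumed to be a dict with an `email` address and a
    `last_engaged` date; swap in whatever engagement data you actually track.
    """
    cutoff = today - timedelta(days=window_days)
    return [s for s in subscribers if s["last_engaged"] >= cutoff]

subscribers = [
    {"email": "a@example.com", "last_engaged": date(2018, 9, 1)},
    {"email": "b@example.com", "last_engaged": date(2017, 1, 15)},
]
print(active_subscribers(subscribers, today=date(2018, 10, 1)))
# Only a@example.com survives the filter; the random A/B split happens afterward.
```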

9. Ensure that your testing groups are large enough that your results will be statistically significant.

If your test audience is too small, the differences you see might be nothing more than random noise. Make sure your test groups are large enough for the results to be statistically significant.

Kissmetrics, AB Testguide, and Optimizely all have online calculators to help you out.
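Those calculators handle the math for you, but for the curious, here's a minimal sketch of the underlying check: a standard two-proportion z-test on conversion counts. It's a normal-approximation shortcut for illustration, not a replacement for a proper sample size calculation done before the test.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, sends_a, conv_b, sends_b):
    """Two-sided z-test for the difference between two conversion rates.

    Inputs are raw conversion counts and send counts per variant.
    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / sends_a, conv_b / sends_b
    p_pool = (conv_a + conv_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Standard normal CDF expressed via the error function.
    cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - cdf(abs(z)))
    return z, p_value

# Illustrative counts: 60 vs. 85 conversions out of 10,000 sends each.
z, p = two_proportion_z_test(60, 10000, 85, 10000)
print(f"z = {z:.2f}, p = {p:.3f}")   # p below 0.05 suggests the difference is real
```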

10. Use holdout groups, when appropriate.

For any email that you test, consider the effect of your subscribers not receiving that email at all. Having a holdout group of subscribers who don’t receive an email is how you can measure that effect.

Holdout groups are especially valuable when it comes to testing automated emails. For instance, if you’re testing the performance of a new shopping cart abandonment email, you want to make sure that a portion of cart abandoners don’t get a cart abandonment email at all. Doing that allows you to measure whether your cart abandonment email is annoying subscribers or disrupting their natural shopping behaviors.

However, you can use a holdout group for any email you send to make sure that email is actually having a positive effect on your subscribers. Enterprise software company Atlassian has set up a system that requires testing of every email, including mandatory holdout groups, Atlassian Senior Product Manager of Engagement Platform Jeff Sinclair told attendees of Litmus Live San Francisco.
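For illustration, measuring a holdout group boils down to comparing downstream behavior between subscribers who got the email and those who didn't. The sketch below assumes you can count conversions for each group; the numbers are invented for the example.

```python
def incremental_lift(emailed_conversions, emailed_size, holdout_conversions, holdout_size):
    """Compare conversion rates of emailed subscribers versus the holdout group.

    A lift near zero (or negative) suggests the email isn't helping and may
    even be disrupting subscribers' natural behavior.
    """
    emailed_rate = emailed_conversions / emailed_size
    holdout_rate = holdout_conversions / holdout_size
    return emailed_rate - holdout_rate

# Illustrative counts: 12% of emailed cart abandoners converted vs. 9% of the holdout.
lift = incremental_lift(540, 4500, 45, 500)
print(f"Incremental conversion lift: {lift:.2%}")   # 3.00%
```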

11. Create a test plan so you test regularly and record test results.

Ad hoc A/B testing is inefficient because it’s sporadic and unfocused. To get the most out of your A/B tests, you need a plan. Develop a testing schedule that records:

  • The theories you are trying to confirm
  • Which emails you are using to test each theory
  • The results of each test and how they impact your future testing plans
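Your test plan doesn't need special software. Here's a minimal sketch of one hypothetical record format that captures those three things; every field name and value below is invented for illustration.

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class TestRecord:
    """One row in a hypothetical A/B testing log."""
    hypothesis: str       # the theory you're trying to confirm
    email: str            # which email carried the test
    variant_a: str
    variant_b: str
    victory_metric: str   # e.g. conversion rate
    result: str           # winner, lift, and significance
    next_step: str        # how the result shapes future tests

record = TestRecord(
    hypothesis="Personalized subject lines lift welcome-email conversions",
    email="Welcome email #1",
    variant_a="Subject line without first name",
    variant_b="Subject line with first name",
    victory_metric="conversion rate",
    result="B won with a statistically significant lift",
    next_step="Re-run next quarter to rule out the novelty effect",
)

# Write the header and this record to a new CSV log (illustrative file name).
with open("ab_test_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(record)))
    writer.writeheader()
    writer.writerow(asdict(record))
```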

Aim to include an A/B test in at least half of your broadcast and segmented promotional emails, as our research indicates that brands get little competitive advantage when they test less frequently than that. Similarly, brands report a significant increase in success when they A/B test their triggered and transactional emails at least once every 6 months.

You’ll also want to have a test plan because you’ll need to…

12. Confirm the results of tests.

No single A/B test is conclusive forever. In the short term, any lift you see may be the result of the novelty effect: subscribers are attracted to what's new, which can give any change a temporary boost over the control.

However, the novelty effect fades fairly quickly, so if you run the same test two or three times over a period of time, you'll wring it out and see the true impact of the change.

In the long term, consumer behaviors and attitudes change. The composition of your email list can also change over time depending on subscriber acquisition practices, changes to your product or service offerings, expansion into or retreat from certain geographies, and other factors.

The more definitive the success of a test, the longer you can probably wait to confirm it again. But eventually, you’ll want to periodically reconfirm every test at least once or twice—which, again, is why an A/B testing plan is critical.

13. Share the results of your email A/B tests with other channel owners at your company.

Jonathan Pay of Holistic Email Marketing told Litmus Live London attendees to be sure to share their email A/B testing insights with their web, social media, and ad teams. That’s great advice, because email marketing learnings can fuel success in other channels.

Poor coordination across channels and departments was identified as the biggest challenge facing email marketers in 2018, according to a Litmus poll of more than 600 marketers. Sharing A/B testing results is just one more way that brands can foster an omnichannel approach to marketing and the customer experience.

[Image: Share A/B Test Results]

To get the most out of your email A/B testing efforts…

Follow these 13 recommendations:

  1. Test your automated and transactional emails, not just your broadcast and segmented emails.
  2. Focus your email A/B testing efforts on campaign elements that are most likely to move the needle on performance.
  3. Understand whether your testing will get you closer to a local maximum or a global maximum.
  4. Limit your A/B tests to one thing at a time.
  5. Have a clear hypothesis.
  6. Choose a testing victory metric that is aligned with your campaign goal.
  7. Use test audience segments of similar subscribers.
  8. Use test audience segments of active subscribers.
  9. Ensure that your testing groups are large enough that your results will be statistically significant.
  10. Use holdout groups, when appropriate.
  11. Create a test plan so you test regularly and record test results.
  12. Confirm the results of tests.
  13. Share the results of your email A/B tests with other channel owners at your company.
Chad S. White

Chad S. White was the Research Director at Litmus