
Four A/B Testing Best Practices

  • LiveRamp
  • 4 min read

A/B testing is at the core of evidence-based marketing. Learning how to conduct A/B tests that provide valid and generalizable results is a long-term commitment. However, it’s worth the time put in because ultimately, a solid A/B testing program can drive significant business impact.

There are many reasons to A/B test, and one of the most common use cases is testing the incremental impact of media. This test answers the question, What impact did different media touches drive together versus separately? Knowing the answer to this question can help you more efficiently allocate media budgets to drive better business results.

Below are four A/B testing best practices that will help you start developing a successful program:

1. Start simple.

When an organization is new to A/B testing, mistakes in the end-to-end testing process are inevitable. If you start simple, your team can focus on getting the fundamentals correct and more easily identify and fix issues.

Follow these steps to set up a single-channel, segment-agnostic test (a minimal code sketch of the split and lift measurement follows the list):

  1. Generate a simple, revenue-based hypothesis you’d like to test.
  2. Design your test plan—decide which audiences you’d like to receive media and determine the amount of exposure and revenue lift you’ll need to cleanly read your test.
  3. Plan your media to adhere to your test requirements.
  4. Create control and test splits for your audiences. Your test audience should be eligible to see media. Your control audience should not.
  5. After your test has run, measure the lift in revenue between your control and test audiences.
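In code, steps 4 and 5 might look something like the sketch below. It's a minimal Python illustration, assuming a flat audience file with one row per customer and a revenue column joined in after the test; the column names, 50/50 split, and toy data are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of steps 4-5: a random test/control split and a revenue-lift read.
# Column names (customer_id, revenue), the 50/50 split, and the toy data are illustrative.
import numpy as np
import pandas as pd


def split_audience(audience: pd.DataFrame, test_share: float = 0.5, seed: int = 42) -> pd.DataFrame:
    """Randomly assign each customer to 'test' (eligible for media) or 'control' (held out)."""
    rng = np.random.default_rng(seed)
    groups = np.where(rng.random(len(audience)) < test_share, "test", "control")
    return audience.assign(group=groups)


def revenue_lift(results: pd.DataFrame) -> float:
    """Relative lift in mean revenue per customer, test vs. control."""
    means = results.groupby("group")["revenue"].mean()
    return (means["test"] - means["control"]) / means["control"]


# Toy walk-through: split 10,000 customers, pretend media drove ~5% more revenue in test.
audience = split_audience(pd.DataFrame({"customer_id": range(10_000)}))
rng = np.random.default_rng(0)
audience["revenue"] = rng.gamma(2.0, 10.0, len(audience)) * np.where(audience["group"] == "test", 1.05, 1.0)
print(f"Observed revenue lift: {revenue_lift(audience):.1%}")
```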

Once you have the basics down, consider adding complexity by:

  • Running simultaneous tests and cross-channel testing
  • Splitting testing into multiple test groups (A/B/C/…/Z testing)
  • Measuring incremental lift in nontraditional conversion points

As you add variables, remember that a simple test leading to clear, confident decision-making provides more business value than a sophisticated test with poor data integrity.

2. Document the decisions your company would make depending on the results of your incrementality testing.

Tests don’t tell you the exact ROI of your media. They give you a range that can be wide or narrow depending on how long you run the test and how many consumers are included.

The precision of your test results should match the resolution you need to make important decisions. If your organization requires a certain level of confidence before it will adjust media strategy or shift spend, you can estimate how long the test needs to run based on the rate at which people enter the test and the size of the impact you expect to see. This limits the risk of running a test that doesn't yield business value.
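One way to make that estimate concrete is a standard two-proportion power calculation: given a baseline conversion rate, the smallest lift you would act on, and how quickly people enter the test, you can back out a rough duration. The sketch below is only an illustration; the baseline rate, expected lift, and weekly entrants are placeholder assumptions to replace with your own numbers.

```python
# Rough sketch of sizing a test before launch. The baseline conversion rate, expected lift,
# and weekly entrants below are placeholder assumptions; swap in your own figures.
from scipy.stats import norm


def required_sample_per_group(baseline_cr: float, expected_lift: float,
                              alpha: float = 0.05, power: float = 0.8) -> int:
    """Two-proportion z-test sample size per group to detect a relative lift."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + expected_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1


n_per_group = required_sample_per_group(baseline_cr=0.02, expected_lift=0.10)
weekly_entrants_per_group = 25_000  # how fast people enter each arm of the test
weeks_needed = n_per_group / weekly_entrants_per_group
print(f"{n_per_group:,} people per group, roughly {weeks_needed:.1f} weeks at current traffic")
```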

Note that this step requires close alignment between media and analytics teams. If you haven’t collaborated on a project yet, incrementality testing is a good place to start.

3. Create test/control splits after audiences have been synced with an identity resolution platform.

The closer test and control splits are made to the environment where the media is being served, the more noise you can remove. Done this way, your counts won't include people who can't be matched or influenced through media.

As an example, brand A was tempted to create test and control groups across different audience segments within its CRM. In trying to avoid the hassle of reading test performance without media exposure logs or a match-back file, it would in fact have created another problem: not knowing which consumers matched and which didn't at the end of the testing period.

Instead, onboard the entire audience you want to work with through an identity resolution platform so you can read people-based insights with confidence.
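As a sketch of what "split after syncing" can look like in practice, assume, hypothetically, that the identity resolution step returns a boolean matched flag per record; only resolved, reachable people are randomized.

```python
# Sketch of best practice 3: randomize only the audience that actually resolved/matched,
# so unreachable people never dilute the test or control counts.
# The 'matched' column is an assumed output of your identity resolution step.
import numpy as np
import pandas as pd


def split_after_resolution(onboarded: pd.DataFrame, test_share: float = 0.5, seed: int = 7) -> pd.DataFrame:
    """Drop unmatched records first, then randomly split the reachable audience."""
    reachable = onboarded[onboarded["matched"]].copy()
    rng = np.random.default_rng(seed)
    reachable["group"] = np.where(rng.random(len(reachable)) < test_share, "test", "control")
    return reachable
```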

4. Use your CRM data for A/B testing.

CRM data can greatly enrich the quality of insights. Including CRM in A/B testing enables more robust measurement of performance across audience segments, as well as of the enterprise-level impact of digital media on the bottom line.

When using CRM files as an input for reading test results, work with your analytics and CRM teams to understand what percentage of total sales and conversions is tracked within CRM.

Purchases made by loyal audiences are more likely to be trackable within a POS or CRM system than purchases made by new customers. If you don’t take steps to extrapolate, you’ll underestimate the ROI for media run against new customers.
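As a hypothetical worked example of that extrapolation: if your teams estimate that CRM captures roughly 80% of existing-customer conversions but only 30% of new-customer conversions (placeholder figures), you can scale the observed incremental revenue accordingly.

```python
# Sketch of extrapolating CRM-observed lift to total lift. The capture rates and revenue
# figures are placeholder assumptions; source real ones from your analytics and CRM teams.
def extrapolate_incremental_revenue(observed: float, crm_capture_rate: float) -> float:
    """Scale observed incremental revenue by the share of conversions CRM actually sees."""
    return observed / crm_capture_rate


existing = extrapolate_incremental_revenue(120_000, crm_capture_rate=0.80)  # -> 150,000
new = extrapolate_incremental_revenue(30_000, crm_capture_rate=0.30)        # -> 100,000
print(f"Estimated true incremental revenue: existing ${existing:,.0f}, new ${new:,.0f}")
```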

This is especially relevant for retail, where POS systems don’t necessarily collect information to make a CRM connection during a conversion.

Calculate true ROI—yes, you can!

While the above is a complex process, it can be a game-changer in how organizations measure the effectiveness of their marketing spend. Iterative A/B testing enables continuous refinement of customer insights, and its ability to provide a causal link between specific interactions and purchases makes it invaluable for calculating true ROI, which is what every business wants at the end of the day.
