An electric buzz fills the air as you anxiously approach the slot machines. You reach into your pocket, grab a few coins, pull the handle, and wait. What happens next is anyone’s guess.
Running paid ad campaigns without experiments is no different than gambling. If you aren’t experimenting with your audiences, ads, and assets, you’re placing big bets on what you think will win over your audience (pun intended). Unfortunately, you’ll lose more often than not—the native ad channels make sure of that.
But what if I told you there’s a way to decipher the mystery of paid advertising—a thrilling way to game the system—and launch campaigns that consistently hit the jackpot? And what if I told you we have native customer data to prove it?
It’s not luck—it’s science. But before we dig into the numbers, here’s a quick primer on experimentation, the Metadata way.
An experiment is a fancy way to say “test.” Experimenting in Metadata allows you to test different elements of your campaigns, including the audience, creative, text, channels, content, and offers.
By experimenting, you can understand what truly resonates with your audience, what’s falling flat, and what adjustments you can make to drive more pipeline and revenue. Metadata handles all of the permutations, dollar allocation, and performance tracking so you don’t have to.
Once you push your experiments to the channels, Metadata’s optimizer figures out which experiments are working, stops the ones that aren’t via auto-pause, and shifts any remaining spend to what’s moving the needle.
To elaborate, Metadata makes sure you don’t waste spend on experiments that aren’t actually driving revenue. For example, you can create a rule that tells Metadata to stop spending on an experiment if it’s not generating a lead after a certain number of clicks.
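As a rough sketch of what a rule like that boils down to (the function name, threshold, and default below are illustrative, not Metadata’s actual API):

```python
# Hypothetical auto-pause rule: stop spending on an experiment that has
# burned through its click threshold without producing a single lead.
# Names and the default threshold are made up for illustration.

def should_auto_pause(clicks: int, leads: int, click_threshold: int = 100) -> bool:
    """Pause once we've hit the click threshold with zero leads to show for it."""
    return clicks >= click_threshold and leads == 0

print(should_auto_pause(clicks=150, leads=0))  # past threshold, no leads: pause
print(should_auto_pause(clicks=150, leads=3))  # converting: keep running
print(should_auto_pause(clicks=40, leads=0))   # not enough data yet: keep running
```

The point of the rule is the second case: an experiment that is actually converting never gets paused just for being expensive per click.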
Here’s a bold claim we’re proud to stand by: our customers reap the benefits of paid ad experimentation no matter how many experiments they run.
Before you jump into the data, we want to mention a bias in our sample. Our customers are tip-of-the-spear innovators who are all in on pursuing unconventional tactics. By virtue of partnering with Metadata, they’re already prone to experiment, adjust, and adapt.
Ok, down to business. Here’s the in-platform breakdown of monthly experiments run by Metadata customers:
| Number of experiments (monthly) | Return on investment |
| --- | --- |
| 0-100 | 1.4X |
| 101-500 | 4.3X |
| 500-1,000 | 15.99X |
| 1,001-2,000 | 3.82X |
| 2,001-5,000 | 3.03X |
Yes, you’re reading those numbers correctly. The X means “times.” When we say “ROI,” we’re talking about the opportunity amount divided by the spend. And we’re not talking about influenced opportunities; these are genuine triggered opportunities. That means a prospect clicked your ad and eventually became a sales opportunity.
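In code, that definition is a single division. The dollar figures below are made up purely to show the arithmetic:

```python
def roi_multiple(opportunity_amount: float, spend: float) -> float:
    """ROI as defined above: triggered opportunity amount divided by ad spend."""
    return opportunity_amount / spend

# Illustrative numbers: $430,000 in triggered opportunities on $100,000 of spend.
print(f"{roi_multiple(430_000, 100_000):.1f}X")  # 4.3X
```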
So what does this data tell me?
Following the audience type thread from above, we wanted to find out which audience groups were most effective during experimentation.
The clear winner is Retargeting at 15X, which makes sense. These people were interested before, but they weren’t ready to buy or just needed to hear the message again. The Native LinkedIn audience—home of B2B paid ads—also ranks high at 11X ROI, but comes at a huge cost. And LinkedIn fueled by Salesforce data rakes in almost 9X ROI while being much easier on the wallet.
Are you a believer yet? If so, here’s how to get going.
As tempting as it may be to dive into Metadata and start experimenting immediately, let’s pump the brakes. There are a few steps you should take first, and interestingly, the first one has nothing to do with Metadata.
Before using experiments to zoom in on the nitty-gritty of your campaigns, look at the big picture beyond experiments. What’s the overall goal of your demand generation strategy?
Try to limit yourself to just one of the following:
While this step may seem far removed from your experiments, the answer is fundamental. For example, if you’re trying to create demand, you may experiment more with brand awareness ads on Facebook than if demand expansion was your goal. And if your goal is to revive demand, you might experiment with LinkedIn’s thought leader ads to see if they spark good conversations with prospects.
Once you understand your overall outcome, you need an experimentation framework. It doesn’t have to be anything fancy, but having a way to frame your ideas, prioritize your experiments, and make smart decisions will pay off, especially as you scale.
Here’s a simple framework we use that focuses on each experiment’s impact, effort, and confidence.
It’s almost time to launch some experiments, but there’s a bit of dirty work to do first. Before you experiment with anything, it’s vital that you make sure every part of your campaign is ready to rumble.
I’m talking about:
I see a lot of experimentation strategies, and the best ones all have one thing in common: They evolve quickly. Like, really quickly.
Is a set of ads not resonating with your audience? Fine. Try new ones.
Is one of your audiences clicking on your ads but not converting on the landing page? No problem. See if another offer type will do the trick.
As you conduct experiments, Metadata will provide you with golden nuggets of wisdom that can—and should—inform your future strategy. But to get anything out of those insights, you have to apply them to your campaigns.
If you don’t cross your t’s and dot your i’s before launch, you’ll find yourself in a holding pattern. Just think about how slow your strategy would evolve if you stopped every few weeks because your design team had to whip up new ads or you needed to write fresh copy.
You’re good to start experimenting now but don’t get carried away. Instead, test a few foundational elements, like your audiences. While you don’t have to start here, I’d encourage it.
Why? Because your audiences are the foundation. They’re the glue that holds your campaigns together. Without the right audiences, you’re up a creek without a paddle. Every other cog in your strategy revolves around your audiences, so get them right and go from there.
Test audiences built from different data sources, figure out which ones respond, and use those audiences—even if it’s just a handful—as your launchpad. Then, test elements around them, including the ads and assets themselves.
But again, test one variable at a time. Once you launch those experiments, pinpoint what’s working (Metadata will do this for you), pause what’s not (Metadata will do this for you, too), and apply your learnings to the next round of experiments.
Need a few ideas to get started? Try these:
Pro Tip: Keep your naming conventions as simple as possible, so it’s easy to visualize results and source actionable insights.
How to Name Your Paid Ad Experiments

| Formula | Example |
| --- | --- |
| Country_Objective | US_Demos |
| QuarterYear_Country_Audience_Objective_Giftcard | Q124_US_WebVisitors_Demos_Giftcard |
| Objective_Message_Variation | Demo_Giftcard_V1 |
| Objective_Type_Message_Variation | Demo_Video_Giftcard_V3 |
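If you want to enforce a convention like the ones above programmatically, a tiny helper can assemble names from ordered parts. This helper is illustrative, not a Metadata feature:

```python
def experiment_name(*parts: str) -> str:
    """Join naming-convention parts with underscores, in the order given."""
    return "_".join(parts)

print(experiment_name("Q124", "US", "WebVisitors", "Demos", "Giftcard"))
# Q124_US_WebVisitors_Demos_Giftcard
print(experiment_name("Demo", "Video", "Giftcard", "V3"))
# Demo_Video_Giftcard_V3
```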
My biggest gripe with native ad channels is that you can only optimize toward vanity metrics like leads, impressions, and clicks. There’s nothing inherently wrong with this—we’ve all done it—but this approach won’t cut it if you’re truly trying to maximize your budget.
Sure, you could have Facebook or LinkedIn optimize toward cost per lead (CPL), which I guess is better than nothing, but you’re still miles away from pipeline because the native ad channels aren’t connected to your customer relationship management (CRM) tech.
Metadata does things differently. Because Metadata integrates with CRMs like Salesforce and HubSpot, you can move beyond vanity metrics and optimize toward what’s actually driving your bottom line.
Let’s say you’re running an experiment to see if image ads perform better on Facebook or LinkedIn, and these are the high-level results:
| Channel | Clicks | Cost per Lead (CPL) | Cost per Opportunity (CPO) |
| --- | --- | --- | --- |
| Facebook | 10,000 | $60 | $2,500 |
| LinkedIn | 7,500 | $150 | $2,000 |
If these were your results, Metadata would spend more on LinkedIn because those campaigns and experiments are driving a lower CPO, despite Facebook getting way more clicks and a lower CPL. Long story short, Metadata chases what matters—pipeline and revenue—a goal that the native ad channels just can’t achieve.
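The budget decision above boils down to comparing cost per opportunity, not clicks or CPL. A rough sketch of that logic, using the example table’s numbers:

```python
# Sketch: given per-channel results, shift budget toward the channel with
# the lowest cost per opportunity (CPO). Numbers match the example above.

results = {
    "Facebook": {"clicks": 10_000, "cpl": 60, "cpo": 2_500},
    "LinkedIn": {"clicks": 7_500, "cpl": 150, "cpo": 2_000},
}

winner = min(results, key=lambda channel: results[channel]["cpo"])
print(winner)  # LinkedIn: lower CPO wins despite fewer clicks and a higher CPL
```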
Stories of overnight success are almost always exaggerated. The same goes for anyone telling you that their experiments and optimizations netted a return in some insanely short period of time. When experimenting in Metadata, I tell our customers to expect to spend between $15K and $20K over at least a quarter before they start seeing any meaningful results.
Will you see improvements before then? Absolutely. Metadata will figure things out quickly and get your campaigns trending up and to the right. But a true experimentation strategy is a long-term investment. So, instead of diving into Metadata’s dashboard daily, give your experiments at least a quarter to run before you come to any conclusions. Then, when the quarter is up, you have a decision:
Take your time and let Metadata do its thing. The wait is worth it. Trust me.
If you ask me what I like most about experimenting in Metadata, I’d put scale at—or near—the top of my list. With Metadata, you can run as many experiments across as many channels as your budget can handle.
Want to test image ads on social?
How about a dozen audiences on LinkedIn?
Do you want to see which calls to action (CTAs) lead to the most opportunities?
These answers are just a few clicks away in Metadata and come without the button-pushing that’d come with running the same experiments in the native ad channels (or at least trying to run them).