Aug 16 2023
Clare McKinley

Why the Real Social Media Cage Match Is Advertisers vs. Brand Safety Threats


What a strange time we live in. Congressional hearings about aliens, traumatized orcas attacking boats, TikTokers making thousands per day doing whatever this is.

So, what’s next? A cage match between two of the wealthiest and most influential people in the world?

Well, yes. Maybe.

This proposed cage match between Elon Musk and Mark Zuckerberg is a news story that’s just too bizarre to ignore—and given that it would take place between two social media titans, it got us thinking about who social media marketers are currently in the ring with. One of the main opponents that comes to mind: the dizzying and ceaseless swarm of brand safety threats, including hate speech, misinformation, and disinformation.

So, whether you care about the feud between Musk and Zuckerberg or not, allow us to use this outlandish spat as an opportunity to explore the ongoing cage match between advertisers and social media’s brand safety risks. Even more, we’ll hand you all the tricks you’ll need to emerge victorious.

Round 1: The Feud

As advertisers know well, brand safety is no joke: the consequences of your ad running next to the wrong kind of content are pretty terrifying. But don’t worry, we’ll use the Musk vs. Zuck cage match to lighten the mood of this rundown—after all, the idea of two billionaires duking it out inside a cage is pretty darn funny. So for those who aren’t familiar, here’s the setup:

It all started with the news of Meta’s plans to release Threads, a platform the company described as their “response to Twitter.” Shots fired!

Musk responded to the news with a simple post on X (formerly Twitter): “I’m up for a cage match if he is lol.” Zuckerberg then posted Musk’s challenge to his Instagram story, adding the text “Send Me Location.” While Zuckerberg initially proposed August 26 as a date to hold the match, he has since said it’s unlikely to happen at all (not even at the Colosseum).

So while the cage match between Zuckerberg and Musk is purely hypothetical at this point, social media advertisers still face a very real opponent: threats to brand safety on social media thanks to misinformation, disinformation, and hate speech.

Round 2: The Real Opponent

Social media presents some unique brand safety challenges to digital advertisers due to how quickly hate speech, misinformation (false information), and disinformation (false information that is “deliberately intended to mislead”) can spread on the platforms.

Social media algorithms are designed to deliver content that's most likely to trigger user engagement. And according to an analysis from advocacy group the Integrity Institute, “content that contains misinformation tends to get more engagement–meaning likes, views, comments, and shares–than factually accurate content.”

The content in question could be as bizarre as an image of Pope Francis wearing a very stylish Balenciaga puffer jacket, or as harmful as false news stories about political candidates. In fact, both liberal and conservative lawmakers have proposed bills designed to hold social platforms accountable for amplifying hate speech, misinformation, and disinformation, given the disastrous effects that amplification can have.

While lawmakers, advocacy groups, and social media users have grown more fluent in how hate speech and misinformation spread on social media over the past few years, the explosion of generative AI will likely only stoke the fires. In fact, OpenAI, the company that created ChatGPT, has repeatedly expressed concern over the tool’s potential role in spreading mis- and disinformation, and industry researchers say these tools will make it easier to create believable false content, such as a fake article written by ChatGPT paired with a fake photo generated by Midjourney.

Why? Generative AI tools can quickly create large amounts of false and misleading content for free.

For brands advertising on social media, the situation presents some significant concerns, with 99.5% of industry professionals saying they believe generative AI poses a brand safety and misinformation risk to digital marketers. And the consequences of brand safety missteps can be dire: 65% of consumers report that they are “likely or very likely to stop buying from a brand that advertises next to misinformation,” and 73% of consumers “agree or strongly agree that they would feel unfavorably towards brands that have been associated with misinformation.”

All in all? The presence of hate speech, misinformation, and disinformation on social media is a formidable opponent for advertisers. But don’t worry, the match isn’t over yet! In Round 3, we’ll share how advertisers can defend themselves against these threats.

Round 3: The Champion’s Defense

Who’s to say what a cage match between Zuck and Musk would look like? Would Elon use his signature move, the “Walrus”? Would Zuckerberg’s jiu jitsu skills take Musk down in seconds? Though strategies for this hypothetical cage match are still forthcoming, there are some clear steps social media marketers can take to stand up against their opponent and come out on top:

First, it’s important that brands make continuous social media monitoring a priority. By keeping a close eye on your social media presence, you’re more likely to spot (and delete) harmful content before consumers start to associate it with your brand. To do this effectively, organizations must train their people to better identify hate speech, misinformation, and disinformation so that they can proactively monitor all of the brand’s social pages, posts, and ads for problematic content. While hate speech should be fairly easy to spot, there are a variety of resources that outline how to identify mis- and disinformation as well as fake media (guides from NPR and the Washington Post are good places to start). This should be an ongoing learning process for your team, especially as AI continues to develop.
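To make “continuous monitoring” a little more concrete, here’s a minimal sketch of what an automated first pass over brand-page comments might look like, assuming the team keeps a watchlist of terms and routes anything that matches to a human reviewer. The `Comment` type, `FLAGGED_TERMS` list, and `triage` helper are hypothetical stand-ins, not any platform’s real API.

```python
# Minimal monitoring sketch: scan recent comments on a brand's pages for
# terms that warrant human review. The data types, keyword list, and
# review step below are hypothetical placeholders for whatever tooling a
# team actually uses.

from dataclasses import dataclass

# Illustrative watchlist; a real team would maintain a much richer,
# regularly updated set informed by resources like the NPR and
# Washington Post guides mentioned above.
FLAGGED_TERMS = {"hoax", "fake cure", "rigged", "miracle treatment"}

@dataclass
class Comment:
    post_id: str
    author: str
    text: str

def needs_review(comment: Comment) -> bool:
    """Flag a comment if it contains any watchlisted term (case-insensitive)."""
    text = comment.text.lower()
    return any(term in text for term in FLAGGED_TERMS)

def triage(comments: list[Comment]) -> list[Comment]:
    """Return the subset of comments a human moderator should look at first."""
    return [c for c in comments if needs_review(c)]

if __name__ == "__main__":
    recent = [
        Comment("post_1", "user_a", "Love this product!"),
        Comment("post_2", "user_b", "This brand is pushing a fake cure."),
    ]
    for flagged in triage(recent):
        print(f"Review {flagged.post_id}: {flagged.text!r}")
```

Keyword matching alone will miss plenty (and flag some false positives), which is exactly why the human training described above matters; the automation’s job is simply to surface questionable content quickly.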

Next, it’s always helpful to find technological solutions that can help fight the problem on your behalf. Tools like NOBL, which use natural language processing and machine learning algorithms to help advertisers find high-quality, brand-safe inventory, can be a great way to ensure your brand steers clear of risky and disreputable real estate across the programmatic landscape.
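NOBL’s internals aren’t public, so purely as an illustration of how natural language processing and machine learning can separate brand-safe content from risky content, here’s a toy classifier built with scikit-learn. The training snippets, labels, and scoring step are invented for the example; real tools train on far larger labeled datasets and many more signals than raw text.

```python
# Toy illustration of NLP + machine learning content scoring; not NOBL's
# actual implementation. A tiny TF-IDF + logistic regression model is
# trained on a handful of hand-labeled snippets and then used to score a
# candidate ad placement.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled snippets: 1 = brand safe, 0 = not safe.
texts = [
    "Local bakery wins award for best sourdough in the city",
    "Ten tips for planning a budget-friendly family vacation",
    "Study finds new bike lanes reduced downtown traffic",
    "Vaccines contain microchips, share before they delete this",
    "Those people are vermin and should be driven out of town",
    "Secret cure they don't want you to know about, doctors hate it",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a candidate placement before buying adjacent inventory.
candidate = "Miracle cure suppressed by the government, spread the word"
safe_probability = model.predict_proba([candidate])[0][1]
print(f"Estimated brand-safety score: {safe_probability:.2f}")
```

The point isn’t this specific model; it’s that automated scoring lets advertisers filter volumes of programmatic inventory that no human team could review by hand.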

Finally, while it’s never fun to think about, you should have a plan in place for if and when your brand is linked to harmful social content. That plan should focus on clearly condemning the false or harmful content in question. This is an essential step: research shows that consumers will perceive a brand more positively if it actively denounces misinformation.

Post-Match Recap: Brand Safety on Social Media

Regardless of whether Zuck and Musk ever find themselves duking it out in a cage, advertisers can train for their own ongoing match by deeply understanding brand safety threats on social media, and then using our three-step method to clobber them. Just remember that this adversary is still developing: As social media platforms change and generative AI evolves, these threats (and the best ways to protect yourself against them) will continue to change as well. And as the old saying goes, the best defense is a good offense, so be sure to keep tabs on this rival to ensure it doesn’t hit you with any underhanded maneuvers when you least expect them!

Generative AI is disrupting the world of marketing in more ways than just its role in the spread of misinformation. Learn more about GenAI’s benefits to marketers and advertisers in our report, Generative AI and the Future of Marketing.

Get the Report