Mar 19, 2024

Using AI-generated content? Meta says it must be labeled

Dayna Lang

Marketers relying on AI-generated content for their social media could find themselves in hot water in 2024. Meta’s top policy executive announced on February 5, 2024, that the company will require all users to label AI-generated audio and visual content on its platforms.

The social media giant’s long-term goal is to develop technology that detects and labels all AI-created content on its platforms internally. Until that technology becomes a reality, it requires users to label the content themselves.

Nick Clegg, the company’s president of global affairs, said in an interview with Reuters:

“Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow.”

Clegg did not specify what penalties Meta would implement for violating this policy.

This announcement came after a wave of generative AI scandals in 2023. Sports Illustrated was just one of several publications criticized for using AI to write content.

The public reaction to AI-generated content shows that readers value a human touch and that using AI to create content comes with pitfalls.

These scandals paved the way for tech companies like Meta to write policies requiring the labeling of generative AI content. The public backlash made clear that businesses need to distinguish between AI-generated and human-created content.

Meta states:

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies.”

Meta further adds: “People often come across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.” 

AI-generated content to be labeled on Meta going forward

As generative AI becomes harder to distinguish, internet users are growing wary, and Meta is looking to ease their worries. But user contentment isn’t the only thing on the line; some AI capabilities carry broader political implications.

Deepfakes have also become a major concern for celebrities and politicians of all stripes. The technology has been used to maliciously create images of public figures, in some cases opening users up to defamation lawsuits.

Meta is making a strategic move to remove itself from this equation, saying: “We must help people [distinguish] when photorealistic content they’re seeing has been created using AI.”

In addition to requiring users to label AI-generated content, Meta will apply “Imagined with AI” labels to images users create with its Meta AI feature.

In the future, the company will also look for ways to label all AI-generated material itself, removing any room for user error.

This is a challenge. Although some AI companies are starting to include signals in the output of their image generators, these signals are insufficient: they don’t yet cover audio or video content and aren’t used by every AI generator.

Until these markings are mandatory, the social media giant is on its own.

Meta knows this and is instead working to develop its own classifiers that automatically detect AI-generated content on its platforms, protecting users from misinformation and shielding the company from potential lawsuits as AI becomes more powerful.


To see more from illumin, be sure to follow us on X and LinkedIn where we share interesting news and insights from the worlds of ad tech and advertising.