
YouTube's AI Image Labels: What Marketers Need to Know

By Pierre DeBois
YouTube's new labeling tool aims to preserve content integrity and viewer trust in the deepfake era.

The Gist

  • AI transparency. YouTube is introducing a content label that appears on videos when creators disclose that they contain AI-altered or synthetic material.
  • Creator responsibility. The tool, a part of its AI content guidelines, requires content creators to disclose altered videos.
  • Misinformation combat. The effort aims to reduce misinformation and deepfake videos on the platform.

People depend on nutrition labels to understand the nutrients in their groceries. Marketers will soon have a similar experience to identify when a video contains deepfake imagery. Let’s take a closer look at YouTube's AI Image Labels.

YouTube has launched a labeling tool for its videos that contain AI-created imagery to protect users from being overwhelmed with deepfake images and videos.


Related Article: Unmasking Deepfakes: How Brands Can Combat AI-Generated Disinformation

YouTube's AI Image Labels

YouTube announced that creators must indicate if an uploaded video contains AI-generated imagery. The indicator for YouTube's AI image labels appears as a response to a question when a creator uploads a new video in YouTube Creator Studio. The upload interface will ask the creator to indicate whether the video contains altered content, specifically asking if any of the following describes their content:

  • "Makes a real person appear to say or do something they didn't say or do."
  • "Alters footage of a real event or place."
  • "Generates a realistic-looking scene that didn't occur."

Related Article: Using the Gemini YouTube Extension to Improve Your Content Strategy

YouTube’s AI Allowances

There are allowances that benefit creators in certain instances, such as briefly using AI-generated images to support a point or explain product features.

The creator indicates with a "Yes" or "No." When "Yes" is selected, an "altered or synthetic content" label is added to the video description when it is displayed on YouTube or YouTube Shorts.

Systems like YouTube's AI image labels are getting noticed by business experts immersed in AI. John “Colder ICE” Lawson, CEO of ColderICE Media and an ecommerce expert who speaks frequently on generative AI business trends, voiced his appreciation for the guidelines. He stated on his Facebook page, “While AI cloning tools are getting better, the rules are getting stricter. And they should IMHO!...always be transparent that they [images] are AI-generated. Ethical AI use is best.”

Some leeway exists for creators when determining whether certain video alterations are inconsequential. No disclosure is required for clearly unrealistic imagery, such as animation or someone riding a unicorn through a fantastical world, or for minor changes such as color adjustment or beauty filters.
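The upload-time disclosure decision described above boils down to a simple rule: any of the three prompt criteria triggers the label, unless the content falls under an exemption. The sketch below is a hypothetical helper encoding that rule; the function and parameter names are illustrative and are not part of any YouTube API.

```python
def disclosure_required(
    depicts_real_person_falsely: bool,
    alters_real_event_or_place: bool,
    realistic_scene_that_never_occurred: bool,
    clearly_unrealistic: bool = False,   # e.g., animation or fantastical imagery
    minor_adjustment_only: bool = False, # e.g., color correction or beauty filters
) -> bool:
    """Return True if the creator should answer "Yes" to the upload prompt."""
    # Clearly unrealistic content and minor edits are exempt from disclosure.
    if clearly_unrealistic or minor_adjustment_only:
        return False
    # Any of the three criteria from the upload prompt triggers the label.
    return (depicts_real_person_falsely
            or alters_real_event_or_place
            or realistic_scene_that_never_occurred)
```

Under this sketch, a color-graded vlog needs no label, while a realistic AI-generated scene that never occurred does.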

Related Article: Using YouTube Channel Analytics to Manage Customer Experiences

Designed to Support Existing & Future Policies

YouTube's new AI image labels system is designed to support YouTube's existing and future policies prohibiting excessively manipulated content. YouTube's policies aim to prevent misleading viewers with content that poses a serious risk of harm, such as the infamous Tide Pod challenge in 2018. This controversial social media challenge went viral, with teens recording videos of themselves eating a Tide detergent pod and challenging others to do the same. The videos spread across social media, creating a widespread crisis that led Facebook and YouTube to quickly announce they would remove such videos from their platforms.

But that was 2018. 

Related Article: Fighting Deepfakes With Content Credentials and C2PA 


Fighting Deepfakes in 2024

The potential for media to go viral in 2024 has increased significantly. A variety of image-generation tools, such as Midjourney and Leonardo.AI, have sparked curiosity among online users, from marketers to tech enthusiasts, to create their own AI-generated visuals. AI-generated videos can provide intriguing storytelling opportunities, while AI-generated images can attract attention and engagement from social media followers.

Related Article: 5 Bill Gates Takes on the Future of Artificial Intelligence

Deepfake Videos and Images & Misleading Viewers

Unfortunately, AI visual content also has the potential to present false imagery that misleads people about what is real. If a video depicts a famous person and the viewer is unaware it has been altered or synthetically created, it can deceive them into believing a message crafted as propaganda.

Related Article: Lured by AI? Why AI Projects Fail and How to Safeguard Your AI Strategy

A Growing Problem

Deepfakes often involve famous individuals, from celebrities to political figures. They have become more problematic since 2017, when advances in machine learning made it possible to train models on imagery so that faces from two separate sources can be seamlessly blended into a new image or video, a technique called faceswapping or photoswapping.

One example of a faceswap is a video of someone speaking whose face has been digitally replaced, frame by frame, with someone else's. That is how deepfake videos claiming a person said something they didn't are born.

Unfortunately, photoswapping can be used without the explicit consent of the people involved. The ability to train models has given deepfakes more realistic appearances, while widespread access has made photoswapping more likely to be used by hackers and bad actors. 

Related Article: Can We Fix Artificial Intelligence's Serious PR Problem?

What YouTube's AI Image Labels and Standards Mean for Protecting Viewers

YouTube's approach of relying on creators' judgment regarding deepfakes in their videos does pose a risk. By allowing self-regulation of uploaded AI-generated videos, YouTube is trusting content creators to be honest about their submissions. Some industry critics have likened this approach to asking a bank robber scoping out a local branch if they plan to rob it in the near future.

YouTube’s Countermeasures

However, YouTube has countermeasures in place on its platform, both currently active and planned for rollout in the coming months. YouTube intends to allow requests for the removal of AI-generated images or other synthetic imagery. This will provide recourse for individuals who discover they have been included in a deepfake without their permission.

Marketers & the Importance of Ensuring Content Authenticity

YouTube's approach aligns with OpenAI CEO Sam Altman's recent prediction that AI will handle 95% of creative marketing work. This means professionals at all levels, from individual influencers to marketing teams, will be working with AI and will need to ensure content authenticity when using YouTube or YouTube Shorts as part of their content strategy.

Enhancing Trust

YouTube's AI image labels are designed to enhance trust between creators and their audience. As YouTube competes with other platforms to attract influencers and content creators, creating a labeling system brings necessary focus to their platform. The label benefits creators by promoting content transparency, enabling the audience to feel comfortable viewing videos they consider trustworthy.

An Ongoing Effort

YouTube has relied on a combination of human reviewers and machine learning to monitor compliance with community guidelines and identify new patterns of guideline breaches and abuse. YouTube's moderation team will need to periodically assess how its video label guidelines align with instances where a user contested the labeling of a video upload. For example, the team could examine how frequently users contest videos that show an altered place.

The Continued Blurring

The distinction between human and synthetic content will blur further as image generators become more popular. Today, two-thirds of the world's population uses the internet, and seeking information is their main purpose for being online. People increasingly rely on social media for information, particularly breaking news, so users will need to distinguish fake from real material quickly and reliably.

Final Thoughts

YouTube's AI image labels are a positive initial step that other platforms will likely adopt in some form, but it is primarily aimed at users who create content on the platform. Solutions for educating viewers will also be crucial in helping them understand where the boundary lies between real and synthetic content.

About the Author

Pierre DeBois

Pierre DeBois is the founder and CEO of Zimana, an analytics services firm that helps organizations achieve improvements in marketing, website development, and business operations. Zimana has provided analysis services using Google Analytics, R Programming, Python, JavaScript and other technologies where data and metrics abide.
