How marketers can mitigate bias in generative AI

Humans have biases, and because genAI models are trained on data created by humans, the models are likely to be as biased as that data. How can marketers mitigate the risks?


This article was co-authored by Nicole Greene.

Technology providers such as Amazon, Google, Meta and Microsoft have long sought to address concerns about the effects of bias in datasets used to train AI systems. Tools like Google’s Fairness Indicators and Amazon’s SageMaker Clarify help data scientists detect and mitigate harmful bias in the datasets and models they build with machine learning. But the sudden, rapid adoption of the latest wave of AI tools that use massive large language models (LLMs) to generate text and artwork for marketers presents a new class of challenges.
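At their core, such tools compare outcomes across groups in the data. As a rough illustration of the kind of check they automate (this is not their actual API; the data and column names are hypothetical), a data scientist might start by comparing outcome rates across segments in the training data:

```python
# Toy stand-in for the kind of check tools like Fairness Indicators or
# SageMaker Clarify automate: compare outcome rates across groups.
import pandas as pd

# Hypothetical training data: who responded to past campaigns.
df = pd.DataFrame({
    "segment":   ["A", "A", "A", "B", "B", "B", "B", "A"],
    "responded": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Positive-outcome rate per segment; a large gap can signal that a model
# trained on this data will inherit a skew toward one group.
rates = df.groupby("segment")["responded"].mean()
print(rates)
print("rate gap:", rates.max() - rates.min())
```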

Generative AI (genAI) is an incredible breakthrough, but it’s not human, and it’s not going to do exactly what people think it should do. Its models have bias just as humans do. The rapid commercialization of genAI models and applications has moved sources of bias beyond the scope of the tools and techniques currently available to data science departments. Mitigation efforts must go beyond the application of technology to include new operating models, frameworks and employee engagement.

Marketers are often the most visible adopters of genAI

As the leading and most visible adopters of genAI in most organizations — and the people most responsible for brand perception — marketers find themselves on the front lines of AI bias mitigation. These new challenges often require sensitive human oversight to detect and address bias. Organizations must develop best practices across customer-facing functions, data and analytics teams, and legal to avoid damage to their brands and organizations.

Marketing’s most basic function is to use tools to find and deliver messages to the people most likely to benefit from the business’s products and services. Adtech and martech include predictive, optimization-driven technology designed to determine which individuals are most likely to respond and which messages are most likely to move them. This includes decisions about how to segment and target customers and how to manage customer loyalty. Because the technology relies on historical data and human judgment, it risks cementing and amplifying biases hidden within an organization, as well as in commercial models over which marketers have no control.
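To make the mechanism concrete, here is a deliberately simplified propensity model of the kind that drives targeting decisions. The data is synthetic and the features hypothetical; the point is that whatever skew exists in the historical responses, fair or not, gets learned and applied to every future decision:

```python
# Minimal propensity-scoring sketch (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
past_purchases = rng.poisson(2, n)   # behavioral feature
segment = rng.integers(0, 2, n)      # may proxy for a protected trait

# Historical responses already skewed against segment 1.
p = np.clip(0.2 + 0.05 * past_purchases - 0.1 * segment, 0, 1)
responded = (rng.random(n) < p).astype(int)

X = np.column_stack([past_purchases, segment])
model = LogisticRegression().fit(X, responded)

# The negative coefficient on `segment` shows the historical skew being
# cemented into every future targeting score the model produces.
print(dict(zip(["past_purchases", "segment"], model.coef_[0].round(2))))
```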

Allocative and representational harm

When algorithms inadvertently disfavor customer segments with disproportionate gender, ethnic or racial characteristics due to historical socioeconomic factors inhibiting participation, the result is often described as “allocative harm.” While high-impact decisions, like loan approvals, have received the most attention, everyday marketing decisions such as who receives a special offer, invitation or ad exposure present a more pervasive source of harm.
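A simple way to surface this kind of allocative skew in everyday decisions is to compare selection rates across segments, borrowing the “four-fifths rule” heuristic from employment-selection audits. A minimal sketch, with hypothetical offer data:

```python
# Hypothetical offer data: did each customer receive the special offer?
import pandas as pd

offers = pd.DataFrame({
    "segment":  ["A"] * 500 + ["B"] * 500,
    "selected": [1] * 300 + [0] * 200 + [1] * 180 + [0] * 320,
})

rates = offers.groupby("segment")["selected"].mean()
impact_ratio = rates / rates.max()
print(impact_ratio)

# Segments selected at under 80% of the top segment's rate warrant a
# human review of the targeting logic behind the offer.
print("flagged:", impact_ratio[impact_ratio < 0.8].index.tolist())
```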

Mitigating allocative harm has been the aim of many data science tools and practices. GenAI, however, has raised concerns about a different type of harm. “Representational harm” refers to stereotypical associations that appear in recommendations, search results, images, speech and text. Text and imagery produced by genAI may include depictions or descriptions that reinforce stereotypical associations of genders or ethnic groups with certain jobs, activities or characteristics. 

Some researchers have coined the phrase “stochastic parrots” to express the idea that LLMs might mindlessly replicate and amplify the societal biases present in their training data, much like parrots mimicking words and phrases they were exposed to.

Of course, humans are also known to reflect unconscious biases in the content they produce. It’s not hard to come up with examples where marketing blunders produced representational harms that drew immediate backlash. Fortunately, such flagrant mishaps are relatively rare and most agencies and marketing teams have the judgment and operational maturity to detect them before they cause harm. 

GenAI, however, raises the stakes in two ways. 

First, the use of genAI in content production for personalized experiences multiplies the opportunities for this type of gaffe to escape review and detection. This is due to both the surge in new content creation and the many combinations of messaging and images that could be presented to a consumer. Preventing representational bias in personalized content and chatbot dialogs requires scaling up active oversight and testing to catch the unanticipated situations that unpredictable AI behaviors create.
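One way to scale that oversight is to put an automated screen in front of publication, so flagged variants are routed to a human reviewer instead of shipping. The sketch below uses a naive keyword watchlist purely for illustration; in practice the flag function might be a moderation API or a trained classifier:

```python
# Illustrative oversight gate for generated content variants.
FLAG_TERMS = {"housewife", "manpower", "chairman"}  # hypothetical watchlist

def needs_human_review(copy: str) -> bool:
    return any(term in copy.lower() for term in FLAG_TERMS)

def publish(copy: str) -> None:
    print("PUBLISH:", copy)

def send_to_reviewer(copy: str) -> None:
    print("REVIEW :", copy)

variants = [
    "Meet the chairman of savings: our new loyalty tier.",
    "Fresh deals picked for you this week.",
]
for v in variants:
    (send_to_reviewer if needs_human_review(v) else publish)(v)
```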

Second, while flagrant mistakes get the most attention, subtle representational harms are more common and difficult to eliminate. Taken individually, they may appear innocuous, but they produce a cumulative effect of negative associations and blind spots. For example, if an AI writing assistant employed by a CPG brand persistently refers to customers as female based on the copy samples it’s been given, its output may reinforce a “housewife” stereotype and build a biased brand association over time. 
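That kind of cumulative drift can be measured. A toy audit, with a hypothetical word list and corpus (a production version would use a proper NLP pipeline), simply counts gendered references across generated copy over time:

```python
# Toy skew audit over a corpus of AI-generated copy (all hypothetical).
import re
from collections import Counter

FEMALE_TERMS = {"she", "her", "hers", "mom", "moms", "housewife"}
MALE_TERMS = {"he", "him", "his", "dad", "dads"}

generated_copy = [
    "She will love how fast dinner comes together.",
    "Busy moms trust our wipes.",
    "Give her kitchen an upgrade.",
]

counts = Counter()
for text in generated_copy:
    for token in re.findall(r"[a-z]+", text.lower()):
        if token in FEMALE_TERMS:
            counts["female"] += 1
        elif token in MALE_TERMS:
            counts["male"] += 1

# A persistently one-sided count flags a creeping stereotype long before
# any single piece of copy looks objectionable on its own.
print(counts)
```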


Addressing harms in genAI

Subtle representational bias requires deeper levels of skill, contextual knowledge, and diversity to recognize and eliminate. The first step is acknowledging the need to incorporate oversight into an organization’s regular operations. Consider taking these steps:

  • Address the risk. Bias infects genAI through its training data, human reinforcement and everyday usage. Internal and agency adoptions of genAI for content operations should be prefaced by targeted education, clarification of accountability, and a plan for regular bias audits and tests (a minimal audit sketch follows this list).
  • Formalize principles. Align all stakeholders on principles of diversity and inclusion that apply to the specific hazards of bias in genAI. Start with the organization’s stated principles and policies and build them into bias audits. Set fairness constraints during training and involve a diverse panel of human reviewers to catch biased content. Clear guidelines and ongoing accountability are crucial for ensuring ethical AI-generated content.
  • Account for context. Cultural relevance and disruptive events change perception in ways genAI is not trained to recognize, and LLMs can lag behind impactful events and shifts in societal perception. Marketing leaders can advise communications and HR on enhancing diversity, equity and inclusion training programs to cover AI-related topics, preparing teams to ask the right questions about existing practices and adoption plans. They can also ensure that test data includes examples that could potentially trigger bias.
  • Collaborate vigorously. Ensure that marketing personnel work closely with data specialists. Curate diverse and representative datasets using both data science tools and human feedback at all stages of model development and deployment, especially as fine-tuning of foundation models becomes more commonplace. As marketers consider AI-driven changes to staffing and training, prioritize scaling up the review and feedback activities required for bias mitigation.
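One way to operationalize the audit step above is a counterfactual prompt test: run prompt pairs that differ only in a demographic attribute through the genAI system and compare the outputs. The sketch below is illustrative only; the template, test axes and `generate` function are hypothetical placeholders for whatever model or vendor API an organization actually uses.

```python
# Hypothetical recurring bias audit: counterfactual prompt pairs that
# differ only in one demographic attribute, whose outputs are compared
# for differences in tone, claims or offers.
TEMPLATE = "Write a 20-word ad for a premium credit card aimed at a {who} professional."
TEST_AXES = [("female", "male"), ("younger", "older")]  # hypothetical axes

def generate(prompt: str) -> str:
    # Placeholder: plug in the organization's model or vendor API here.
    raise NotImplementedError

def run_audit() -> None:
    for a, b in TEST_AXES:
        out_a = generate(TEMPLATE.format(who=a))
        out_b = generate(TEMPLATE.format(who=b))
        # Log both outputs for the review panel; automated diffing can
        # triage which pairs need human eyes first.
        print(f"--- {a} vs. {b} ---\n{out_a}\n{out_b}\n")
```

Run on a schedule and after every model or prompt change, the flagged pairs give the diverse review panel concrete material to evaluate rather than a blank mandate to look for bias.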


If marketing leaders follow these steps when setting internal genAI policies, they will go a long way toward protecting their brands, which can pay dividends down the line. Even the major players in the space are still working to address bias within genAI, and organizations that skip these steps risk major blind spots in their genAI-led projects.





About the author

Andrew Frank
Contributor
Andrew Frank is VP Distinguished Analyst with Gartner for Marketing Leaders Practice. He specializes in best practices for data-driven marketing, including how organizations can use data to drive sales, loyalty, innovation, brand value and other business goals. He also focuses on emerging marketing technology and trends, including marketing applications of artificial intelligence (AI) and machine learning, algorithmic marketing, and marketing in emerging environments such as metaverse and Web3. Frank also specializes in advertising technology and business trends. His research focuses on new opportunities in digital advertising leveraging mobile, social, video, and advanced TV platforms and channels, and utilizing advanced targeting, metrics, interactive design, and real-time ad operations.
