Opinions expressed in this article are those of the sponsor. MarTech neither confirms nor disputes any of the conclusions presented below.

How to protect against and benefit from generative AI hallucinations

As AI takes center stage in marketing, it is subject to machine error – ironically, only humans can check that fallibility.

As marketers start using ChatGPT, Google’s Bard, Microsoft’s Bing Chat, Meta AI or their own large language models (LLM), they must concern themselves with “hallucinations” and how to prevent them.

IBM provides the following definition for hallucinations: “AI hallucination is a phenomenon wherein a large language model—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

“Generally, if a user makes a request of a generative AI tool, they desire an output that appropriately addresses the prompt (i.e., a correct answer to a question). However, sometimes AI algorithms produce outputs that are not based on training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. In other words, it ‘hallucinates’ the response.”

Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights, said in a CNN blog post that the problem is that LLMs are simply trained to “produce a plausible-sounding answer” to user prompts.

“So, in that sense, any plausible-sounding answer, whether accurate or factual or made up or not, is a reasonable answer, and that’s what it produces. There is no knowledge of truth there.”

He said that a better behavioral analogy than hallucinating or lying – terms that carry connotations of wrongdoing or ill intent – would be comparing these computer outputs to the way his young son told stories at age four.

“You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian added. “And he would just go on and on.”

Frequency of hallucinations

If hallucinations were “black swan” events – rarely occurring – they would be something marketers should be aware of but not necessarily pay much attention to.

However, according to studies from Vectara, chatbots fabricate details in at least 3% of interactions – and as much as 27%, despite measures taken to avoid such occurrences.

“We gave the system 10 to 20 facts and asked for a summary of those facts,” Amr Awadallah, Vectara’s chief executive and a former Google executive, said in an Investis Digital blog post.  “It is a fundamental problem that the system can still introduce errors.”

According to the researchers, hallucination rates may be higher when chatbots perform other tasks (beyond mere summarization).
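
To make the setup concrete, here is an illustrative sketch of that kind of summarization test – not Vectara’s actual evaluation harness. The idea is to hand the model a fixed list of facts, ask for a summary that adds nothing, and then count how often the output introduces details that are not in the facts. The model name and the facts below are assumptions for the example; it assumes the OpenAI Python client and an API key in the environment.

    # Illustrative sketch of a summarization hallucination check in the spirit of
    # the experiment described above -- not Vectara's actual evaluation harness.
    from openai import OpenAI

    client = OpenAI()

    FACTS = [
        "The product launched in 2021.",
        "It is sold in 12 countries.",
        "Annual revenue grew 18% last year.",
    ]

    def summarize(facts: list[str]) -> str:
        """Ask the model to summarize only the supplied facts."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works here
            messages=[{
                "role": "user",
                "content": "Summarize these facts in two sentences, adding nothing "
                           "that is not stated:\n" + "\n".join(f"- {f}" for f in facts),
            }],
            temperature=0,  # keep the test runs as repeatable as possible
        )
        return response.choices[0].message.content

    # A reviewer then flags any summary that introduces details absent from FACTS;
    # the share of flagged summaries over many runs approximates a hallucination rate.
    print(summarize(FACTS))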

What marketers should do

Despite the potential challenges posed by hallucinations, generative AI offers plenty of advantages. To reduce the possibility of hallucinations, we recommend:

  • Use generative AI only as a starting point for writing: Generative AI is a tool, not a substitute for what you do as a marketer. Use it as a starting point, then develop prompts to solve questions to help you complete your work. Make sure your content always aligns with your brand voice.
  • Cross-check LLM-generated content: Peer review and teamwork are essential (a simple automated cross-check is sketched after this list).
  • Verify sources: LLMs are designed to work with huge volumes of information, but some sources may not be credible.
  • Use LLMs tactically: Run your drafts through generative AI to look for missing information. If generative AI suggests something, check it out first – not necessarily because of the odds of a hallucination occurring but because good marketers vet their work, as mentioned above.
  • Monitor developments: Keep up with the latest developments in AI to continuously improve the quality of outputs and to be aware of new capabilities or emerging issues with hallucinations and anything else.
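
For teams that want to partially automate that cross-check, here is a minimal sketch of one way to run a draft back through a model against a list of approved facts. It assumes the OpenAI Python client and an API key in the environment; the model name, the facts and the checking prompt are illustrative, and a human reviewer still makes the final call on anything the model flags.

    # Sketch: ask a second model pass to flag claims in a draft that are not
    # supported by a list of approved facts. Assumes the OpenAI Python client
    # (pip install openai) and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    APPROVED_FACTS = [
        "The spring sale runs March 1-15.",
        "Free shipping applies to orders over $50.",
    ]

    def flag_unsupported_claims(draft: str) -> str:
        """Return the model's list of claims in `draft` not backed by APPROVED_FACTS."""
        facts = "\n".join(f"- {fact}" for fact in APPROVED_FACTS)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works here
            messages=[
                {"role": "system",
                 "content": "You are a fact-checking assistant. List every claim in "
                            "the draft that is not supported by the provided facts. "
                            "If all claims are supported, reply 'No unsupported claims.'"},
                {"role": "user", "content": f"Facts:\n{facts}\n\nDraft:\n{draft}"},
            ],
            temperature=0,  # keep the checking pass as deterministic as possible
        )
        return response.choices[0].message.content

    # A human still reviews anything the model flags before the draft ships.
    print(flag_unsupported_claims(
        "Our spring sale runs all of March with free shipping on every order."
    ))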

Benefits from hallucinations?

However, as dangerous as they can potentially be, hallucinations can have some value, according to FiscalNote’s Tim Hwang.

In a Brandtimes blog post, Hwang said: “LLMs are bad at everything we expect computers to be good at. And LLMs are good at everything we expect computers to be bad at.”

He further explained: “So using AI as a search tool isn’t really a great idea, but storytelling, creativity, aesthetics – these are all things that the technology is fundamentally really, really good at.”

Since brand identity is basically what people think about a brand, hallucinations should be considered a feature, not a bug, according to Hwang, who added that it’s possible to ask AI to hallucinate its own interface.

So, a marketer can provide the LLM with any arbitrary set of objects and ask it to evaluate things that would usually be impossible – or costly – to measure through other means, effectively prompting the LLM to hallucinate.

An example the blog post mentioned is asking the AI to assign each object a score based on how closely it aligns with the brand, then feeding that score back to the AI and asking which consumers are more likely to become lifelong customers of the brand.
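
A hedged sketch of that kind of prompt might look like the following. The brand brief, the items and the 1–10 scale are illustrative assumptions rather than anything from the blog post, and the scores the model returns are invented by design rather than measured.

    # Sketch: deliberately ask the model to "hallucinate" a brand-alignment score
    # for arbitrary objects, in the spirit of Hwang's example. The brand brief,
    # items and scale are illustrative assumptions, not measured data.
    from openai import OpenAI

    client = OpenAI()

    BRAND_BRIEF = "An outdoor apparel brand: rugged, sustainable, understated."
    ITEMS = ["a cast-iron skillet", "a neon energy drink", "a canvas field jacket"]

    prompt = (
        f"Brand: {BRAND_BRIEF}\n"
        "For each item below, give a brand-alignment score from 1 (off-brand) to 10 "
        "(strongly on-brand) and one sentence describing the kind of customer who, "
        "based on that score, might become a lifelong customer of the brand.\n"
        + "\n".join(f"- {item}" for item in ITEMS)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )

    # The output is invented, not measured -- useful as creative input, not as research.
    print(response.choices[0].message.content)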

“Hallucinations really are, in some ways, the foundational element of what we want out of these technologies,” Hwang said. “I think rather than rejecting them, rather than fearing them, I think it’s manipulating these hallucinations that will create the biggest benefit for people in the ad and marketing space.”

Emulating consumer perspectives

A recent application of hallucinations is exemplified by the “Insights Machine,” a platform that empowers brands to create AI personas based on detailed target-audience demographics. These AI personas interact as if they were genuine individuals, offering diverse responses and viewpoints.

While AI personas may occasionally deliver unexpected or hallucinatory responses, they primarily serve as catalysts for creativity and inspiration among marketers. The responsibility for interpreting and utilizing these responses rests with humans, underscoring the foundational role of hallucinations in these transformative technologies.
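
As a rough illustration of the general pattern – not the Insights Machine’s actual implementation – a consumer persona can be emulated with little more than a system prompt. The demographic details below are invented for the example, and it again assumes the OpenAI Python client.

    # Sketch: emulate a consumer persona with a system prompt and interview it.
    # The demographic details are invented for illustration; this is a general
    # pattern, not the Insights Machine's implementation.
    from openai import OpenAI

    client = OpenAI()

    PERSONA = (
        "You are 'Dana', a 34-year-old urban renter who shops online weekly, "
        "cares about sustainability, and is skeptical of loyalty programs. "
        "Answer as Dana would, in the first person."
    )

    def ask_persona(question: str) -> str:
        """Pose a research question to the simulated persona."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works here
            messages=[
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # Responses are inspiration for marketers, not a substitute for real consumer research.
    print(ask_persona("What would make you try a new grocery delivery service?"))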

As AI takes center stage in marketing, it is subject to machine error. That fallibility can only be checked by humans—a perpetual irony in the AI marketing age.

Pini Yakuel, co-founder and CEO of Optimove, wrote this article.


About the author

Optimove
Optimove is the first Customer-Led Marketing Platform ensuring marketing always starts with the customer instead of a campaign or product. Customer-led marketing has been proven to deliver brands an average increase of 33% in customer lifetime value. It is the only customer-led marketing platform powered by 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action. In Gartner's 2023 Magic Quadrant for Multichannel Marketing Hubs, Optimove was highest in execution/furthest in vision among Challengers. In Gartner's companion report, it ranked #1 in Multichannel Marketing Journey Orchestration.
