
Is Your Generative AI Making Things Up? 4 Ways To Keep It Honest

Generative AI hallucinations are not likely to go away completely, but you can lessen their impact on your business. [Image/Malte Mueller]

Generative AI sometimes returns incorrect information, known colloquially as “AI hallucinations.” Here’s what you can do to protect your business and customers. 

Generative AI chatbots are helping change the business landscape. But they also have a problem: They frequently present inaccurate information as if it’s correct. Known as “AI hallucinations,” these mistakes occur up to 20% of the time.

“We know [current generative AI] has a tendency to not always give accurate answers, but it gives the answers incredibly confidently,” said Kathy Baxter, principal architect in Salesforce’s ethical AI practice. “So it can be difficult for individuals to know if they can trust the answers generative AI is giving them.” 

You might hear those in the computer science community call these inaccuracies confabulations. Why? Because they believe the psychological phenomenon of accidentally replacing a gap in your memory with a false story is a more accurate metaphor for generative AI’s habit of making mistakes. Regardless of how you refer to these AI blunders, if you’re using AI at work, you need to be aware of them and have a mitigation plan in place. 

The big trend

People have gotten excited (and maybe a little frightened, especially where work is concerned) about generative AI and large language models (LLMs). And with good reason. LLMs, usually in the form of a chatbot, can help you write better emails and marketing reports, prepare sales projections, and create quick customer service replies, among many other things.

In these business contexts, AI hallucinations may lead to inaccurate analytics, negative biases, and trust-eroding errors sent directly to your employees or customers. 

“[This] is a trust problem,” said Claire Cheng, senior director, data science and engineering, at Salesforce. “We want AI to help businesses rather than make the wrong suggestions, recommendations, or actions to negatively impact businesses.”

It’s complicated

Some in the industry see hallucinations more positively. Sam Altman, CEO of ChatGPT creator OpenAI, told Salesforce CEO Marc Benioff that the very ability to produce hallucinations shows how AI can innovate.

“The fact that these AI systems can come up with new ideas, can be creative, that’s a lot of the power,” Altman said. “You want them to be creative when you want, and factual when you want, but if you do the naive thing and say, ‘Never say anything you’re not 100% sure about’ — you can get a model to do that, but it won’t have the magic people like so much.”

For now, it appears we can’t completely solve the problem of generative AI hallucinations without eradicating its “magic.” (In fact, some AI tech leaders predict hallucinations will never really go away.) So what’s a well-meaning business to do? If you’re adding LLMs into your daily work, here are four ways you can mitigate generative AI hallucinations.

1. Use a trusted LLM to help reduce generative AI hallucinations

For starters, make every effort to ensure your generative AI platforms are built on a trusted LLM. In other words, your LLM needs to work from data that’s as free of bias and toxicity as possible.

A generic LLM such as ChatGPT can be useful for less-sensitive tasks such as creating article ideas or drafting a generic email, but any information you put into these systems isn’t necessarily protected.

“Many people are starting to look into domain-specific models instead of using generic large language models,” Cheng said. “You want to look at the trusted source of truth rather than trust the model to give you the response. Do not expect the LLM to be your source of truth because it’s not your knowledge base.”

When you pull information from your own knowledge base, you’ll have relevant answers and information at your fingertips more efficiently, and there’s less risk that the AI system will guess when it doesn’t know an answer.

“Business leaders really need to think, ‘What are the sources of truth in my organization?’” said Khoa Le, vice president of Service Cloud Einstein and bots at Salesforce. “They might be information about customers or products. They might be knowledge bases that live in Salesforce or elsewhere. Knowing where and having good hygiene around keeping these sources of truth up to date will be super critical.”
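
To make that concrete, here’s a minimal sketch of the grounding pattern Cheng and Le describe: retrieve the answer from your own source of truth, hand it to the model as explicit context, and refuse to answer when nothing is found. The `call_llm` placeholder and the toy knowledge base are assumptions for illustration, not a Salesforce API.

```python
# Minimal grounding sketch: answer only from your own source of truth,
# and pass that context to the model explicitly.

def call_llm(prompt: str) -> str:
    """Placeholder; wire this to your LLM provider of choice."""
    raise NotImplementedError

# Toy knowledge base. In practice this would be your CRM data,
# help-center articles, or other vetted internal documents.
KNOWLEDGE_BASE = {
    "returns": "Customers may return unworn shoes within 30 days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """Naive keyword lookup; real systems use vector search."""
    return "\n".join(
        text for topic, text in KNOWLEDGE_BASE.items()
        if topic in question.lower()
    )

def grounded_answer(question: str) -> str:
    context = retrieve(question)
    if not context:
        # Don't let the model guess when the source of truth is silent.
        return "I don't have that information."
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```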

2. Write more-specific AI prompts

Great generative AI outputs also start with great prompts, and you can learn to write better prompts by following a few easy tips. Avoid closed-ended questions that produce yes-or-no answers, which limit the AI’s ability to provide detailed information, and ask follow-up questions to prompt the LLM to get more specific or expand on its answers.

You’ll also want to include as many details as possible to prompt your tool to give you the best response. As a guide, take a look at the prompt below, before and after adding specifics.

  • Before: Write a marketing campaign for sneakers.
  • After: Write a marketing campaign for a new online sneaker store called Shoe Dazzle selling to Midwestern women between the ages of 30 and 45. Specify that the shoes are comfortable and colorful. The shoes are priced between $75 and $95 and can be used for various activities such as power walking, working out in a gym, and training for a marathon.
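
If you build prompts programmatically, one way to force that level of detail is a fill-in-the-blank template. A minimal sketch, with illustrative field names (not a prescribed format):

```python
# Reusable prompt template: bake the specifics (audience, product
# details, price range) into required fields instead of relying on a
# vague one-liner. Field names here are illustrative assumptions.

CAMPAIGN_PROMPT = (
    "Write a marketing campaign for {store}, a new online sneaker store "
    "selling to {audience}. Specify that the shoes are {qualities}. "
    "The shoes are priced between {price_low} and {price_high} and can be "
    "used for various activities such as {activities}."
)

prompt = CAMPAIGN_PROMPT.format(
    store="Shoe Dazzle",
    audience="Midwestern women between the ages of 30 and 45",
    qualities="comfortable and colorful",
    price_low="$75",
    price_high="$95",
    activities="power walking, working out in a gym, and training for a marathon",
)
print(prompt)  # send the filled-in prompt to your LLM of choice
```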

3. Tell the LLM to be honest

Another game-changing prompt tip is to directly instruct the large language model to be honest.

“If you’re asking a virtual agent a question, in your prompt you can say, ‘If you do not know the answer, just say you do not know,’” Cheng said. 

For example, say you want to create a report that compares sales data from five large pharmaceutical companies. This information will likely come from public annual reports, but it’s possible the LLM won’t be able to access the most current data. At the end of your prompt, add, “Do not answer if you can’t find the 2023 data” so the LLM knows not to make something up if that data isn’t available.
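
In code, this can be as simple as appending the honesty instruction to every prompt before it’s sent. A minimal sketch, assuming a placeholder `call_llm` function standing in for whatever model API you use:

```python
def call_llm(prompt: str) -> str:
    """Placeholder; wire this to your LLM provider of choice."""
    raise NotImplementedError

# Cheng's tip: append an explicit honesty instruction to the prompt.
HONESTY_SUFFIX = (
    "\n\nIf you do not know the answer, just say you do not know. "
    "Do not answer if you can't find the 2023 data."
)

report_request = (
    "Compare 2023 sales data from five large pharmaceutical companies, "
    "using figures from their public annual reports."
)
answer = call_llm(report_request + HONESTY_SUFFIX)
```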

You can also make the AI “show its work,” explaining how it arrived at its answer, through techniques like chain-of-thought or tree-of-thought prompting. Research has shown that these techniques not only improve transparency and trust, but also increase the AI’s ability to generate the correct response.
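
Here’s a sketch of what a chain-of-thought instruction can look like in a prompt; the template wording is illustrative, and `call_llm` is again a stand-in for your model API:

```python
def call_llm(prompt: str) -> str:
    """Placeholder; wire this to your LLM provider of choice."""
    raise NotImplementedError

# Chain-of-thought prompt: ask for visible reasoning steps before a
# clearly marked final answer, so a human can audit the work.
COT_TEMPLATE = (
    "{question}\n\n"
    "Think step by step: list the facts you rely on and how you combine "
    "them, then give the final answer on its own line prefixed 'Answer:'."
)

response = call_llm(
    COT_TEMPLATE.format(question="Which region had the highest Q4 sales growth?")
)
# Review the reasoning steps before trusting the 'Answer:' line.
```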

4. Lessen the impact on customers

Le offers some things to consider to protect your customers’ data and business dealings. 

  • Be transparent. If you’re using a chatbot or virtual agent backed by generative AI, don’t pass the interface off as if customers are talking to a human. Instead, disclose the use of generative AI on your site. “It’s so important to be clear where this information comes from and what information you’re training it on,” Le said. “Don’t try to trick the customer.”
  • Follow local laws and regulations. Some municipalities require you to allow end users to opt in to this technology; even if yours doesn’t, you may want to offer an opt-in. 
  • Protect yourself from legal issues. Generative AI technology is new and changing rapidly. Work with your legal advisors to understand the latest issues and follow local regulations.
  • Make sure safeguards are in place. When selecting a model provider, confirm they have safeguards such as toxicity and bias detection, sensitive data masking, and prompt injection attack defenses like Salesforce’s Einstein Trust Layer; a simple masking sketch follows this list.
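
To illustrate one of those safeguards, here’s a toy sketch of sensitive data masking: scrubbing obvious personal data from text before it reaches a model. The regex patterns are deliberately simple and only illustrative; production systems such as the Einstein Trust Layer do far more.

```python
import re

# Toy sensitive-data masking: replace obvious PII with labeled
# placeholders before text is sent to a model. Illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach Jane at jane@example.com or 312-555-0147."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```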

Generative AI hallucinations are a concern, but not necessarily a deal breaker. Design and work with this new technology, but keep your eyes wide open to the potential for mistakes. When you’ve used your sources of truth and questioned the work, you can go into your business dealings with more confidence.

Get started with an LLM today

The Einstein 1 Platform gives you the tools you need to easily build your own LLM-powered applications. Work with your own model, customize an open-source model, or use an existing model through APIs. It’s all possible with Einstein 1.

Ari Bendersky, Contributing Editor

Ari Bendersky is a Chicago-based lifestyle journalist who has contributed to a number of leading publications, including the New York Times, The Wall Street Journal magazine, Men's Journal, RollingStone.com, and many more. He has written for brands ranging from Ace Hardware to Grassroots Cannabis and is a lead contributor to the Salesforce 360 Blog. He is also the co-host of the Overserved podcast, featuring long-form conversations with food and beverage personalities.
