News Analysis

Exploring Air Canada's AI Chatbot Dilemma

6 minute read
Michelle Hawley
Discover the implications of Air Canada's AI chatbot mishap and learn essential strategies for deploying AI in customer experience the right way.

The Gist

  • AI hallucinations. AI chatbots can hallucinate 3% to 27% of the time, leading to incorrect or misleading responses.
  • Air Canada's legal battle. A Canadian tribunal ruled Air Canada must honor a discount promised by its AI chatbot, highlighting the legal implications of AI errors.
  • Deploying AI wisely. Companies should implement guardrails, fine-tuning, action models, and hallucination prevention to minimize risks in AI-driven customer experiences.

We all know AI can lie. But is it now costing businesses money? 

Hallucinations are responses generated by artificial intelligence that are incorrect, misleading or downright nonsensical — though the bots present them as fact. And according to research, even in situations designed to prevent it from happening, AI-powered chatbots hallucinate anywhere from 3% to 27% of the time. 

Yet despite this flaw, more and more customer experience teams are deploying AI to bolster their efforts and improve service. As of 2023, 79% of organizations polled said they’re using artificial intelligence in their CX toolset in some capacity, according to CMSWire’s State of Digital Customer Experience report. 


Air Canada was one such business, having set up an AI chatbot on its website to assist customers with questions and concerns. But now, after a landmark case that forced the airline to honor a discount its own policy didn’t allow, the company seems to be rethinking its strategy. 

Air Canada Forced to Honor AI Chatbot’s Refund Promise 

Vancouver resident Jake Moffatt used Air Canada’s AI chatbot to see if the airline offered bereavement fares following the death of his grandmother. The bot told Moffatt that the company did offer a discount, which he could claim up to 90 days after flying. 

Moffatt booked the flight for a sum of $1,200. Yet when he later requested his promised discount, airline support staff told him the chatbot’s responses were wrong and nonbinding. Before a Canadian tribunal, Air Canada argued that the AI chatbot was a “separate legal entity” from the company, and that the airline couldn’t be held responsible for what it told customers. 

Tribunal member Christopher Rivers, however, disagreed. He determined that the airline must follow through with the chatbot’s promised discount. 

The airline, Rivers explained, committed negligent misrepresentation, writing, “It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.” 

Air Canada did not respond to a request for comment. However, as of April 2024, the bot is no longer available on the airline’s website. 



What Went Wrong With Air Canada’s AI Chatbot? 

When companies expose generative AI directly to end users at runtime, there’s an inherent risk that the model will respond with factually incorrect information, according to Jay Wolcott, CEO of Knowbl, a conversational AI platform. 

“What’s actually happening is these Large Language Models (that power generative AI) have been trained on billions of parameters of data (i.e. the entire internet) and how it works is it mathematically predicts the most likely token (i.e. word) one after another. So in reality, it has no idea if what it’s saying is true or false.”
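
To make that concrete, here is a minimal, purely illustrative sketch of greedy next-token prediction. The tiny probability table and the function name are invented for this example; a production model scores tens of thousands of candidate tokens at every step. The point is the same, though: the choice is statistical, not fact-checked.

```python
# Minimal, hypothetical sketch of greedy next-token prediction.
# The probability table is invented for illustration only.

# Toy conditional probabilities for the next word given the text so far.
NEXT_TOKEN_PROBS = {
    "Bereavement fares can be claimed": {
        "within": 0.55,   # most likely continuation in this toy model
        "before": 0.30,
        "never": 0.15,
    },
}

def predict_next_token(context: str) -> str:
    """Return the single most likely next token.

    Nothing here consults airline policy or any source of truth;
    the choice is purely statistical, which is why a fluent answer
    can still be factually wrong.
    """
    probs = NEXT_TOKEN_PROBS[context]
    return max(probs, key=probs.get)

if __name__ == "__main__":
    print(predict_next_token("Bereavement fares can be claimed"))  # -> "within"
```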

Companies have put guardrails on these generative AI experiences, he explained — basically a list of rules on what the chatbot can or cannot say. But these guardrails are not exhaustive enough to catch all edge cases, and there is no guarantee the bot won’t say something off-brand or offensive.  
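
As a rough illustration of what such a rule list can look like, the sketch below (with hypothetical patterns invented for this article) screens a generated reply before it reaches a customer. Any phrasing the rules don’t anticipate passes through untouched, which is exactly the edge-case gap described above.

```python
import re

# Hypothetical guardrail rules: phrasings the bot must never assert.
BLOCKED_PATTERNS = [
    r"\brefund\b.*\bguaranteed\b",
    r"\bapply\b.*\bafter\b.*\bflight\b",   # e.g., no post-flight fare claims
]

def passes_guardrails(reply: str) -> bool:
    """Return False if the reply matches any blocked pattern.

    Replies not covered by the rule list pass through unchecked,
    which is why guardrails alone cannot eliminate hallucinations.
    """
    return not any(re.search(p, reply, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(passes_guardrails("You can apply for the bereavement fare after your flight."))  # False
print(passes_guardrails("Bereavement fares must be requested before travel."))         # True
```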

“In the case of Air Canada’s AI chatbot, the generative AI ‘hallucinated’ and told a customer they could be issued a refund when in reality they wouldn’t have been,” Wolcott said. “More and more enterprises are going to get concerned with the repercussions of using generative AI in customer-facing use cases because of news like this. One bad customer experience in today’s social media age could result in millions, maybe even billions of dollars in market cap loss.”

The Right Way to Deploy AI in Customer Experience

The Air Canada incident underscores the critical limitations in current AI technology for CX, said Rasmus Hauch, CTO at Boost.ai — particularly in understanding complex, nuanced policies and accurately relaying company-specific information. 

Still, he explained, there are a few critical tactics companies can follow to avoid problems when they deploy these kinds of CX solutions: 

  • Guardrails: Companies must set strict boundaries and rules within which AI operates. “For CX solutions,” said Hauch, “guardrails help in aligning the AI's responses with the company's policies, ethical guidelines, and legal requirements, minimizing the risk of delivering incorrect information or advice to customers.”
  • Fine-Tuning: AI systems require continuous fine-tuning after training to adapt to new data, customer interactions and changing company policies. This process, explained Hauch, involves adjusting the AI’s model based on feedback and real-world interaction outcomes to improve accuracy, relevance and effectiveness of its responses. 
  • Action Models: These sophisticated frameworks within AI systems guide decision-making processes and determine the most appropriate actions in various customer interaction scenarios, said Hauch. “Developing and refining these models is critical for ensuring AI can handle a wide range of customer queries and situations effectively.” 
  • Hallucination Detection and Prevention: Implementing these mechanisms is essential to maintain the credibility and reliability of AI-powered CX solutions, explained Hauch. “These mechanisms involve monitoring outputs for accuracy and intervening when the system generates responses that do not align with factual information or established knowledge.”
  • Personally Identifiable Information (PII) Scanning: Protecting PII is paramount in customer interactions, said Hauch. AI systems must be equipped with robust scanning capabilities that identify and mask sensitive customer data during interactions, a safeguard essential for maintaining data security and privacy in AI-driven CX platforms (see the sketch after this list). 
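
On the PII point, here is a minimal sketch of what scanning and masking can look like. The regular expressions and labels are hypothetical stand-ins; production systems typically combine many more patterns with machine-learning entity detection.

```python
import re

# Hypothetical patterns for two common PII types, for illustration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with labeled placeholders before the text
    is logged or passed along to a language model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("My card 4111 1111 1111 1111 is linked to jake@example.com"))
# -> "My card [CARD] is linked to [EMAIL]"
```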

Overall, Hauch recommended: “Place customer trust at the heart of strategy. Consumers want to do business with brands that understand their needs or concerns, which companies can achieve in part by prioritizing and communicating openly about regulatory compliance and data protection.”


Embracing Human Oversight in AI for CX 

The Air Canada case demonstrates the importance of consistently keeping a human in the loop of AI operations, said Hauch. 

“This outcome though,” he added, “where a chatbot provided a response out of step with documented policy, doesn’t necessarily mean it was a failure exclusive to the technology.”

Organizations should be prepared to address a false response in any environment quickly and transparently, he explained — especially in customer-facing settings. 

About the Author

Michelle Hawley

Michelle Hawley is an experienced journalist who specializes in reporting on the impact of technology on society. As a senior editor at Simpler Media Group and a reporter for CMSWire and Reworked, she provides in-depth coverage of topics including employee experience, leadership, customer experience and marketing. With an MFA in creative writing and a background in inbound marketing, she brings a distinctive perspective to those subjects. Michelle previously contributed to publications like The Press Enterprise and The Ladders. She currently resides in Pennsylvania with her two dogs.