Editorial

Trust in AI: Combining AI & the Human Experience

Trust profoundly impacts the human experience in numerous ways, shaping how we interact with businesses, our loyalty and our willingness to engage in transactions.

The Gist

  • Trust drives progress. Trust in AI is pivotal, impacting brand loyalty, customer engagement and the human experience.
  • Ethical AI is essential. Responsible AI development is crucial for ensuring safety, fairness and transparency in technology.
  • Guardrails guide AI. Setting ethical frameworks and standards is vital for AI's safe and fair application in society.

Everyone is banking on AI generating exponential value creation opportunities. Make no mistake — we are in a Big Tech-style "Cold War," an arms race to build artificial general intelligence. And yes, generating 10x, 100x, even 1000x profits along the way (just look at Nvidia, the current darling of Wall Street, nearing a $2 trillion valuation from nowhere).

Let's take a look at some of the issues around trust in AI.

In an era where artificial intelligence (AI) systems are becoming increasingly interwoven in our daily lives and embedded into our life journeys, the emphasis on responsible AI, underpinned by trust in AI, has never been more crucial. In my book "Genesis," I call this a fork in the future of humanity (the human experience) in the age of AI. It's a pivotal moment (the fork), where we hurtle toward alternate realities, either utopian or dystopian. I, for one, believe we need to be long, not short, on humanity and AI (humanity and AI, not humanity or AI) to create a sustained future, together.

If we rely on first principles (which we must in these uncertain times, via systems and design thinking), they point us toward an alarming and accelerating erosion of trust equity in our world — the very element (trust) that ignited humanity's value-creation potential: from cavemen hunting together in packs, to the bonds forged in small families huddling together, to the formation of towns and eventually cities and nations — society as we know it.

Trust profoundly impacts the human experience in numerous ways, shaping how we interact with businesses, our loyalty and our willingness to engage in transactions.


1. Brand Loyalty

Trust is a key factor in building brand loyalty. When customers trust a brand, they are more likely to make repeat purchases, remain loyal even in the face of competition, and recommend the brand to others. Trust in a brand often stems from consistent quality, reliable service and the feeling that the company acts in the customers' best interests.

Related Article: 5 Ways to Increase Customer Loyalty

2. Customer (and Employee) Engagement

Trust encourages customers and employees alike to engage more deeply with a brand. This engagement can take many forms, such as participating in loyalty programs, contributing to community forums, or engaging with the brand on social media. Trusting relationships enable more meaningful human interactions, as customers and employees feel safe sharing their preferences and feedback, knowing it will be used to enhance the human experience. The impact of trust is especially pronounced for the employee experience (the EX part of HX = CX + EX).

Simply put, we’ve neglected EX in favor of CX for the past two decades — as was the zeitgeist of most major digital transformations since the early 2000s. We focused on building trust with customers and clients from the outside in, forgetting that what drew them to the brand (its products and services) was a shared, common purpose emanating from each and every employee. It is ironic that distrust within, i.e., a degraded employee experience, eventually breaks the customer experience.

Related Article: Unifying Customer & Employee Engagement

3. Word-of-Mouth Recommendations

Customers and employees who trust a brand are more likely to recommend it to friends and family, providing invaluable word-of-mouth marketing and increasing the "virality" of content. Such recommendations are often seen as more credible and persuasive than traditional advertising, because they come from a trusted source — us! This also points to a trust imbalance that has built up over the years: customers trust their peers more, and the brands pushing their own products and services less. Consumers were simply inundated with incessant marketing messages from big brands. This method of "push" marketing was popularized back in the '80s, and we poor consumers had to grapple with ever more channels (now it’s omnichannel) where brands directed their "push" messages (advertising?) at us, leaving us to tune out the white noise ourselves.

Unfortunately, push marketing has also sown distrust and misinformation, snowballing into the trust gap we see and experience today. Trust erosion has been exacerbated by generative AI becoming mainstream in 2023 — it has never been easier to create fake, contentious, seditious and even malicious content.

Related Article: Can We Fix Artificial Intelligence's Serious PR Problem?

4. Consumer Confidence in Purchasing Decisions

Trust reduces the perceived risk associated with purchasing decisions, especially for new or expensive products. When customers trust a brand, they feel more confident that the product or service will meet their expectations, making them more likely to proceed with a purchase.


Related Article: Building a Gold Standard for Consumer Trust

5. Willingness to Share Personal Information

In an era where personalization is key to the customer experience, trust enables companies to collect personal data needed to tailor experiences. Customers are more likely to share personal information with brands they trust, believing that their data will be used responsibly and will enhance their shopping experience.

Related Article: How AI and Data Analytics Drive Personalization Strategies

6. Crisis Management and Reputation

Trust plays a crucial role in how customers and employees perceive a brand's handling of crises or controversies. Brands that have built up a reservoir of goodwill through trustworthy behavior are more likely to be given the benefit of the doubt by customers during tough times. They are also more likely to recover more quickly from any reputational damage.

We saw this at play with the OpenAI CEO’s ouster in late 2023. The whole saga unfolded in just a few days, and top executives and employees demanded that he (Sam Altman) be reinstated. That is a strong sense of common, shared purpose. It also exemplifies a high-trust environment, and we can see very clearly how that positively impacts product and service quality. The ChatGPT story has been told many times, but my take is this: its powerful purpose is democratizing generative AI (as Jensen Huang puts it, with generative AI everyone is a programmer).

Related Article: Sam Altman Returns to OpenAI: How the Chaos Changes the AI Field

7. Reducing Friction in the Customer and Employee Journeys

Trust simplifies and streamlines the customer journey by reducing the need for extensive research or comparison shopping. When customers trust a brand, they are more likely to make purchases with ease and less likely to seek alternatives, resulting in a smoother and more satisfying shopping experience. Suffice it to say, when employees trust the brand, magic happens. I’d even argue that EX is more "important" than CX during these uncertain times. To use an analogy: the brand’s employees convert trust equity, harnessing design and innovation to produce superior (purpose-driven) products x services.

Related Article: Building Trust: CX, EX, AI and the Human Experience

Trust in AI: AI as a Game-Changer for the Human Experience

I truly believe that AI might be humanity’s last (great) invention! From now on, the trajectory is set: AI (re)creates itself, hence the (re)generative term, and some experts argue that AI will become self-aware as early as 2045; hence the term artificial general intelligence (AGI). Do we want this event, the singularity, the "birth" of a new species, to be defined by distrust? I think not.

Ethical AI

Responsible AI refers to the development and use of AI technologies in a way that is ethical, transparent and accountable, ensuring that these technologies benefit humanity while minimizing harm. In other words, AI will amplify, and amplify exponentially, the accretion or destruction of trust. Trust levels have been receding at an alarming rate over the years. In my opinion, trust equity needs to be measured, just as we give ourselves a net-zero target to avoid irreparable damage to the world; below certain trust levels, pure anarchy and mayhem will ensue. What it means to be human, helping and caring for each other, would be void — in other words, we won’t survive.

The Trust Ecosystem

The trust ecosystem is ever expanding, especially when it comes to trust in AI; we not only have humans to trust but also need to extend the circle to machines, and then there is trust between machines to ponder as well. There are existing experiments in peerless trust concepts and constructs — things such as cryptocurrency, decentralized finance (DeFi), decentralized autonomous organizations (DAOs) and Web3. We have yet to see the full spectrum of possibilities for how these new technologies (now powered by AI) might reauthenticate the human experience.

AI is bringing limitless potential to push us forward as a society — but with great potential comes great risk. It’s our responsibility (yes, ours as humans) to devise guardrails and ethical frameworks for what can and should (or should not) be done by AI, and with AI. We have been profiteering from AI since the early 2000s, with companies like Facebook, Google and Amazon monetizing and productizing us (our data).

To have trust in AI, we need to feel confident that it’s performing as intended, safely and fairly. That means, among other things (a brief code sketch after this list illustrates the audit and security points):

  • Developing and adhering to ethical guidelines and standards for AI development and use.
  • Increasing transparency by making AI systems more explainable and understandable to non-experts.
  • Engaging in ongoing dialogue with stakeholders to understand their concerns and expectations regarding AI.
  • Implementing robust security measures to protect AI systems from attacks and misuse.
  • Conducting regular audits and assessments of AI systems to ensure they are operating as intended and without causing harm.
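The last two bullets are where trust becomes something you can inspect rather than assert. As a purely illustrative sketch, and assuming nothing about any particular vendor's API (the model stub, the log file name and the blocked-terms list below are all hypothetical), an audit trail can start as simply as an append-only log of every model call plus a basic guardrail check before a response is released:

    # Minimal sketch: audit logging and a toy guardrail around a stubbed model call.
    # Nothing here is a real model API; fake_model, AUDIT_LOG and BLOCKED_TERMS are
    # illustrative assumptions only.
    import json
    import hashlib
    from datetime import datetime, timezone

    AUDIT_LOG = "ai_audit_log.jsonl"       # append-only record reviewed during audits
    BLOCKED_TERMS = ["malicious-example"]  # stand-in content policy, purely illustrative

    def fake_model(prompt: str) -> str:
        """Stand-in for a real model call; returns a canned answer."""
        return f"Echo: {prompt}"

    def guardrail_check(text: str) -> bool:
        """Return True if the text passes the (toy) content policy."""
        lowered = text.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def audited_call(prompt: str, model_version: str = "demo-0.1") -> str:
        """Call the model, run the guardrail and append an audit record."""
        response = fake_model(prompt)
        passed = guardrail_check(response)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
            "guardrail_passed": passed,
        }
        with open(AUDIT_LOG, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return response if passed else "[response withheld by guardrail]"

    if __name__ == "__main__":
        print(audited_call("How does trust shape brand loyalty?"))

Hashing the prompt and response rather than storing them verbatim is one way to keep such a log useful for audits without hoarding the personal data customers trusted you with; a real deployment would of course need far more than a blocklist, but the principle stands: trust in AI is earned when its behavior is recorded, checked and explainable.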

Final Thoughts

The journey toward responsible AI and building trust in AI is ongoing and requires the concerted effort of all stakeholders. By prioritizing ethical considerations, transparency, accountability, safety and inclusivity, we can foster an environment where AI can be trusted and its benefits fully realized.

Trust in AI is not just about ensuring that these systems do no harm; it's about unlocking their potential to significantly improve our world, making it more efficient, equitable and prepared for the challenges of the future.


About the Author

Luke Soon

Luke is a business transformation professional with over 25 years’ experience leading multi-year, human experience-led transformations with telco, fintech, insurtech and automotive organizations across the globe. He was the lead partner in the acquisition and build-up of the human experience, digital and innovation practices across Asia Pacific, with revenues surpassing $250 million.

Main image: Garan Julia