Editorial

Decision-Making Bias & AI: Does Data Dependency Dull You?

J.D. Little
See why your smartest data-driven decisions might be leading you astray.

The Gist

  • Decision-making bias: Relying solely on data for decisions often affirms biases rather than optimizes outcomes. Seek diverse data sources to challenge preconceptions.
  • Consumer vs. producer: The impact of data varies significantly between consumers and producers, affecting decision-making processes differently based on perspective.
  • AI's role challenged: Rebecca Haddix urges decision-makers to pose tough questions to AI, advocating for its role in challenging, not just confirming, business strategies.

A few months ago, I was researching for a blog I was writing about data-driven decisions. Regular CMSWire readers know there is a lot of information out there on the topic, ranging from pseudo-scientific research on the effectiveness of human intuition to ominous predictions that humanity will soon cede all control to AI.

What is clear, however, is that when it comes to the use of data in decision-making, analytics are too often employed to affirm preconceptions rather than optimize outcomes. Let's take a look at some of the issues around decision-making bias.

Data Misuse: Skepticism About Statistics Is Healthy

The notion that data can mislead and be misused is nothing new. The line about "lies, damned lies and statistics" is most often attributed to Mark Twain, who himself credited British statesman Benjamin Disraeli, though it's doubtful Disraeli was the first to say it either. My point is that whether you get your facts from a tabloid newspaper, a chatbot or an analytics dashboard built into your DXP, skepticism is healthy.

Related Article: Are Your Business Decisions Failing Because They Are Biased?

Data-Driven Decisions: More Harm Than Good?

While I was researching opinions on the topic, an article in Forbes written by Rebecca Haddix leapt out at me. I found it refreshing for its honesty and authoritative tone — also I loved the title: “Your Data-Driven Decisions Are Probably Wrong.” Rebecca has been a technology contributor on Big Data for Forbes for nearly a decade.

Her article, published in 2020, didn’t mention generative AI at all, but the strategy and clear guidelines she suggested seemed so relevant to the discussion we are now having about decision-making bias and data that I felt compelled to reach out. I wanted to know whether her advice to decision-makers would be different in 2024, now that AI is in every conversation.

She was gracious enough to arrange a video call with me. What follows is a summary of a few of her profound insights from that discussion. Let's consider what Rebecca has to say about data-driven decisions, decision-making bias and more.

Related Article: 3 Ways to Reduce Bias in Customer Survey-Based Data for Effective CX

The Impact of Big Data, Good and Bad: It's a Matter of Perspective

The vast amounts of data we collect can be good or bad depending on our perspective as a consumer or a producer. In any study you read, you'll learn that we now measure daily data creation in the hundreds of exabytes, and that the rate of collection is expanding exponentially.

When I asked Rebecca whether the vast amount of data we collect is good or bad for us, her response was nuanced and surprising, drawing a distinction between the decision-making of consumers and producers. She responded, “I guess it depends on who the us is in that equation. Right? So, as consumers, we have a choice between the products created by the producers, the businesses.”

Related Article: Dealing With AI Biases, Part 1: Acknowledging the Bias

Consumers Delegate Choices, Trust Overrides Optimal Decisions

She went on to explain that consumers want to delegate to a trusted source, an influencer, or a search engine. Consumers are excited by the prospect of optimal decisions, but often that is not what they really want or need.

“Search is based on a number of factors, and marketing and SEO, that may not yield the optimal purchase decision for you as a consumer for any number of factors you don't consider, which is why aggregate sources that people trust comes into play." She adds, “As a consumer, we never made the optimal decisions. Having more data means that we analyze less.

"When you say us though, as the producer, the business, the creators of these products ... I think more data is really exciting. As long as we're intentional about how we're processing and analyzing it.

“We're kind of already at the point of saturation now with the human brain as a consumer not taking in much more. So, I'm really most excited about the rise of big data. And the impact that it has on the companies that are producing new products, whatever those are, and optimizing how we will work really, have the ability to act on more understandings of the real relationship between separate things. That's exciting.”


Related Article: The Imperative of Data Literacy in Business Decision-Making

It Is the Human’s Responsibility to Ask the Right Questions

Rebecca has written before, “Outputs will only be actionable if the inputs are relevant.

"We get an answer. We don't necessarily get an answer to the question you pose based on the training data that it has. So, the responsibility of us is to pose the right questions and ensure that [the AI] is trained on the right set of data."

If we were looking for information about a medical topic, for instance, she suggested, “We could say look through just JAMA, the Journal of the American Medical Association, from these dates by these authors who have at least X number of citations and answer a question like this: ‘Which is the most effective treatment for this condition?’ Then go validate that.
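Her JAMA example amounts to constraining the corpus before posing the question. A minimal sketch of that idea in Python, where the field names and sample records are entirely hypothetical and stand in for whatever metadata a real retrieval pipeline would carry:

```python
# Hedged sketch: constrain the source corpus (journal, date range, author
# citation count) before asking a question of it, per Haddix's suggestion.
# All field names and sample data below are invented for illustration.

def filter_corpus(articles, journal, start_year, end_year, min_citations):
    """Keep only articles matching the source, date and citation constraints."""
    return [
        a for a in articles
        if a["journal"] == journal
        and start_year <= a["year"] <= end_year
        and a["citations"] >= min_citations
    ]

articles = [
    {"journal": "JAMA",  "year": 2021, "citations": 120, "title": "Treatment A trial"},
    {"journal": "JAMA",  "year": 2015, "citations": 40,  "title": "Treatment B trial"},
    {"journal": "Other", "year": 2022, "citations": 300, "title": "Unrelated study"},
]

trusted = filter_corpus(articles, journal="JAMA", start_year=2018,
                        end_year=2024, min_citations=100)
print([a["title"] for a in trusted])  # only the recent, highly cited JAMA article
```

Only after the corpus is narrowed this way would the question itself be posed, and as she says, the answer still needs to be validated.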

Related Article: Overcoming AI Bias in CX With Latimer

Challenge AI: Avoid Bias, Enhance Marketing Strategy

“So, with marketing technology, specifically, if we say, ‘Well, the campaigns that we have the ability to run, the data we have the ability to collect, only reflect visitors to our site who have gotten here through the existing channels that we have,’ there could be optimal implementations or experimentation that we don't have yet.”

She goes on to suggest that the process all begins by developing a clear problem statement and having four or five hypotheses about the optimal solution before posing the question [to AI], and at that point, "Don’t ask AI to prove you right — dare AI to prove you wrong.” This helps to avoid decision-making bias.
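That "dare AI to prove you wrong" stance can be expressed in code as well: instead of counting evidence that supports a hypothesis, actively search for counterexamples. A hedged sketch, using invented marketing data that is not from the article:

```python
# Hedged sketch of falsification: given a hypothesis expressed as a predicate,
# look for records that contradict it rather than records that confirm it.
# The data and hypothesis below are illustrative only.

def find_counterexamples(records, hypothesis):
    """Return every record that contradicts the hypothesis predicate."""
    return [r for r in records if not hypothesis(r)]

# Hypothesis: "campaign A always outperforms campaign B on conversion rate."
records = [
    {"segment": "new",       "conv_a": 0.042, "conv_b": 0.031},
    {"segment": "returning", "conv_a": 0.025, "conv_b": 0.029},  # contradicts it
    {"segment": "mobile",    "conv_a": 0.038, "conv_b": 0.022},
]

def always_a_wins(r):
    return r["conv_a"] > r["conv_b"]

counter = find_counterexamples(records, always_a_wins)
print(len(counter))  # 1 -> the hypothesis is falsified for returning visitors
```

A single surviving counterexample is enough to send you back to refine the hypothesis, which is exactly the loop she describes.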

Decision-Making Bias: It Is Human Nature to Want to Be Right

Rebecca emphasized the importance of having a "continuous gut check against our own biases, because when we have an idea we'd like to be right, so we like to look at data ourselves? In the scientific method, we don't get to prove ourselves right. We go about looking for evidence that our hypotheses are wrong. And that's how we can get around this cyclical bias loop that I see a lot of. We want to be right; we want to be geniuses.”

Rebecca added, “We want to say our intuition is as important as data, [but] it is by asking the right questions and then using data to disprove hypotheses — or not — not to just delegate the decision-making to dashboards."

(Author insight: Rebecca Haddix is not a fan of analytics dashboards.)


Isaac Asimov Was a Visionary

Toward the end of our interview, Rebecca recited science fiction writer Isaac Asimov's Three Laws of Robotics nearly verbatim, which I found impressive. It came up as I shared my view that AI should adopt the posture of a guide dog to its human: more about keeping us out of harm's way than accelerating our progress.

For reference, Asimov's Three Laws of Robotics are as follows: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

She talked a bit about how these rules cascade, each building on the ones before. It's a potent concept, and worth noting: a faster, better cognition at our beck and call should put up guardrails to protect us and enforce restraint, trained to discern danger and keep us from it.

Final Thoughts

It was enlightening to hear a thought leader build upon her already robust ideas from years past. New information didn’t change her earlier advice but enhanced it with deeper insight.

Much like notable visionaries such as Isaac Asimov, Rebecca Haddix keeps refining her thinking. I look forward to future articles and further discussions with her on data-driven decisions, decision-making bias and other topics.


About the Author

J.D. Little

J.D. Little is a creative communicator, researcher, evangelist and a student of disruptive innovation.

Main image: onephoto