
Can AI Really Make Go-to-Market More Productive?


October 30, 2023

7 mins read


Since the release of ChatGPT on November 30, 2022, the topic of artificial intelligence (AI) in the workplace has been on everyone’s minds. Companies are struggling to figure out how and where to use it best, and are wondering how to quantify the impact it will have. Is it a productivity powerhouse, or does it carry the risk of diminishing human skill? 

Recent studies are starting to give us a nuanced answer: it’s both. While AI has been shown to significantly elevate performance across a range of tasks, there’s a caveat. The technology excels in some areas but falls short in others, particularly when it comes to accuracy and creativity.

Let’s explore these studies and what they say about maximizing the benefits of AI in our B2B go-to-market (GTM) strategies.

AI for the win

A recent study by Harvard Business School found that Boston Consulting Group (BCG) consultants using ChatGPT-4 significantly outperformed those who did not across 18 real-world tasks. These included creative tasks (“Propose at least 10 ideas for a new shoe targeting an underserved market or sport.”), analytical tasks (“Segment the footwear industry market based on users.”), writing and marketing tasks (“Draft a press release marketing copy for your product.”), and persuasiveness tasks (“Pen an inspirational memo to employees detailing why your product would outshine competitors.”).

The study found that the BCG consultants using AI completed 12.2% more tasks, finished them 25.1% faster, and produced results of more than 40% higher quality than those not using AI.

That’s not just a marginal improvement; it’s a significant leap in productivity and quality that can translate into real competitive advantages for companies. Projects move more quickly from planning to execution, and higher quality work leads to better customer satisfaction, fewer revisions, and a more robust bottom line. These improvements suggest AI, when used properly, will be truly transformative for go-to-market.

Notably, the study also uncovered that AI acts as a “skill leveler.” The consultants who initially scored the lowest saw the biggest increases in performance when they teamed up with AI. While top performers also improved, the boost was less dramatic. This has deep implications for performance management across functions and disciplines. But, as we’ll see, AI isn’t always the right answer.

What about creative thinking?

Another study in Nature investigated the creative abilities of humans and AI chatbots. Participants were tasked with thinking of unique uses for common items. On average, the AI chatbots performed better and came up with more creative ideas than humans (as measured by an objective calculation of “semantic distance” and subjective ratings by human judges). The chatbots were also more consistent and showed less variability than the humans.

However, the most creative ideas from humans were on par with or better than those from the chatbots. The study concluded that when it comes to highly creative, divergent thinking, the best humans still outshine AI, underlining the unique aspects of human creativity that AI has yet to replicate or surpass.

Falling asleep at the wheel

While generative AI is immensely powerful at some tasks, it fails at others, sometimes completely and sometimes subtly. It’s great at turning CMO challenges into a lyrical poem, but it’s terrible at math, and I’ve never been able to get it to return text that fits a specific word count.

There’s a boundary that separates tasks where AI does well from tasks where AI does poorly, but unless you use AI frequently, it’s hard to know where that boundary is. The HBS study calls that unclear line “The Jagged Frontier”.

Ethan Mollick, one of the HBS study’s authors, explains it like this in his excellent post:

“Some tasks that might logically seem [to be]…equally difficult – say, writing a sonnet and an exactly 50-word poem – are actually on different sides of the wall. The AI is great at the sonnet, but, because of how it conceptualizes the world in tokens, rather than words, it consistently produces poems of more or less than 50 words.  Similarly, some unexpected tasks (like idea generation) are easy for AIs while other tasks that seem to be easy for machines to do (like basic math) are challenges for LLMs.”
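
To make the tokens-versus-words point concrete, here’s a small sketch, assuming OpenAI’s tiktoken tokenizer library is installed; it’s my own illustration, not part of the study or Mollick’s post:

```python
# Minimal illustration: token counts rarely line up with word counts,
# which is why exact word-count requests tend to trip up LLMs.
# Assumes the tiktoken package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models
text = "Write me a poem of exactly fifty words about go-to-market strategy."

word_count = len(text.split())
token_count = len(enc.encode(text))

print(f"{word_count} words vs. {token_count} tokens")
# The model generates tokens, not words, so "exactly 50 words" is a hard target.
```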

To examine where this boundary lies, the HBS study included a task designed to exploit AI’s blind spots and elicit a wrong but convincing answer to a problem humans could easily solve. Sure enough, human consultants got the problem right 84% of the time without AI help, but when they used the AI, they did worse, getting it right only 60-70% of the time.

That’s why Mollick warns against “falling asleep at the wheel.” Over-reliance on AI can lead to mistakes, especially when humans let AI take over tasks it’s not equipped to handle. In another HBS study from Fabrizio Dell’Acqua, recruiters who used advanced AI found themselves becoming careless and less discerning in their judgments. They overlooked highly qualified applicants and ultimately made poorer decisions than those who either used less sophisticated AI or no AI at all. When AI performs extremely well, there’s a tendency for humans to disengage, allowing the machine to take full control rather than using it as an augmentative tool. 

Centaurs and cyborgs: two approaches to AI

OK, we’ve learned that:

  • Consultants using AI outperformed those who did not in terms of speed and quality, with the biggest gains from the lowest performers.
  • The best humans still outperform artificial intelligence in creative divergent thinking. 
  • There’s a risk that over-reliance on AI can cause knowledge workers to disengage and let the machine take over, leading to errors and lower performance. 

So how should we use all these insights to navigate the path of when and where to use artificial intelligence?

The HBS/BCG study identifies two approaches to navigating this jagged landscape. Workers using the “Centaur” approach divide the work cleanly between human and machine, strategically allocating tasks based on each one’s strengths (e.g., the human guides the strategy while the AI does the brute-force work). On the flip side, workers using the “Cyborg” approach integrate human and machine deeply, working in tandem on almost every step (e.g., starting a sentence for the AI to complete).

There’s no one right strategy. For example, I used both approaches in writing this post, sometimes using ChatGPT-4 to summarize the original research and other times having it draft or finish specific sentences and paragraphs. And no matter what, I reviewed the results and made sure they fit my voice. The key takeaway is that the strongest approach combines the strengths of humans with the strengths of AI.

Implications of AI on B2B go-to-market strategies

From account-based marketing (ABM) to branding content to customer success, these results suggest nuanced ways in which humans and AI can work together to maximize efficiency and effectiveness in your go-to-market.

  • Account-Based Marketing (ABM): AI is essential for scoring accounts and identifying buying groups, doing both far more efficiently than manual methods. Machine learning algorithms can analyze market trends and customer behavior not only to pinpoint high-value accounts but also to forecast their account journey (see the sketch after this list). But humans should own the final account selection, and it’s important to keep a human touch in crafting personalized messages and experiences.
  • Content Marketing: AI can dramatically speed up content production, but human involvement is still crucial for ensuring quality and generating truly unique and compelling stories.
  • Branding: AI can assist in data analytics, extracting sentiment and trends from millions of data points. This information can guide branding strategies, but the human element is still required to craft the narrative and emotional connection that defines strong brands.
  • Demand Generation: Automated systems can optimize ad placements, perform A/B testing, and personalize interactions at a scale impossible for humans, thereby increasing the efficiency of demand generation efforts. But the data also suggests that AI can sometimes get it wrong, meaning ongoing human oversight is essential for quality control.
  • Sales Development and Sales: AI can automate repetitive tasks such as drafting outreach emails and logging activities, allowing sales teams to focus on more complex tasks. However, given that AI is still not perfect at understanding nuance, human involvement remains vital for relationship-building and closing deals.
  • Customer Success: AI chatbots and automated support systems can handle a large volume of routine queries, freeing up human customer success managers to deal with more complex issues. The Centaur approach would work well here: let the AI handle the straightforward queries while human teams manage more complex customer needs.
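
To ground the account-scoring idea from the ABM bullet above, here’s a minimal, hypothetical sketch, assuming scikit-learn and a handful of made-up firmographic and engagement features. It illustrates the general technique only; it isn’t Demandbase’s model or anything from the studies above.

```python
# Hypothetical account-scoring sketch (illustrative only).
# Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up features per account:
# [employee_count, website_visits_90d, content_downloads_90d, intent_score]
X_train = np.array([
    [5000, 240, 12, 0.9],
    [120,   15,  1, 0.2],
    [800,   60,  4, 0.5],
    [15000, 400, 20, 0.8],
    [50,     5,  0, 0.1],
    [2500,  90,  7, 0.6],
])
y_train = np.array([1, 0, 0, 1, 0, 1])  # 1 = became an opportunity

# Scale the features and fit a simple classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score new accounts: the predicted probability becomes a fit score
# that humans review before making the final account selection.
new_accounts = np.array([[3000, 150, 9, 0.7], [200, 10, 1, 0.3]])
print(model.predict_proba(new_accounts)[:, 1])
```

In practice you’d train on far more accounts and richer signals, but the division of labor stays the same: the model ranks, and humans make the final call.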

Final Thoughts

AI offers significant advantages in automating and optimizing various aspects of a B2B go-to-market strategy. However, it’s crucial to remember that the technology is not a silver bullet. A hybrid approach, blending AI’s speed and data-crunching abilities with human creativity and nuance, appears to be the most effective strategy to optimize B2B go-to-market for the foreseeable future.


Jon Miller

Former CMO, Demandbase
