Top 10 B2B Lead Scoring Mistakes (Part 2 of 2)

When a client says “We set up lead scoring but sales doesn’t pay any attention,” it’s a sure sign that something is amiss, and more likely than not the problem doesn’t rest with sales. Salespeople will always pay attention to lead scoring if it’s implemented correctly, at the very least because it allows them to prioritize their time on the leads that truly merit follow-up. Here’s Part 2 of our discussion of the most common B2B lead scoring mistakes. (For Part 1 of this post, click here.)

6. Scoring every Web page visit the same.

Web page activity is a useful measuring stick for gauging a prospect’s interest in your company and solution, but not all Web pages are created equal. Assigning the same value to every page on your website (say, 1 point per visit) therefore fails to differentiate between high-value and low-value pages. Some lower-value pages (“careers”, for example) may even merit negative scores, whereas high-value pages such as “pricing” or “contact” should be awarded relatively more points.
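
As a rough illustration, a differential page-scoring rule might look something like the Python sketch below. The paths and point values are invented for the example; substitute whatever pages matter in your own funnel.

```python
# Hypothetical per-page point values; every path and weight here is
# illustrative, not a recommendation for any particular platform.
PAGE_SCORES = {
    "/pricing": 10,
    "/contact": 10,
    "/product": 5,
    "/blog": 1,
    "/careers": -5,  # job seekers, not buyers
}
DEFAULT_PAGE_SCORE = 1  # any page not listed above

def score_page_visit(path: str) -> int:
    """Return the behavior-score delta for a single page visit."""
    return PAGE_SCORES.get(path, DEFAULT_PAGE_SCORE)
```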

7. Too little negative scoring.

Negative scores are a useful tool for avoiding false positives, that is, assigning high lead scores to leads that in reality don’t merit sales’ attention. Too many companies ignore negative scores altogether, with the result that junk leads slip through the cracks and sales confidence suffers. Negative scores can be demographic (e.g. consultant titles, companies below a certain size, non-supported geos) or behavioral (e.g. inbound traffic from consumer sites, visits to the careers page).
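
In code, negative scoring amounts to a set of rules applied on top of the positive score. A minimal sketch, with entirely hypothetical criteria and point values:

```python
# Illustrative negative-scoring rules; the title keyword, size cutoff,
# and geo list are placeholders for your own ideal-customer criteria.
NEGATIVE_RULES = [
    # (description, predicate, points)
    ("consultant title",     lambda lead: "consultant" in lead["title"].lower(),     -15),
    ("company too small",    lambda lead: lead["employees"] < 50,                    -10),
    ("unsupported geo",      lambda lead: lead["country"] not in {"US", "CA", "UK"}, -10),
    ("visited careers page", lambda lead: "/careers" in lead["pages_visited"],        -5),
]

def apply_negative_scores(lead: dict, score: int) -> int:
    """Subtract points for every negative rule the lead matches."""
    for description, matches, points in NEGATIVE_RULES:
        if matches(lead):
            score += points  # points are negative
    return score
```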

8. No scoring suppression rules.

In a similar vein, some leads simply shouldn’t be scored at all, since they’ll never be candidates for sales follow-up. Students and competitors are two examples. Rather than attempt to score down leads who meet those criteria, it’s usually simpler to suppress them from scoring altogether, eliminating the possibility that they somehow (through an intense bout of activity, for example) get passed to sales in error.
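
The difference from negative scoring is that a suppressed lead is never scored in the first place, so no burst of activity can push it over the threshold. A crude sketch, using email domain as the (purely hypothetical) suppression criterion:

```python
# Placeholder suppression criteria: known competitor domains plus a
# crude .edu heuristic for students. Real rules would be richer.
SUPPRESSED_DOMAINS = {"competitor-one.com", "competitor-two.com"}
STUDENT_SUFFIXES = (".edu",)

def is_suppressed(lead: dict) -> bool:
    domain = lead["email"].split("@")[-1].lower()
    return domain in SUPPRESSED_DOMAINS or domain.endswith(STUDENT_SUFFIXES)

def score_lead(lead: dict, event_points: list[int]) -> int | None:
    if is_suppressed(lead):
        return None  # never scored, so never routed to sales by mistake
    return sum(event_points)
```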

9. Overscoring caused by repetitive actions.

A lead that clicks on the same email 5 times in one hour shouldn’t be scored the same as someone who clicks on 5 separate emails over 5 months. Yet without the appropriate score qualification rules, leads can be scored too highly based on repetitive actions. In the example above, one solution is a rule that scores any lead for email clicks at most once per hour.
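
Such a cooldown rule is easy to express in code. A minimal sketch (the one-hour window and 5 points per click are arbitrary choices for the example):

```python
from datetime import datetime, timedelta

CLICK_COOLDOWN = timedelta(hours=1)  # score email clicks at most once per hour

def score_email_clicks(click_times: list[datetime], points_per_click: int = 5) -> int:
    """Score a lead's email clicks, ignoring repeats inside the cooldown window."""
    score = 0
    last_scored = None
    for ts in sorted(click_times):
        if last_scored is None or ts - last_scored >= CLICK_COOLDOWN:
            score += points_per_click
            last_scored = ts
    return score
```

Under this rule, 5 clicks within a single hour earn points once, while 5 clicks spread across 5 months earn points five times, matching the intuition above.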

10. Scoring “contact sales” incorrectly.

Most B2B companies have a certain lead score threshold at which prospects are deemed ready for sales. If an individual prospect explicitly requests sales contact, by filling out a contact request form, for example, the temptation is to automatically assign that lead a point value equal to the sales-ready threshold, say 100 points. However, this fails to distinguish between a lead with 10 points who requests sales contact and a lead with 40 points who does the same. Both merit forwarding to sales, but in theory, the 40-point prospect is a more “mature” lead. A better solution, therefore, is to add a certain number of points (in this example: 100 points) rather than bring the lead automatically to the sales-ready threshold.
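
The arithmetic is simple but worth spelling out. A sketch, assuming the 100-point threshold and bonus from the example above:

```python
SALES_READY_THRESHOLD = 100
CONTACT_REQUEST_POINTS = 100  # added on top of the existing score

def handle_contact_request(current_score: int) -> int:
    # Adding points (10 -> 110, 40 -> 140) rather than setting the score
    # to the threshold (both -> 100) keeps the more mature lead ranked
    # higher, while both still cross the sales-ready line.
    return current_score + CONTACT_REQUEST_POINTS
```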

One thought on “Top 10 B2B Lead Scoring Mistakes (Part 2 of 2)”

  1. Gregg Thaler

    Great post. Is there room for an 11th mistake?

    11. Distributing lead scores amongst duplicate records.

    To most MA platforms, gregg@ringlead.com and greg@ringlead.com are two different Leads, even when both are valid emails for the same person.

    Unless someone visits a website from the same device every time, Marketo, Pardot, and Eloqua will all see marketing actions taken by those different emails as separate.

    As a result, the lead scores will be distributed between the two records rather than concentrated on one, and an MQL that should be an SQL is delayed or, worse, never becomes an SQL.

    The solution? Establish a data plan and use the right balance of automation and intervention to enforce data quality at the point of creation.
