Monday, August 04, 2008

Still More on Assessing Demand Generation Systems

I had a very productive conversation on Friday with Fred Yee, president of ActiveConversion, a demand generation system aimed primarily at small business. As you might have guessed from my recent posts, I was especially interested in his perceptions of the purchase process. In fact, this was so interesting that I didn’t look very closely at the ActiveConversion system. This is no reflection on the product, which seems to be well designed, is very reasonably priced, and has a particularly interesting integration with the Jigsaw online business directory to enhance lead information. I don't know when or whether I'll have time to do a proper analysis of ActiveConversion, but if you're in the market, be sure to take a look.

Anyway, back to our talk. If I had to sum up Fred’s observations in a sentence, it would be that knowledgeable buyers look for a system that delivers the desired value with the least amount of user effort. Those buyers still compare features when they look at products, but they choose the features to compare based on the value they are seeking to achieve. This is significantly different from a simple feature comparison, in which the product with the most features wins, regardless of whether those features are important. It differs still further from a deep technical evaluation, which companies sometimes perform when they don’t have a clear idea of how they will actually use the system.

This view is largely consistent with my own thoughts, which of course is why I liked hearing it. I’ll admit that I tend to start with requirements, which are the second step in the chain that runs from value to requirements to features. But it’s always been implied that requirements are driven by value, so it’s no big change for me to explicitly start with value instead.

Similarly, user effort has also been part of my own analysis, though perhaps not as prominent a part as Fred would make it. He tells me they have purposely left many features out of ActiveConversion to keep it easy to use. Few vendors would say that; the more common line is that advanced features are present but hidden from the people who don't need them.

Along those lines, I think it’s worth noting that Fred spoke in terms of minimizing the work performed by users, not of making the system simple or easy to use. Although he didn’t make a distinction, I see a meaningful difference: minimizing work implies providing the minimum functionality needed to deliver value, while simplicity or ease of use implies minimizing user effort across all levels of functionality.

Of course, every vendor tries to make their system as easy as possible, but complicated functions inevitably take more effort. The real issue, I think, is that there are trade-offs: making complicated things easy may make simple things hard. So it's important to assess ease of use in the context of a specific set of functions. That said, some systems are certainly better designed than others, so one product can indeed be easier to use across the board.

Looking back, the original question that kicked off this series of posts was how to classify vendors based on their suitability for different buyers. I’m beginning to think that was the wrong question: you need to measure each vendor against each buyer type, not assign each vendor to a single buyer type. In this case, the two relevant dimensions would be buyer types (defined by requirements, or possibly by the value the buyer hopes to receive) on one axis, and suitability on the other. Suitability would include both features and ease of use. The utility of this approach depends on the quality of the suitability scores and, more subtly, on the ability to define useful buyer types. This involves a fair amount of work beyond gathering information about the vendors themselves, but I suppose that’s what it takes to deliver something useful.
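To make that two-dimensional view concrete, here is a minimal sketch in Python. The vendor names, buyer types, and scores are all hypothetical placeholders, not real assessments:

```python
# Hypothetical suitability scores (0-10): each vendor is rated against
# every buyer type, rather than being assigned to a single buyer type.
# Each score would combine feature fit and ease of use for that buyer.
suitability = {
    "Vendor A": {"small-business marketer": 8, "enterprise demand center": 4},
    "Vendor B": {"small-business marketer": 5, "enterprise demand center": 9},
}

def rank_for(buyer_type):
    """Order vendors by suitability for one buyer type, best first."""
    return sorted(suitability, key=lambda v: suitability[v][buyer_type], reverse=True)

print(rank_for("small-business marketer"))   # ['Vendor A', 'Vendor B']
print(rank_for("enterprise demand center"))  # ['Vendor B', 'Vendor A']
```

The same vendor can rank first for one buyer type and last for another, which is exactly why a single classification per vendor falls short.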

2 comments:

Landon Ray said...

It's late, but a quick thought:

How about, instead of using 'buyer type', use 'goal of project' as the dimension against which to judge vendors?

Buyer type doesn't make a lot of sense to me, if by it you mean 'size of company'. Some of our most sophisticated clients are among the smallest, and our largest client, ironically, uses our toolset in only the simplest ways.

But the 'goal' crosses company size, and gets to what the project is all about in the first place.

The universe of tools you're looking at does attack a lot of problems. One may score high for its ability to easily accomplish one goal, but not succeed at all at others.

Some users, for example, are looking to automate follow-up marketing based on behavioral triggers. Within that goal, there are all kinds of features available: can you automate direct mail? Voice broadcast? Sales calls? Can you integrate data from my CRM? Etc., etc.

Others are looking to 'close the loop' (that is, quantify campaign ROI and other metrics like LTV by channel, etc.). Within that, again, there are feature sets: can I capture offline orders? Shopping cart orders? Phone calls? Multiple campaign interactions? Etc., etc.

Still others are looking to support sales by nurturing leads until they're 'sales-ready' or after they've been kicked back by sales. Again, a slew of features...

And, there are other goals as well.

It seems to me that it would make sense to start with a goal, ID the features that relate (and compare), and then look at what kind of job it is to reach the goal using each vendor's toolset.

I may be groggy, but I think this makes sense...

David Raab said...

Hi Landon,

I agree that company size does not necessarily correlate with sophistication. By "buyer type", I generally had in mind a few different scenarios based on the needs of the buyer. These are somewhat similar to the cases you describe, although your cases may be more function-oriented. My current thought is to define a set of business functions, each supported by a cluster of system features (including attributes like ease of use). Each vendor would be given a score for each business function. The functions would then be weighted differently for the different buyer types, and the weights would be applied to the vendor scores to rank the vendors for each buyer type.
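A minimal sketch of that arithmetic, assuming made-up business functions, weights, and scores (none of these come from actual evaluations):

```python
# Hypothetical scores (0-10) for each vendor on each business function.
function_scores = {
    "Vendor A": {"lead nurturing": 9, "ROI reporting": 5, "behavioral triggers": 7},
    "Vendor B": {"lead nurturing": 6, "ROI reporting": 9, "behavioral triggers": 8},
}

# Hypothetical weights per buyer type (each set sums to 10): the same
# functions matter more or less depending on who is buying.
buyer_type_weights = {
    "sales-support buyer":  {"lead nurturing": 6, "ROI reporting": 1, "behavioral triggers": 3},
    "closed-loop measurer": {"lead nurturing": 1, "ROI reporting": 7, "behavioral triggers": 2},
}

def rank_vendors(buyer_type):
    """Apply one buyer type's weights to the function scores and rank vendors."""
    weights = buyer_type_weights[buyer_type]
    totals = {
        vendor: sum(scores[fn] * weights[fn] for fn in weights)
        for vendor, scores in function_scores.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

print(rank_vendors("sales-support buyer"))   # [('Vendor A', 80), ('Vendor B', 69)]
print(rank_vendors("closed-loop measurer"))  # [('Vendor B', 85), ('Vendor A', 58)]
```

The grid could obviously be extended with more functions, finer-grained weights, or separate ease-of-use adjustments, but the basic shape of the calculation would stay the same.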

This will remain a little vague until I actually sit down and define the specific business functions and buyer types. That should happen over the next week or two. The results will surely end up in this blog.