The Most Important Metric for Predictive Marketers

A few weeks ago I published an article in AdExchanger titled “Should A Data Scientist Lead Your Marketing Team?”. We received a deluge of feedback asking me to explain what I meant. Since most CMOs don’t have the training to become data scientists or the time to get trained, how can they become more effective in their new roles? In this series, I will outline the minimal amount of knowledge you need to use predictive methods in your marketing organization.

No matter what approach you are currently using – lead-based marketing, account-based marketing, inbound marketing, persona-based marketing, or a combination of all of the above – accurate predictions about individual and account behavior will improve your key MBOs (e.g. conversion rates, time-to-close, average deal size, opportunity creation rate) simultaneously.

This leads to two questions: (a) Can you model buying behavior accurately? And (b) if you can build an accurate model, how do you plug it into your workflows to realize the promised improvements?

In this post I will deal with the first question – how do you know if the model you have in hand is good enough for production?

For the class of models marketers deal with, the right output is almost always a probability that a target (customer, prospect, user) will do something positive and valuable in response to a stimulus, in a given period of time. Even binary outcomes – will someone buy or not – can be mapped to a model predicting the probability that someone will buy within a quarter.
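To make that concrete, here is a minimal sketch (not from the original article) of what such a model looks like in practice: a classifier trained on historical prospect data whose useful output is a probability that each prospect buys within the quarter. The features, data, and library choice (scikit-learn) are illustrative assumptions, not the author’s method.

```python
# Illustrative sketch: a model whose output is the probability that a
# prospect buys within the quarter. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical engagement features: emails opened, site visits, demos requested
X = rng.poisson(lam=[3, 5, 0.3], size=(500, 3)).astype(float)
# Hypothetical label: 1 if the prospect purchased within the quarter
y = (X @ np.array([0.4, 0.2, 2.0]) + rng.normal(size=500) > 3.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The useful output is not a hard yes/no, but a probability per prospect
buy_probability = model.predict_proba(X)[:, 1]
print(buy_probability[:5])
```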

For this class of models where the output is a score or a probability, there is a single metric that captures the overall quality of the model. That metric is the AUC – short for “area under the curve” (specifically, the ROC curve).

The AUC measures discrimination, that is, the ability of the model to correctly classify those who are likely to respond positively versus those who aren’t. Consider a situation in which customers fall into two groups: those whose deals closed (closed/won) and those whose deals did not (closed/lost). You randomly pick one customer from the closed/won group and one from the closed/lost group and score both with the model. The closed/won customer should be the one with the higher predicted probability. The AUC is the percentage of randomly drawn pairs for which this is true. Essentially, it is the probability that the model correctly ranks the two customers in a randomly chosen pair.
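That pairwise description can be checked numerically. The sketch below uses synthetic scores for hypothetical closed/won and closed/lost customers, estimates the AUC as the fraction of random won/lost pairs in which the closed/won customer scores higher, and compares that estimate with scikit-learn’s standard roc_auc_score. All numbers are made up for illustration.

```python
# Sketch of the pairwise interpretation of AUC described above.
# Scores and labels are synthetic; in practice they would come from your
# model and your closed/won vs. closed/lost history.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic model scores: closed/won customers tend to score higher
won_scores = rng.normal(loc=0.7, scale=0.15, size=1000)
lost_scores = rng.normal(loc=0.5, scale=0.15, size=1000)

# Estimate AUC as the fraction of random won/lost pairs in which the
# closed/won customer receives the higher score
pairs = 100_000
won_draw = rng.choice(won_scores, size=pairs)
lost_draw = rng.choice(lost_scores, size=pairs)
pairwise_auc = np.mean(won_draw > lost_draw)

# Compare with the standard computation
scores = np.concatenate([won_scores, lost_scores])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
standard_auc = roc_auc_score(labels, scores)

print(f"pairwise estimate: {pairwise_auc:.3f}, roc_auc_score: {standard_auc:.3f}")
```

The two numbers agree (up to sampling noise in the pair draws), which is exactly the point: AUC is nothing more than the probability of ranking a random won/lost pair correctly.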

To evaluate any predictive model, you can ask – what is the AUC?

If the answer is between 0.7 and 0.9, you have a good model; if it is between 0.5 and 0.7, the model isn’t reliably predictive; and if it is above 0.9, the model is likely too good to be true in production. AUC can frequently be improved by training with more data – especially for use cases that involve new product introduction or new market entry.
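As a quick illustration only, that rule of thumb can be written down as a small helper. The thresholds are the ones above; the wording of each band is mine.

```python
def interpret_auc(auc: float) -> str:
    """Rule-of-thumb interpretation of an AUC value, following the bands above."""
    if auc > 0.9:
        return "suspiciously high - likely too good to be true in production"
    if auc >= 0.7:
        return "good - reasonable to put into production"
    if auc >= 0.5:
        return "not reliably predictive - consider training with more data"
    return "worse than random - check for inverted labels or data issues"

print(interpret_auc(0.82))  # good - reasonable to put into production
```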

That’s it – with a single question, you can establish your data-science chops and decide whether to go forward with the predictive model or ask the provider to do better.

In the next post I will explain why AUC is a much better metric than the oversold “accuracy” metric.

Written by Shashi Upadhyay
December 10, 2015