Your metrics may be more arbitrary (and less useful) than you think.
By Andy Myers
In this era of data analytics and data science driven by machine learning (ML) and artificial intelligence (AI), there’s often a temptation to jump straight to solving the most complex business problems because these new techniques allow for it. But for most businesses, a more immediate and tangible opportunity exists.
The metrics you use to evaluate performance may be calculated using arbitrary factors that are based more on historical business practices than on current customer behavior. Using statistical methods to recalibrate your key metrics based on actual customer behavior will result in insights that are more consistent, actionable and representative of your business, while also setting you up to implement ML and AI successfully.
The quick approach
Our brains are programmed to make things look nice and to create order. We’ll often round to the nearest 5, 10 or 100 for the sake of neatness. We’ll use intervals that are familiar (weeks, months, quarters, years), design ranges that look even (1–10, 11–20, 21–30) or follow a pattern (1–2, 3–5, 6–10) because that makes the bands themselves easier to understand, even though they no longer reflect the natural distribution of the data. We often begin to view the business and to take action based upon these values or bands, without ever questioning their origin or utility.
Viewing performance through arbitrary bands may hide what lies underneath. Perhaps your customers follow a nine-week purchase cycle? What if the thresholds for a specific spend behavior are $13 and $42? Would you ever be able to act on these insights using reports built from your existing, arbitrarily constructed (though easy-to-understand) bands?
“Eyeballing” data is great for sense-checking and spotting obvious errors, but when you’re planning to make decisions based upon it, it’s fraught with danger. You may miss trends that aren’t necessarily visible to the human eye. If you’re only looking at customer behavior by quartile — using a box plot, perhaps — would you notice a bimodal distribution?
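As a quick illustration, here’s a minimal Python sketch, using made-up weekly purchase counts, of how a quartile summary (essentially what a box plot shows you) can flatten out a clearly bimodal distribution:

```python
import statistics

# Hypothetical weekly purchase counts: a bimodal mix of light shoppers
# (around 2 visits) and heavy shoppers (around 10 visits).
visits = [2, 2, 3, 2, 1, 3, 2, 10, 9, 11, 10, 12, 10, 9]

# The quartile summary -- the view a box plot gives you.
q1, q2, q3 = statistics.quantiles(visits, n=4)
print(f"Q1={q1:.1f}, median={q2:.1f}, Q3={q3:.1f}")

# A crude histogram reveals the two peaks the quartiles flatten out.
for lo in range(0, 13, 3):
    count = sum(lo <= v < lo + 3 for v in visits)
    print(f"{lo:2d}-{lo + 2:2d} | {'#' * count}")
```

The quartile view places the median at 6.0, a value no customer in this data actually produces; only the histogram reveals the two distinct shopper groups.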
To properly recalibrate your metrics and extract the most value from your analytical efforts, you’ll need:
A stable analysis period
The time period over which you run your analysis or reporting is incredibly important. You need to find a time period that ensures that a representative volume of your customers — and their behavior — will be included in the analysis. However, especially in the case of high-frequency businesses, the period shouldn’t be so long that you lose efficiency due to the high volume of data that needs to be processed.
In other words, a stable time period should provide an appropriate snapshot of your business, be relatively quick to analyze and allow you to be confident in the results. For example, a department store might have a stable period of 12 months because most customers will likely shop at least once during that time. A grocery store, however, which most of us visit much more frequently than a department store, might have a stable period in the 6–8 week range.
By identifying this stable period, you can put analytical insights into action and measure their effects sooner. Running tests on this basis instead of using an arbitrary time period means you can plan projects more effectively. It also brings consistency to all the analysis across the business, from BI reporting to marketing measurement and custom analysis projects.
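One way to hunt for a stable period is to replay your transaction log and ask, for each candidate window length, what share of your customer base appears at least once. The sketch below simulates a hypothetical log; the customer visit cycles and all figures are assumptions for illustration:

```python
import random

random.seed(42)

# Hypothetical transaction log: (customer_id, day) pairs over one year.
# Each customer shops on a rough personal cycle (all figures assumed).
transactions = []
for cust in range(1000):
    cycle = random.choice([7, 14, 28, 56])    # assumed visit cycles, in days
    day = random.randint(0, cycle)            # first visit of the year
    while day < 365:
        transactions.append((cust, day))
        day += cycle + random.randint(-3, 3)  # some natural jitter

customers = {c for c, _ in transactions}

# For each candidate window length, what share of customers shows up at all?
coverage = {}
for window in (14, 28, 56, 84):
    seen = {c for c, d in transactions if d < window}
    coverage[window] = len(seen) / len(customers)
    print(f"{window:3d}-day window covers {coverage[window]:.0%} of customers")
```

The shortest window whose coverage plateaus near 100% is a reasonable candidate for your stable period.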
A view across multiple stable periods
The best way to ensure consistency in your analyses is to take your inferences over multiple time periods. Doing this means you can remove factors that may skew results if just viewed in a limited snapshot. For example:
· Seasonality effects: Q4 holiday sales spikes and Q1 sales lulls.
· Product performance: The season-over-season popularity of a TV series.
· Market changes: A new product launch or product de-listing.
· Weather: Impacting consumers’ ability to visit your business.
Taking this approach, you can set consistent, insightful and actionable ranges to track metrics such as average spend, frequency or quantity. They’ll tell you what the natural range splits are for your metrics, and they’ll also allow you to quantify how much you need to shift behavior to make a meaningful difference. For example, encouraging customers to [visit/buy/watch] one more [time/item/show] will be difficult to execute if the natural frequency bands for your customer segment are too wide, because customer behavior within the band will be too diverse to steer effectively.
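To make the idea concrete, here’s a small Python sketch that derives band edges from the data itself rather than from round numbers; the spend values and the choice of quintiles are assumptions for illustration:

```python
import statistics

# Hypothetical average-spend values for one stable period.
spend = [8, 9, 11, 12, 12, 13, 14, 18, 22, 25, 27, 31, 40, 42, 44, 58, 70, 95]

# Quintile edges give band boundaries driven by the data itself,
# rather than round numbers chosen for neatness.
edges = statistics.quantiles(spend, n=5)
print("quintile band edges:", [round(e, 1) for e in edges])

# Band width shows how far behavior must shift to cross a boundary;
# a very wide band holds customers too diverse to steer with one message.
bounds = [min(spend)] + edges + [max(spend)]
for lo, hi in zip(bounds, bounds[1:]):
    print(f"band {lo:5.1f}-{hi:5.1f}  width {hi - lo:5.1f}")
```

Wide bands in the output are the warning sign: an "encourage one more visit" campaign is hard to design when a single band spans such diverse behavior.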
Moreover, defining customer segments using this approach means they’re likely to hold steady over time, providing a more robust framework by which to measure behavioral migration. For example, moving people up a value chain or identifying behavioral changes that lead to lapsing.
This analysis is conducted using migration clustering, which involves observing the distribution of your customers for key metrics over two consecutive stable periods, and hierarchically clustering those customers into groups based on the combination of the percentiles (or n-tiles, depending on the data) into which they fall in the two distributions. The ultimate goal is to identify the behaviors that are common and consistent period-over-period, and to ignore outlying, anomalous behaviors. The clusters account for some natural movement within themselves, meaning that movement between them should require some outside stimulus from the business, such as targeted marketing.
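The description above can be sketched roughly as follows. This is a toy version: the data is simulated, the decile choice is illustrative, and the simple single-linkage merge loop stands in for a proper hierarchical clustering library (in practice you would likely reach for something like scipy.cluster.hierarchy):

```python
import random
import statistics

random.seed(0)

# Hypothetical data: one key metric (spend) per customer, observed over
# two consecutive stable periods. Most customers behave consistently.
n = 200
period1 = [random.lognormvariate(3, 0.6) for _ in range(n)]
period2 = [s * random.uniform(0.8, 1.25) for s in period1]

def ntile(values, x, tiles=10):
    """0-based n-tile of `values` that x falls into."""
    edges = statistics.quantiles(values, n=tiles)
    return sum(x >= e for e in edges)

# Each customer becomes a point: (decile in period 1, decile in period 2).
points = [(ntile(period1, a), ntile(period2, b))
          for a, b in zip(period1, period2)]

# Toy agglomerative (single-linkage) merge over the unique decile pairs.
clusters = [{p} for p in set(points)]

def gap(c1, c2):
    """Smallest Manhattan distance between any two points of two clusters."""
    return min(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a in c1 for b in c2)

target = 4  # desired number of behavior bands: a modeling choice
while len(clusters) > target:
    i, j = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda ij: gap(clusters[ij[0]], clusters[ij[1]]))
    clusters[i] |= clusters.pop(j)

for k, cluster in enumerate(clusters):
    members = sum(points.count(p) for p in cluster)
    print(f"cluster {k}: {members} customers")
```

Customers whose decile pair sits near the diagonal behaved consistently across both periods; the target number of clusters is itself a modeling decision, which is exactly where the vetting described below comes in.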
Pragmatic, thoughtful modeling
It’s important to note that this isn’t just an algorithm spitting out some clusters and the business blindly using the output. As with any machine learning or artificial intelligence technique, the results must be vetted by an analyst to ensure that they’re appropriate given the business context. For example, it may be that the highest average spend/frequency band identified in the clustering only accounts for 0.1% of your customer base. This is likely too small to be particularly useful. By running the algorithm multiple times with varying numbers of output clusters, you can see where clusters naturally split. You can then use those splits to adjust an earlier solution and make the result more applicable.
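A simple vetting pass like the following is often enough to flag clusters that are too small to act on; the cluster sizes and the 1% minimum-share threshold are both hypothetical:

```python
# Hypothetical output of a clustering run: customers per band.
cluster_sizes = {"low": 41000, "mid": 55000, "high": 3900, "vip": 100}
total = sum(cluster_sizes.values())

MIN_SHARE = 0.01  # assumed business threshold: 1% of the customer base
for name, size in cluster_sizes.items():
    share = size / total
    note = "  <- too small to act on; adjust the solution" if share < MIN_SHARE else ""
    print(f"{name:>4}: {share:6.2%}{note}")
```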
This is the art of data modeling. It’s a subtle art — the bit that the computer can’t get to, the real-world context that the algorithm can’t include. A good data analyst or data scientist doesn’t just make these decisions by eyeballing or arbitrarily choosing cluster boundaries based on a whim. They’re combining the algorithm’s power with their own knowledge of how the business operates to fine-tune the solution for maximum utility by executives, marketers, and other stakeholders across the business.
This migration clustering technique can be used in isolation to set standard bands for metrics, whether to feed into reports or for practical application. For example, using frequency or spend bands to create loyalty tiers or to display upgrade or free-shipping thresholds on a website. These bands can also be combined to form behavior-based customer segmentations. Consider overlaying spend and frequency bands and clustering the combinations into a value segmentation matrix that can inform CRM decisions to optimize customer value.
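Overlaying two sets of bands into a segmentation matrix can be as simple as the following sketch; the customers, band edges, and labels are all hypothetical:

```python
# Hypothetical customers with an average spend and a visit count.
customers = [
    {"id": 1, "spend": 120.0, "visits": 9},
    {"id": 2, "spend": 35.0,  "visits": 2},
    {"id": 3, "spend": 80.0,  "visits": 10},
    {"id": 4, "spend": 15.0,  "visits": 1},
]

def band(value, edges, labels):
    """Assign a label based on which band `value` falls into."""
    return labels[sum(value >= e for e in edges)]

# Assumed band edges (in practice, derived from migration clustering).
SPEND_EDGES, SPEND_LABELS = (40, 100), ("low", "mid", "high")
FREQ_EDGES, FREQ_LABELS = (3, 8), ("rare", "regular", "frequent")

# Overlay the two bandings into a value segmentation matrix.
matrix = {}
for c in customers:
    key = (band(c["spend"], SPEND_EDGES, SPEND_LABELS),
           band(c["visits"], FREQ_EDGES, FREQ_LABELS))
    matrix.setdefault(key, []).append(c["id"])

for (s, f), ids in sorted(matrix.items()):
    print(f"spend={s:>4}, freq={f:>8}: customers {ids}")
```

Each cell of the matrix is a candidate CRM audience: high-spend/rare visitors might get frequency-building offers, while low-spend/frequent visitors might get basket-building ones.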
Taking the time to set these up means that the rest of your analyses that use these bands — and these will likely include some of the key success metrics for your business — will be more robust, more stable and aid better decision-making overall. Not to mention elevating your own knowledge and sharpening your intuition about how your customers behave.