In my experience, one of the largest gaps between opportunity and reality in business today - especially from a Customer Success standpoint - is in the area of what and how to measure. In SaaS, where measurable data represents an embarrassment of riches, the gap can be even more significant. The ability to track and measure customer behavior down to a minute level is integral to the SaaS structure, but that proliferation of data actually can make extracting what’s important and valuable that much more difficult.
An overwhelming abundance of choices often results in decision paralysis and reversion to a default – the equivalent of choosing the Turkey Club from the exotic, multi-page menu. In the case of SaaS, this often means relying on tracking log-ins or utilization of certain core functionality on the assumption that these must be important indicators of adoption and leading indicators of value.
Overcoming Analysis Paralysis in Customer Success
It’s in this context that Dan responds in his blog post by recommending that you order the ice cream sundae. I like ice cream sundaes, and I also find a lot to like about Dan’s suggestion to track success. It does two things that are, in my opinion, very important:
- It focuses attention on something that really matters – a tangible measure of success and translation of that into real ROI.
- It requires a disciplined process for customer discovery – asking the right questions of the right people to identify the right signs of success and the right formula for the ROI.
Customer Success is a Journey, not a Static Outcome
As much as I like ice cream sundaes, though, I know they aren’t a good choice for lunch (although my 16-year-old son would say otherwise). By the same token, there are some things I don’t like about the idea of only tracking “success”:
- Tracking end results isn’t transparent enough to facilitate optimization. Drucker’s precept – “what’s measured improves” – works best when the things that are tracked and measured are sufficiently transparent to and directly correlated with the strategies and activities that produce them. This is the sound logic behind keeping upsell/cross-sell measurement separate from MRR when calculating churn.
- Tracking should enable you to triage problems and opportunities. Measuring success alone can lead you to view things as a binary proposition: either the customer achieves success or doesn’t. Even if you allow that more success is better (in Dan’s hypothetical example, more button pushes and greater ROI), it’s a framework that makes it hard to define the terms of optimization. If Dan’s hypothetical customer generates a 75% ROI, is that as good as can be expected or is that number higher (yay!) or lower (dang!) than should have been expected based on trends and like-customer results? And, most importantly, if it was higher or lower, why?
- Only tracking end results doesn’t facilitate knowledge of customers and customer behavior. Customer activity data (the right customer activity data) is going to provide a nuanced view of how customers use the application and can yield insights about why they use it that way. This data can and should be the basis for customer segmentation efforts that can provide significant efficiency and effectiveness gains.
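The point about keeping upsell/cross-sell measurement separate from MRR when calculating churn can be made concrete with a little arithmetic. A minimal sketch, with entirely hypothetical numbers: blending expansion revenue into the churn figure (“net churn”) can mask real cancellations that gross churn would surface.

```python
# Hypothetical monthly figures, for illustration only
starting_mrr = 100_000   # MRR at the start of the month
churned_mrr = 8_000      # MRR lost to cancellations and downgrades
expansion_mrr = 9_000    # MRR gained from upsell/cross-sell

# Gross revenue churn: measures only what was lost
gross_churn = churned_mrr / starting_mrr

# Net revenue churn: expansion offsets losses, hiding the 8% gross loss
net_churn = (churned_mrr - expansion_mrr) / starting_mrr

print(f"Gross churn: {gross_churn:.1%}")  # 8.0% — a real retention problem
print(f"Net churn:   {net_churn:.1%}")    # -1.0% — looks healthy, but isn't the whole story
```

A team tracking only net churn here would see a negative (good) number and might never triage the underlying cancellations, which is exactly the transparency problem described above.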
But this isn’t all about Dan’s blog... you still need to figure out what to order for lunch, right? As much as I dislike “all of the above” recommendations, in this instance it is the one that makes the most sense to me. SaaS companies need to track activity in order to optimize their performance in terms of maximizing CLV and increasing efficiency (e.g., the max MRR per CSM or the new CSM hire ramp-up time). This activity data can and should have value for the customer champion as well, since it can reveal opportunities to increase adoption across the organization and supply compelling, illustrative use cases to present to senior management.
Customer Success Includes User Activity Tracking
That being said, the activities you track must correlate to overall success and ROI. In fact, in order to figure out what those activities should be, you need to start with the same disciplined process for ascertaining the success measure that you will plug into the ROI calculator. But don’t stop there. You need to understand what the customer and CSM activities are that deliver that success. (Ideally, you would push to turn these into customer segment-level insights to avoid inefficient customer-by-customer analyses.) And finally, you turn those activities into operational/usage measures that can be tracked, analyzed, set up on dashboards, and used as tools to manage the performance of the CSM function and to maximize customer value.
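The last step above – turning success-correlated activities into operational measures for dashboards – might look like a simple composite health score. A minimal sketch, assuming hypothetical signals (logins per week, trainings attended, core features adopted) and made-up weights and caps; any real implementation would derive both from the customer-discovery process described here.

```python
def health_score(logins_per_week, trainings_attended, features_adopted,
                 total_features=10):
    """Blend activity signals into a 0-100 score.

    The signals, weights, and caps below are illustrative assumptions,
    not a prescribed formula.
    """
    login_part = min(logins_per_week / 5, 1.0)        # cap credit at 5 logins/week
    training_part = min(trainings_attended / 3, 1.0)  # cap credit at 3 trainings
    adoption_part = features_adopted / total_features # share of core features in use
    return round(100 * (0.3 * login_part
                        + 0.3 * training_part
                        + 0.4 * adoption_part), 1)

# A fully engaged customer scores 100; one with partial feature adoption scores lower
print(health_score(5, 3, 10))  # 100.0
print(health_score(5, 3, 5))   # 80.0
```

A score like this is only useful to the extent each input has been shown to correlate with renewal and ROI; the discipline is in validating the inputs, not in the arithmetic.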
At my former company, we were having significant issues with churn. Customers weren’t implementing all the valuable parts of the application and thus were more likely to decide that renewal wasn’t worth the money, because they weren’t realizing the full value of what they originally paid for. But it was only when we dug deeper and surveyed customers that we realized that our onboarding program needed to be revamped and, most significantly, that there was a strong, direct correlation between training attendance and customer success. Measuring success alone wouldn’t have gotten us to that insight.
And even after the insight had been revealed, tracking success wouldn’t have told us if we were doing what we needed to do to achieve success down the road. By tracking and monitoring – and working to drive – training attendance and satisfaction, we were able to improve both our own and our customers’ chances of success.
So perhaps my son isn’t totally wrong after all: We should start with dessert and work our way back to the sandwich...