This post addresses the surprising volatility of the Net Promoter Score (NPS) and why it remains a big challenge for the teams that track it.
Most companies are focused on continuously improving customer satisfaction, so tracking a business's Net Promoter Score is an important step in developing a culture of customer centricity. Over the past years, the Net Promoter Score has been widely adopted by organisations around the world as a KPI of customer satisfaction. Beyond the KPI itself, NPS has the big advantage of triggering a cultural shift. I have personally seen many companies shift towards a more customer-centric culture after adopting an NPS program: the value comes not from the KPI itself but from the whole effort to better understand the customer and put them at the centre of the boardroom.
If, however, you are only tracking the score for its own sake, you are wasting time and effort. The score itself is highly volatile, and that instability is behind much of the Customer Experience department's frustration. Having to explain to management that a seemingly considerable increase or decrease in the score may not be meaningful is an unpleasant task at best: it generates long-winded discussions and, at worst, may undermine the credibility and the entire effort of the Customer Experience team.
Let's look at some important aspects of the NPS KPI: its volatility, sample size and margin of error.
NPS Sample Size
By converting an 11-point scale into a 2-point scale of detractors and promoters (throwing out the passives), information is lost. What's more, this new binary scale roughly doubles the margin of error around the net score (promoters minus detractors). Unfortunately, this means that if you want to show an improvement in the Net Promoter Score over time, you need a sample around twice as big, and often much bigger, for the difference to be distinguishable from sampling error. This has been one of my biggest complaints about the way NPS is calculated since I was first introduced to it.
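To make the margin of error around a net score concrete, here is a minimal sketch with made-up promoter and detractor proportions. Since promoters and detractors are mutually exclusive categories from the same sample, the net score is a difference of two dependent proportions, with variance (p + d − (p − d)²) / n:

```python
import math

# Hypothetical survey results (proportions are assumptions for illustration)
n = 400
p_promoters = 0.45
p_detractors = 0.20
nps = p_promoters - p_detractors  # net score as a proportion, here 0.25

# Standard error of a difference of two proportions from the SAME sample:
# Var(P - D) = (p + d - (p - d)^2) / n
se_nps = math.sqrt((p_promoters + p_detractors - nps**2) / n)

# 95% margin of error on the net score, in NPS points (x100)
moe_nps = 1.96 * se_nps * 100
print(f"NPS = {nps*100:.0f}, 95% margin of error = +/-{moe_nps:.1f} points")
# prints "NPS = 25, 95% margin of error = +/-7.5 points"
```

Even with 400 respondents, a swing of seven points in either direction is indistinguishable from sampling noise.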
I've seen organisations looking at NPS dashboards and investigating why the NPS has gone up or down over a period of time. In too many cases, adding error bars to the graphs shows that the changes are within the margin of error. The simple workaround is to use the raw mean and standard deviation when running statistical comparisons: I found that the mean of the likelihood-to-recommend responses predicts the Net Promoter Score quite well. You can keep the net scoring system for the executives but use the raw data for the statistics. For that specific reason I strongly suggest that my clients show the NPS score next to a bar chart of the distribution of the votes, including the mean, median and mode of that distribution.
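As an illustration, here is a short sketch (the batch of 0–10 responses is invented) that reports the net score together with the distribution and its mean, median and mode, the kind of view I suggest putting next to the NPS on a dashboard:

```python
from statistics import mean, median, mode
from collections import Counter

# Hypothetical batch of 0-10 "likelihood to recommend" responses
responses = [10, 9, 9, 8, 8, 7, 10, 6, 9, 10, 8, 5, 9, 7, 10, 9, 3, 8, 10, 9]

promoters = sum(1 for r in responses if r >= 9)   # 9s and 10s
detractors = sum(1 for r in responses if r <= 6)  # 0 through 6
nps = (promoters - detractors) / len(responses) * 100

print(f"NPS: {nps:.0f}")  # prints "NPS: 40"
print(f"mean={mean(responses):.2f}  median={median(responses)}  mode={mode(responses)}")
print("distribution:", dict(sorted(Counter(responses).items())))
```

The same score of 40 can hide very different distributions, which is exactly why the summary statistics belong next to it.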
Not throwing away the 7s and 8s and working with the mean also helps explain the volatility. Working with NPS data from several customers over the last 10 years, I found a strong correlation between the mean and the properly computed NPS: R = .959. You can test it yourself using the regression equation:
NPS = -1.55 + 0.39*mean - 0.07*mean^2 + 0.006*mean^3 (Adj-R2 = 96%)
What I found is that the mean of the likelihood-to-recommend question can predict around 96% of the variability of the Net Promoter Score. That means there is a loss of about 4% when converting between the mean and the Net Promoter Score, which also explains part of the volatility observed in the NPS.
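The cubic fit can be sketched as a small function. Two assumptions on my part: the fitted value is read as a proportion on a −1 to 1 scale (so it is multiplied by 100 to get NPS points), and the quadratic coefficient is taken as 0.07, since a coefficient of 0.7 would push predictions far outside any plausible NPS range:

```python
def predicted_nps(mean_ltr: float) -> float:
    """Predict the Net Promoter Score (in points, -100..100) from the mean
    0-10 likelihood-to-recommend, using the cubic fit quoted above.
    Assumes the fit returns a proportion on a -1..1 scale."""
    p = -1.55 + 0.39 * mean_ltr - 0.07 * mean_ltr**2 + 0.006 * mean_ltr**3
    return p * 100

for m in (6.0, 7.0, 8.0, 9.0):
    print(f"mean {m:.1f} -> predicted NPS {predicted_nps(m):+.0f}")
```

The predictions rise monotonically with the mean (roughly −43 at a mean of 6 up to +66 at a mean of 9), which is the behaviour you would expect from a sensible mean-to-NPS mapping.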
Tracking the NPS as a mean (rather than a score), together with its standard deviation, explains small movements in the NPS. In those cases, the score shows a movement that is entirely explainable by the margin of error of your sample.
Margin of Error
The margin of error of your sample depends, among several factors, on the number of respondents and your total population. In short, Company A, with 50 customers and an answer from all of them, is in a totally different position from Company B, with 5,000 customers and the same 50 valid answers. Company B needs 357 valid answers to get a 5% margin of error at 95% confidence; with just 50 answers, the margin of error is 13.79% at 95% confidence. Company A, of course, has a 0% margin of error, since it surveyed its entire population. The bad news: all of this holds for the mean; if we consider the NPS score, then Company B needs at least double that sample.
The exact figure can be calculated with the formula for the margin of error of a sample proportion, remembering that you have reduced the sample by assigning observations to just two categories: detractors and promoters.
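A minimal sketch of that calculation, including the finite population correction, reproduces the Company B figures above:

```python
import math

def moe_proportion(n: int, population: int, z: float = 1.96, p: float = 0.5) -> float:
    """Margin of error for a sample proportion with the finite population
    correction applied (p = 0.5 is the worst case; z = 1.96 is 95% confidence)."""
    se = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    return z * se * fpc

# Company B: 5,000 customers, 50 valid answers
print(f"{moe_proportion(50, 5000):.2%}")  # prints "13.79%"

# Smallest sample that gets the margin of error down to 5%
n = next(n for n in range(2, 5001) if moe_proportion(n, 5000) <= 0.05)
print(n)  # prints "357"
```

Note how quickly the required sample grows: going from a 13.79% to a 5% margin of error takes seven times as many respondents.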
One last consideration: does your sample represent your population?
The NPS observed at specific touchpoints is not the NPS of your customer base; it is the NPS of the customers who had experiences at those specific touchpoints. If you want to calculate the overall NPS of your customer base, make sure the sample of respondents represents that customer base, otherwise your sample will be biased. For instance, the NPS of customers contacting customer care cannot be considered a measure of the whole customer base, only of those specific customers who needed to get in touch with customer care. There is a statistical technique, called bootstrapping, that assigns measures of accuracy to sample estimates; I will not go into detail in this post.
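For the curious, a percentile bootstrap for the NPS takes only a few lines. This is a sketch on invented responses: resample the answers with replacement many times, recompute the score each time, and read a confidence interval off the resulting distribution:

```python
import random

def nps(sample):
    """Net Promoter Score in points (-100..100) from raw 0-10 responses."""
    promoters = sum(1 for r in sample if r >= 9)
    detractors = sum(1 for r in sample if r <= 6)
    return (promoters - detractors) / len(sample) * 100

random.seed(42)
# Hypothetical raw 0-10 responses from one touchpoint
responses = [random.choice([3, 5, 6, 7, 7, 8, 8, 9, 9, 9, 10, 10]) for _ in range(300)]

# Bootstrap: resample with replacement, recompute the NPS each time
boot = sorted(
    nps(random.choices(responses, k=len(responses))) for _ in range(2000)
)
lo, hi = boot[50], boot[1949]  # ~2.5th and ~97.5th percentiles
print(f"NPS = {nps(responses):.0f}, ~95% bootstrap CI ({lo:.0f}, {hi:.0f})")
```

Reporting the interval instead of the bare score makes the volatility visible to the executives rather than leaving it for the Customer Experience team to explain after the fact.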
As I mentioned at the beginning, NPS, and especially an NPS program, helps organisations become customer centric. That is definitely a positive aspect of NPS. My recommendation: unfortunately there is no single KPI that fits all, nor a single question that explains everything. If you are responsible for an NPS program, it is important to identify other KPIs relevant to your business and report them together with the NPS; better still if you can explain a correlation between those KPIs and the NPS itself.
Not sure your NPS score is correctly calculated? Feel free to get in touch with me.