As we continue our discussion on measuring sales enablement, it's worth looking at what can go wrong. A whole host of errors commonly creep into the measurement process, so it is important to avoid these mistakes.
Why is it important to avoid errors? Above all, credibility. A trained executive eye, especially a focused and detail-oriented one, will easily spot inconsistencies and conclusions that the underlying data does not support. That undermines the credibility of your analysis, and without credibility, funding for sales enablement measurement initiatives falls by the wayside.
6 Pitfalls of Measuring Sales Enablement
Bad Underlying Data
Above all, getting good, clean data is the biggest problem. With bad data, analysis and measurement are pointless. These are the typical causes of bad data:
- Individual resistance to using sales force automation tools – sales reps hate using these tools. They see a lot of cost and little value in the effort required. If using the system is not an integral part of the sales rep's day-to-day job, opportunity metrics end up being hit or miss.
- Circumventing the process with spreadsheets – It's not unusual for sales managers to use spreadsheets to manage their pipeline. Spreadsheets can provide a "quick and dirty" analysis, but they offer only limited visibility to other stakeholders like sales ops or senior management. Furthermore, without accurate pipeline data, comparison and benchmarking become more difficult.
- Other sales performance metrics – Before starting the measurement process, determine to what extent your team uses its SFA solution and uncover any additional sales performance metrics used across the organization.
Correlation / Causation
With any sort of program, you run into a correlation vs. causation issue. Did the sales enablement program actually help reps become more effective, or was some other factor responsible (new reps, new management, new tools)? When sales performance rises, people line up to take credit. To address this, benchmark your data beforehand and gather qualitative data around the usage of sales enablement tools and programs. In addition, run surveys and focus groups with sales reps to learn what works and what doesn't; this complements your hard usage and sales metrics.
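As an illustration, here is a minimal sketch of that before-and-after benchmarking in Python with pandas. The file name (opportunities.csv) and columns (rep_id, period, used_enablement, won) are hypothetical placeholders for whatever your SFA tool exports, and the resulting number should be read as a hint about causation, not proof.

```python
# A minimal before/after benchmark sketch (assumed column names, not a real schema).
import pandas as pd

# Hypothetical export of opportunities from your SFA tool with columns:
# rep_id, period ("before" or "after" the enablement rollout),
# used_enablement (True/False), won (True/False).
opps = pd.read_csv("opportunities.csv")

# Win rate by period, split by whether the rep adopted the enablement program.
win_rates = (
    opps.groupby(["used_enablement", "period"])["won"]
        .mean()
        .unstack("period")
)
print(win_rates)

# Rough difference-in-differences: did adopters improve more than non-adopters
# over the same window? Suggestive of causation, but not proof on its own.
lift = (
    (win_rates.loc[True, "after"] - win_rates.loc[True, "before"])
    - (win_rates.loc[False, "after"] - win_rates.loc[False, "before"])
)
print(f"Lift tentatively attributable to enablement: {lift:.1%}")
```

Pairing a comparison like this with the qualitative feedback from surveys and focus groups gives you a much stronger story than either source alone.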
The Good Sales Rep Issue
We have all heard the adage that a great salesperson could sell ice to an Eskimo. It rings true for sales enablement programs as well: certain reps and sales managers simply kill it and take full advantage of the tools available to them. This can skew something like an A/B analysis, and it highlights why it is important to set a benchmark.
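One way to blunt the effect of star performers is to compare A and B groups within bands of baseline performance rather than across the whole team. The sketch below assumes a hypothetical rep_performance.csv export with rep_id, group, baseline_win_rate, and post_win_rate columns; any stratification of your own benchmark data would work similarly.

```python
# A minimal sketch of an A/B comparison stratified by baseline performance
# (hypothetical file and column names).
import pandas as pd

reps = pd.read_csv("rep_performance.csv")  # rep_id, group ("A"/"B"),
                                           # baseline_win_rate, post_win_rate

# Bucket reps into quartiles of their pre-program benchmark so that star
# performers don't dominate one side of the comparison.
reps["baseline_quartile"] = pd.qcut(
    reps["baseline_win_rate"], 4, labels=["Q1", "Q2", "Q3", "Q4"]
)

# Compare group A vs. group B within each quartile instead of in aggregate.
comparison = (
    reps.groupby(["baseline_quartile", "group"], observed=True)["post_win_rate"]
        .mean()
        .unstack("group")
)
print(comparison)
```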
Changing Definition of Metrics
One thing you will often see in any sales ops function is that metric definitions change. What qualifies as an opportunity? What happens if your product mix changes? Can you still make a fair comparison? Because shifting definitions break comparisons against older data, this favors some sort of A/B analysis.
Sales Rep and Management Turnover
Turnover can introduce errors into your analysis, and rep tenure appears to be steadily shrinking: according to Sales Hacker, the average tenure of a sales rep is 1.4 years. Is it fair to compare a new rep or sales manager to someone who has been around for years? Address this in your analysis by segmenting reps by tenure.
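As a rough illustration, here is a minimal sketch of that tenure segmentation. The rep_roster.csv file and the tenure_months and quota_attainment columns are hypothetical stand-ins for whatever your sales ops or HR systems export, and the tenure bands are just one reasonable choice.

```python
# A minimal sketch of segmenting reps by tenure (hypothetical file and columns).
import pandas as pd

reps = pd.read_csv("rep_roster.csv")  # rep_id, tenure_months, quota_attainment

# Band reps by tenure so new hires aren't measured against veterans.
bands = [0, 6, 12, 24, 120]
labels = ["0-6 mo", "6-12 mo", "1-2 yr", "2+ yr"]
reps["tenure_band"] = pd.cut(reps["tenure_months"], bins=bands, labels=labels)

# Average quota attainment and headcount within each band.
summary = (
    reps.groupby("tenure_band", observed=True)["quota_attainment"]
        .agg(["mean", "count"])
)
print(summary)
```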
Analytics Expertise
While every marketer should have a solid set of analytics skills, this type of measurement calls for deeper expertise. Previous experience running statistical models will help you find the real trends; the results are too important to risk getting wrong.
Next up on measuring sales enablement, I will share with you how I measured one of my programs.