Measuring Partner Enablement
Many people are under the impression that the effectiveness of enablement programs is not measurable.
This isn’t true.
Enablement effectiveness can be measured whether the program supports sales, partnerships, or leadership.
Measure the correct KPIs
The problem is that most organizations leave the measuring to the enablement teams, whose KPIs tend to reflect their own internal priorities rather than the broader impact on the business. Most departments operate this way: groups and teams measure what's important to them.
Training and enablement teams can't effectively evaluate themselves from within (in fact, one could argue this is true of all systems, human or otherwise). For enablement to be graded, there need to be measurements that connect the content and programs to the KPIs relevant to the employee's role.
This sounds easier than it is. We have become a world so mired in data that groups tell the story they want to tell rather than the story that needs to be told.
Training and enablement fall short when reporting completions
For example, let's say a TL&D team was tapped to help with worker safety enablement. The training team would work closely with site managers to build safety training, support materials, and a delivery mechanism, such as an online course. If a worker completes the training, they are in compliance.
This is an easy story for a TL&D team to tell. It’s simply a record of attendance or a passing test score. There can be leveled certifications showing a progression and grading of learning, often marked with a point, badge, or even belt system.
Training completion can be used to tell a tinted story of success in several ways. For example, the number of participant completions as a percentage of the total employee population gives insight into the communication, availability, and accessibility of learning and support materials. A high completion percentage means employees heard about the training and were able to complete it. That's a good thing. No one's arguing that.
Training completion can also tell the story of TL&D resource throughput and utilization. For example, a certain number of assets created and consumed, combined with the training hours completed per employee, translates into the number of employees one content creator can impact. That's useful information for scaling staffing models or forecasting capacity within training and enablement teams.
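To make the arithmetic concrete, here is a minimal sketch of those two quantity metrics. The record structure, field names, and numbers are hypothetical stand-ins for whatever a learning platform actually exports, not a reference to any specific system.

```python
# Minimal sketch of the two quantity metrics described above.
# All records and counts below are hypothetical.

completion_records = [
    {"employee_id": 1, "course": "site-safety-101", "completed": True},
    {"employee_id": 2, "course": "site-safety-101", "completed": True},
    {"employee_id": 3, "course": "site-safety-101", "completed": False},
]
total_employees = 3
content_creators = 1  # TL&D staff who built the safety course

completions = sum(1 for r in completion_records if r["completed"])

# Completion rate: a signal of communication, availability, accessibility.
completion_rate = completions / total_employees

# Throughput: employees reached per enablement content creator,
# the kind of number used for staffing and capacity forecasting.
employees_per_creator = completions / content_creators

print(f"Completion rate: {completion_rate:.0%}")
print(f"Employees impacted per creator: {employees_per_creator:.1f}")
```

Both numbers say something about reach and internal efficiency, which is exactly why they are easy to produce and easy to over-interpret.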
This type of reporting can bring some value, and even some ROI if the training is contractually required or serves as a hedge against litigation. But let's face it: completion and compliance reporting doesn't give a full picture of enablement effectiveness, although many teams paint in broad strokes as if it does.
To tell the real story of enablement effectiveness, TL&D teams need to focus on more than completion and compliance, which are quantity measures.
Quality over quantity
Enablement quantity is the story TL&D wants to tell; quality is the story that needs to be told.
Quantity is such a universal success measure for TL&D because it's something they can track internally. They don't have to partner with a line of business for data collection and analysis; they just need to track who has completed what training or who has viewed what content. That's why it's an easy story to tell.
Quality is more difficult to evaluate and quantify because it requires data collection beyond TL&D's sphere of control. It requires looking at the metrics of the lines of business the learners are tied to, and the behaviors that move those metrics. Enablement is about supporting repeatable behaviors that achieve the desired result.
We can define quality as the degree to which performance meets expectations.
For the sake of evaluating training and enablement, we need to ask two questions:
- Does applying the knowledge and performing the actions described in the learning produce the expected and desired result?
- Does doing so impact the KPIs relevant to the line of business?
This means partnering with different teams and stakeholders to see if any data or anecdotes can give clues as to whether the efforts in enablement are making a difference.
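As a rough illustration of what that partnership could produce, the sketch below joins hypothetical completion records with a hypothetical line-of-business KPI (safety incidents per quarter, following the earlier example) and compares the trained and untrained groups. It is a sketch of the idea only; a real evaluation would use the partner team's actual KPI data and account for confounding factors rather than relying on a raw comparison.

```python
# Hedged sketch of a quality check: connect training completions to a
# line-of-business KPI and compare groups. All IDs and figures are
# hypothetical and for illustration only.

completed_training = {101, 102, 105}  # employee IDs who finished the course

# Hypothetical line-of-business KPI: safety incidents per quarter
kpi_by_employee = {101: 0, 102: 1, 103: 4, 104: 3, 105: 0}

trained = [v for emp, v in kpi_by_employee.items() if emp in completed_training]
untrained = [v for emp, v in kpi_by_employee.items() if emp not in completed_training]

def average(values):
    # Avoid division by zero if a group happens to be empty.
    return sum(values) / len(values) if values else float("nan")

print(f"Avg incidents per quarter, trained:   {average(trained):.2f}")
print(f"Avg incidents per quarter, untrained: {average(untrained):.2f}")
```

The point is not the arithmetic; it's that the KPI lives with the line of business, so the comparison is only possible when TL&D and the business partner on the data.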
Sometimes a simple compliance report is good enough, and the quality of the enablement doesn’t need to be questioned. Sometimes, enablement is there to check a box, and that box-check brings value.
But when the only measurement is box-checking, it is difficult for enablement teams and partner teams to argue for added investment in their efforts.
Partner enablement is yet another example of the cross-functional nature of partnership teams. Setting KPIs that can be directly affected by partner enablement, and then measuring the effectiveness of those enablement efforts, will be a critical function of partnership teams in the near future.