Best Practices for Measuring Enablement Success

Measuring enablement success can sometimes feel like detective work, but here are some best practices, depending on the sales environment, listed in order of effectiveness.

Direct observation

The number one, most effective way to know whether enablement is being applied is direct observation.

Partner leaders should care enough to encourage and observe sellers as they attempt something new. Whether during a ride-along for face-to-face meetings or while listening to call recordings, nothing beats direct observation.

For the evaluation to be effective, the partner leader must know intimately what was in the enablement training so that they can identify the new behaviors in action. If they don’t, they may simply coach and evaluate the interaction as is, without recognizing that the seller is attempting something new.

Partner leaders also need to track whether or not an attempt was made, and the tracking needs to be consistent. That is the only way to begin estimating EAR.
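
To make that tracking concrete, here is a minimal sketch (in Python, with illustrative names and fields, not any particular tool) of how a consistent observation log could be rolled up into an EAR estimate, assuming EAR is treated as the share of trained sellers observed attempting the new behavior at least once.

```python
# Minimal sketch: rolling a partner leader's observation log up into an
# EAR estimate. Assumes EAR is treated as the share of trained sellers
# observed attempting the new behavior at least once; names are illustrative.

from dataclasses import dataclass


@dataclass
class Observation:
    seller_id: str
    attempted: bool  # did the leader see an attempt at the new behavior?


def estimate_ear(trained_sellers: set[str], log: list[Observation]) -> float:
    """Share of trained sellers with at least one observed attempt."""
    if not trained_sellers:
        return 0.0
    attempted = {o.seller_id for o in log
                 if o.attempted and o.seller_id in trained_sellers}
    return len(attempted) / len(trained_sellers)


# Example: three trained sellers, attempts observed for two of them.
trained = {"ana", "raj", "mei"}
log = [Observation("ana", True), Observation("raj", False), Observation("mei", True)]
print(f"Estimated EAR: {estimate_ear(trained, log):.0%}")  # 67%
```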

There are many coaching philosophies and methodologies to help learners as they adjust and tune what they learned to the real world. But no coaching can happen if there is no attempt, just as no evaluation of enablement quality can be made without one.

Systematic observation

There are systems that can help with indirect observation of how learning is applied, and they can be used to help estimate EAR. Many call centers have keyword-spotting systems that scan the audio of a conversation and hunt for keywords and phrasing. Choosing keywords related to the enablement training can show evidence that the learning is being used.

As an example, if sellers are expected to talk about a new partner they just learned about, the name of that partner can be one of the keywords the system listens for. If that phrasing shows an uptick in use shortly after the training, there could be a correlation between the two, and there are now grounds for exploring the quality of the enablement.
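
As a rough illustration of that idea, the sketch below compares how often a keyword shows up in call transcripts before and after a training date. The data, field names, and keyword are illustrative assumptions, not the output of any particular keyword-spotting product.

```python
# Minimal sketch: comparing how often a trained keyword (e.g. a new
# partner's name) appears in call transcripts before vs. after training.
# The transcript data and field names are illustrative assumptions.

from datetime import date

calls = [
    {"date": date(2024, 5, 20), "transcript": "Pricing questions only."},
    {"date": date(2024, 6, 3),  "transcript": "We partner with Acme for support."},
    {"date": date(2024, 6, 10), "transcript": "Acme could cover the rollout."},
]


def mention_rate(calls: list[dict], keyword: str, start: date, end: date) -> float:
    """Fraction of calls in [start, end) whose transcript mentions the keyword."""
    window = [c for c in calls if start <= c["date"] < end]
    if not window:
        return 0.0
    hits = sum(keyword.lower() in c["transcript"].lower() for c in window)
    return hits / len(window)


training_day = date(2024, 6, 1)
before = mention_rate(calls, "Acme", date(2024, 5, 1), training_day)
after = mention_rate(calls, "Acme", training_day, date(2024, 7, 1))
print(f"Before: {before:.0%}  After: {after:.0%}")  # an uptick hints at attempts
```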

As a reminder, these systematic tools are great for seeing whether sellers are attempting to apply what they learned, but they may not offer much insight into what specifically is or isn’t working, or why.

That’s ok. Those are the next steps.

Systematic tools are also a great way to objectively show when learners are NOT applying what they learned, which is much more important. If sellers are not applying what they learned, that could be a symptom of issues that go beyond training and enablement, and it is always worth further investigation.

Two-punch surveys

There are a lot of opinions on the effectiveness and reliability of employee surveys, and getting them completed can be as much work as promoting and chasing training compliance.

They do have their advantages though.

Surveys are inexpensive, easy to distribute, and come with rich reporting, so they are easy to analyze. In many cases, they may be the only option for getting feedback across lines of business.

The best way to home in on EAR is to use surveys in two rounds. The first round goes out immediately after the training and enablement, asking sellers the following questions:

  • How likely are you to apply what you have learned?
  • When do you expect to attempt it?

The second round is a follow-up survey, sent based on the timeframe the sellers gave, asking whether they actually DID attempt to apply what they learned.
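
Here is a minimal sketch of what the two-punch comparison could look like once both survey rounds are in. The sellers, fields, and answer values are illustrative assumptions.

```python
# Minimal sketch: comparing stated intent (survey 1) with reported attempts
# (survey 2) per seller. Field names and answer values are illustrative.

intent = {"ana": "likely", "raj": "unlikely", "mei": "likely"}   # right after training
attempted = {"ana": True, "raj": False, "mei": False}            # follow-up survey

said_likely = {s for s, answer in intent.items() if answer == "likely"}
followed_through = {s for s in said_likely if attempted.get(s, False)}

print(f"Said they would apply: {len(said_likely)}")       # 2
print(f"Reported an attempt:   {len(followed_through)}")  # 1
# A wide gap between the two numbers is a learning-culture signal,
# not just an EAR data point.
```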

This two-punch survey approach can not only help to better understand if sellers are applying what they learned, but it can also give clues as to the health of the overall learning culture.

Do sellers trust the enablement, and their organization, enough to attempt what is taught?

Is the enablement intentional enough that it can be applied within a timeframe that is reasonable for a seller?

I mean, would it make sense to learn something that a seller won’t apply for many months?

Follow-up interviews

Follow-up interviews involve an actual conversation with the seller sometime after they have applied the learning (hopefully shortly after). The interview can ask reflective questions to start gauging more than just whether the application attempt was made.

This style of data gathering can be a slippery slope: the focus can slip away from learning attempts and enablement quality and quickly become a coaching conversation. This is not a coaching session, because it does not seek to tweak or tune the seller’s behavior to be more effective.

The focus should be on the attempt to apply the learning, and how the enablement can be improved from an experience standpoint.

Combine the techniques

The very best approach is to combine the observation techniques.

Use systematic data gathering along with surveys to correlate trends and validate what sellers reported.

Did they say they were going to apply what they learned?

Well, let’s see whether the keyword data clouds or corroborates the seller's intent.
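
A sketch of that cross-check is below, pairing each seller's stated intent with keyword evidence from their recorded calls. All of the data, names, and thresholds are illustrative assumptions.

```python
# Minimal sketch: cross-checking each seller's stated intent against
# keyword evidence from their recorded calls. All data is illustrative.

intent = {"ana": "likely", "raj": "likely", "mei": "unlikely"}
keyword_hits = {"ana": 4, "raj": 0, "mei": 1}  # post-training mentions per seller

for seller, answer in intent.items():
    hits = keyword_hits.get(seller, 0)
    if answer == "likely" and hits == 0:
        verdict = "intent not corroborated, worth a direct observation"
    elif answer != "likely" and hits > 0:
        verdict = "attempting despite low stated intent, worth a conversation"
    else:
        verdict = "survey and call data agree"
    print(f"{seller}: {verdict}")
```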

Mentors can then use their direct observations to fill in the story of the circumstances under which the attempts were made.

Finally, follow-up interviews can be conducted after analyzing the data, making it possible to ask better questions and target specifically how the enablement experience can be improved based on the data and circumstances.

Put all this together, the data and the direct observations, then look at the seller's KPIs, and there can be a confident story about the effectiveness of the enablement.

It takes a true internal partnership for all of this to work, but that is what partner enablement teams always need to be, because they are in service to the employees they support.