Stop Starting with Smiles and Begin with Business Outcomes: Results-Based Integrated Design

Repeated studies over the past 70 years have questioned the impact of training. When hard times hit, training’s budget is one of the first to go! Yet competitive advantage depends on human productivity, innovation, and the pursuit of excellence, and there is a better way.

In its 2003 State of the Industry Report, the American Society for Training and Development (ASTD) identified controlling costs, evaluating training, and understanding return on investment (ROI) as key challenges for the performance improvement industry. The integrated model described here addresses this serious need, applying a logical, visible, repeatable process to design training with the business need in mind.
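As a quick refresher, training ROI is conventionally expressed as net program benefits divided by program costs. A minimal sketch, with entirely hypothetical dollar figures:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return training ROI as a percentage: net benefits over costs."""
    return (benefits - costs) / costs * 100

# Hypothetical numbers: a program that cost $50,000 and produced
# $80,000 in measurable business benefits yields a 60 percent ROI.
print(training_roi(80_000, 50_000))  # 60.0
```

The hard part, of course, is not the arithmetic but isolating credible benefit figures, which is exactly what the model below is designed to support.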

Over 85 percent of benchmark organizations surveyed by ASTD in its 2002 State of the Industry Report evaluate training with “Feel Good Smile Sheets”, those feedback forms distributed to participants at the end of training. What do most Smile Sheets do? They evaluate how the participant felt about the instructor. Smile Sheets don’t measure whether trainee behavior will change.

You want behavior change? Escape from this mediocre majority; take steps towards realizing breakthrough performance.

One of the foremost authorities on training assessment is Donald Kirkpatrick. He introduced his evaluation model in 1959, and it remains the standard for talking about training evaluation. In simple form, Kirkpatrick’s model has the following four levels:

Level One – Did they like it?

Level Two – Did they learn it?

Level Three – Did they use it?

Level Four – Did it make a difference?

Level One – Did they like it?

The Smile Sheets mentioned earlier are Level One evaluation in Kirkpatrick’s model. The real purpose of Level One is tracking customer satisfaction, to make it right this time and improve it next time, yet most users stop there, treating it as the only evaluation of the training. The participant’s immediate assessment of how well they liked the training, the instructor, and the content is a limited, biased measure.

But the whole purpose of training evaluation is to prove the training helped move the organization toward realizing business outcomes. Smile Sheets don’t do this. So turn Kirkpatrick upside down! Begin with business outcomes instead!

Level Four – Did it make a difference?

The ultimate assessment in Kirkpatrick’s model attempts to gauge the difference the training made in terms of business or organizational outcomes. For example, a company might have surveyed its customers and found customer satisfaction at 75 percent. Leadership wants more. A subsequent survey finds the measure has moved from 75 percent to 92 percent. Did training make a difference?

Kirkpatrick points out that in a complex system it is very difficult to determine the degree to which any one strategy is responsible for the movement of such a metric. He suggests that training professionals look for evidence that the training had an impact on the business rather than trying to prove it. Level Four evaluation strategies that target proof are expensive because of the complexity of multiple influences; however, one powerful aspect of Level Four is that it calls for the training team to clarify the business outcome with stakeholders before designing the training. Most corporate training is designed and delivered without any consideration of a business measure, and in those cases the intervention is unlikely to realize its potential impact on the business.

Level Three – Did they use it?

Unfortunately, less than 10 percent of corporate training is evaluated at this level. There are two common approaches to Level Three evaluation. One is a post-class participant survey that asks to what degree participants are using information from the training course. The other is observation of participants after the training. Both approaches yield helpful information; they differ in the reliability of the data and in cost.

Participant surveys depend solely on self-reported data, so the design of the survey instrument is very important. The instrument must ask questions, and offer answer options, that are clear in terms of the behavior you seek to evaluate. If participants are asked to rank how often they use the behavior, define the rankings clearly. While dependent on truthful reporting, this method is far more cost-effective than interviews or observations. Kick it up a notch with random yes-or-no observation checklists completed by managers, and watch buy-in on the training’s impact grow.
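One way to keep those rankings unambiguous is to attach a defined numeric score to each frequency label before tabulating responses. A minimal sketch; the scale labels, definitions, and responses below are hypothetical:

```python
# Map clearly defined frequency rankings to numeric scores so that
# self-reported Level Three data can be summarized consistently.
FREQUENCY_SCALE = {
    "never": 0,    # has not used the behavior since training
    "monthly": 1,  # uses the behavior about once a month
    "weekly": 2,   # uses the behavior about once a week
    "daily": 3,    # uses the behavior every working day
}

def average_usage(responses: list[str]) -> float:
    """Average self-reported usage across participants, on the 0-3 scale."""
    return sum(FREQUENCY_SCALE[r] for r in responses) / len(responses)

# Hypothetical survey responses from five participants.
print(average_usage(["daily", "weekly", "weekly", "monthly", "daily"]))  # 2.2
```

Publishing the scale definitions alongside the survey lets managers compare their yes-or-no observation checklists against the same behavior descriptions.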

Level Two – Did they learn it?

Assessing the degree of learning takes us to the second level. In most cases this shows up as a paper-and-pencil test, which tells us nothing about the higher-order critical thinking and problem-solving skills essential to on-the-spot workplace application. Move Level Two evaluation away from paper-and-pencil testing and create authentic, real-world forms of assessment. Content learned and tested in context lets us test for learning, not just for passing. There is not a doctor on the planet who will give a pharmaceutical representative a multiple-choice test; yet most will verbally challenge the rep and expect rapid responses. So assess the rep verbally: how well can they spontaneously articulate study results that match the doctor’s needs?

When Level Two assessments are designed with processes to observe the actual application of the new knowledge, voila! You get a realistic view of how well the desired change will transfer from the learning place to the workplace. This reinforces the idea that skill development is an ongoing process. Provide managers with this type of assessment after a training program, and they will have a clear indication of how to support the employee’s continued development.

Begin with Level Four: know what your goal is before you design your training. It’s much easier to measure, and you’ll be able to defend your department’s budget before the next economic downturn. While each level has its place, turning the evaluation model on its head can help training garner more appreciation and respect in your organization.

Stop starting with Smiles!
