First identified by Don Kirkpatrick in his seminal works on training evaluation, Level 1 evaluation helps an organization to assess participants' reactions to a learning event, facilitator, setting, materials, and learning activities. Like a canary in the proverbial coal mine, Level 1 is an early indicator of whether the participant valued the experience enough to positively speak about it to others.
It can also flag a negative experience likely to inhibit learning, hinder application back on the job, and lead the learner to speak negatively about the training to others. Level 1 evaluation, by itself, should not be the only training evaluation tool you use.
What is it?
Level 1 evaluation means measuring the reaction to the learning by evaluating the trainee's perceptions about the learning experience. The intent is to determine if the trainees liked the training and if the training was relevant to their work, to make potential improvements that positively impact learning. There are several areas of consideration: the program content, sequencing, and materials; the logistics of the training and the training environment; the instructor's facilitation, interaction, and organization of the content; and the expectation for applying the training back on the job.
1| Determine how the data will be gathered.
* Use a paper or electronic questionnaire immediately after the learning event.
* Use interviews and focus groups as a follow-up to the reaction questionnaires to gather more information.
* Provide in-session polling with an audience response system tool.
* Gather in-session data with wall charts, index cards, and other implements.
* Connect with the learner using social media such as Twitter or Facebook to find out what they are really saying about the training.
2| Choose your timing. Decide when you'd like to gather the data. Will you wait until the end of a session or unit? Depending on the length of the program, will you gather reactions at the end of a day, a module, or the whole program? If you are running a pilot program, what do you want to know during the session so that you can make adjustments?
3| Align to your desired outcomes. Determine the business outcomes with your stakeholders first and design your Level 3 and 4 evaluations. Determine the learning objectives and the KSAs (knowledge, skills, and abilities) for the Level 2 evaluation. Write your Level 1 evaluation to assess delivery and the learning experience.
4| Write questions that ask for quantified responses. Most questions are subjective (including Likert-scale questions such as "strongly agree"). What do you want to know? "Did the training include practice opportunities?" with numeric response options tells you something different from asking, "Did you like the practice opportunities?" If you use Likert-scale questions, provide a "comments" section so learners can expand on their answers.
5| How will you use the evaluation information? If you ask how the learner rates the usefulness of the training on a scale of 1 to 5, what will you do with that information? Think about what you want to know, and ask questions that give you those answers. Pay attention to the answers. Asking the right questions helps you make sure the training is doing what it's supposed to do.
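The quantified responses called for in steps 4 and 5 can be tallied very simply. The sketch below shows one way to summarize numeric ratings per question; the question wording, the 1-to-5 scale, and the sample ratings are illustrative assumptions, not a prescribed instrument.

```python
# Minimal sketch of tallying quantified Level 1 responses.
# Question text, the 1-5 scale, and the ratings are hypothetical examples.

def summarize_item(responses):
    """Return the mean rating and response count for one question."""
    return sum(responses) / len(responses), len(responses)

# Each key is a Level 1 question; values are 1-5 ratings from learners.
survey = {
    "Training included practice opportunities": [5, 4, 4, 3, 5],
    "Content was relevant to my job": [4, 4, 3, 4, 2],
    "Facilitator was well organized": [5, 5, 4, 5, 4],
}

for question, ratings in survey.items():
    mean, n = summarize_item(ratings)
    print(f"{question}: mean {mean:.1f} (n={n})")
```

Pairing each mean with its response count keeps a single enthusiastic or unhappy respondent from skewing how you read the results.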
Myth 1. Level 1 is just a "smile sheet" and offers no relevant or useful information.
Fact. Level 1 evaluation has its place in helping uncover content sequencing problems, delivery and facilitation problems, and training environment problems, allowing you to quickly rectify them. It also identifies what the learner perceives to be going well.
Myth 2. If the learners are happy, then the training is a success.
Fact. Happy learners and unhappy learners may or may not acquire knowledge and skills. You must complete a Level 2 evaluation to identify whether they have learned the content.
Myth 3. Learners can accurately assess the training's value to their actual job.
Fact. Learners may not know how they will use the training or whether the job environment is conducive to applying new knowledge. This validation should be done during the instructional design phase with subject-matter experts and with manager input and support.
Learners provide their subjective perceptions about the pace, the delivery, the organization, and the activities of the program. They can be biased by the difficulty of the material, the personality of the trainer, and other factors. The Level 1 evaluation should be one tool in your evaluation arsenal, identifying the needed improvements that will strengthen the program. Make sure that the Level 1 reaction form includes
* reaction to the content
* reaction to the effectiveness of the trainer or facilitator
* reaction to the materials, such as handouts, audiovisuals, case studies, and activities.
Why it works
Level 1 evaluation, used correctly, has a significant place in understanding the satisfaction of the learner. Immediate feedback helps the facilitator and organization make needed adjustments to the program. Level 1 is so much more than a smile sheet!
Taking Your Level 1 Evaluation to Level 2
So, what can you do (especially on a limited budget) to strengthen your Level 1 evaluation? Incorporate Level 2 into your Level 1. The next level of evaluation establishes whether the learner learned what the training was designed to teach. How do you do that? Add some of the components of a Level 2 evaluation into your Level 1.
[check] One of the easiest approaches is to have learners self-report their level of knowledge at the beginning of the training and again at the end. Ask them to re-rate their pre-training assessment for "hindsight" accuracy, then report the improvement between the two self-assessments.
[check] Add a pre- and a post-test. Have learners demonstrate the actual performance before any training and then after the training. Quantify the results.
[check] Link the actual job and the job outputs to what you are training. Have learners demonstrate their current level of performance at the beginning of the training. Provide a checklist of desired behaviors during the training and retest at the end of the training.
[check] Provide learners the opportunity to demonstrate what they know before the training. During the training, have them evaluate and coach each other to achieve the measurable outcomes of the training.
[check] At the end of the training, ask learners to quantify their learning improvement and ways that they will implement the new learning back on the job.
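The pre- and post-test items above call for quantifying the gain. A minimal sketch of that arithmetic follows; the learner labels and percentage scores are hypothetical, and the gain is reported in simple percentage points.

```python
# Sketch of reporting pre-/post-test improvement when blending
# Level 2 components into a Level 1 evaluation.
# Learner labels and scores are hypothetical examples.

def gain(pre, post):
    """Percentage-point improvement from pre-test to post-test."""
    return post - pre

scores = {  # learner -> (pre-test %, post-test %)
    "learner_a": (55, 85),
    "learner_b": (70, 90),
}

for learner, (pre, post) in scores.items():
    print(f"{learner}: {pre}% -> {post}% (gain {gain(pre, post)} points)")

# Average gain across the cohort, for the summary report.
avg_gain = sum(post - pre for pre, post in scores.values()) / len(scores)
print(f"average gain: {avg_gain:.1f} points")
```

Reporting both individual and average gains lets you show stakeholders that learning occurred without waiting for a full Level 2 instrument.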
Stone, Ron Drew. Aligning Training for Results. San Francisco: Pfeiffer, 2008.
Parry, Scott B. Evaluating the Impact of Training: A Collection of Tools and Techniques. Alexandria, VA: ASTD, 1997.
Phillips, Jack J. Handbook of Training Evaluation and Measurement Methods. Houston: Gulf Professional Publishing, 1997.
Kirkpatrick, Donald L., and James D. Kirkpatrick. Implementing the Four Levels: A Practical Guide for Effective Evaluation of Training Programs. San Francisco: Berrett-Koehler, 2007.
Phillips, Jack J., et al. Make Training Evaluation Work. Alexandria, VA: ASTD, 2004.
Phillips, Jack J., and Ron Stone. How to Measure Training Results: A Practical Guide to Tracking the Six Key Indicators. New York: McGraw-Hill, 2002.
Brinkerhoff, Robert O. Telling Training's Story: Evaluation Made Simple, Credible, and Effective. San Francisco: Berrett-Koehler, 2006.
Gaines Robinson, Dana, and James C. Robinson. Training for Impact: How to Link Training to Business Needs and Measure the Results. San Francisco: Jossey-Bass, 1989.
Geri Lopker and Rhonda Askeland are president and senior consultant, respectively, of Geri Lopker & Associates. With almost 40 years' experience between them, Lopker and Askeland present internationally, helping organizations achieve breakthrough performance results; www.lopker.com.…