Donald Kirkpatrick first published his ideas about training evaluation in 1959, but it wasn’t until 1975, when he further defined them in his book Evaluating Training Programs, that they began to command industry attention.
Since then, awareness of his ideas has gradually increased and has been bolstered by a redefinition and updating in his 1998 book, Evaluating Training Programs: The Four Levels.
The rest, as they say, is history, and today Kirkpatrick’s Evaluation Model has arguably become the industry standard within the learning and development community.
The four levels of Kirkpatrick’s Evaluation Model summarised:
1. Reaction: The extent to which trainees find the training agreeable, relevant and engaging.
Trainee satisfaction levels are usually assessed using a feedback form, often referred to as a ‘Happy Sheet’.
Verbal reactions and post-training surveys can also be used to assess reaction.
What’s great about this level of assessment is that it’s quick, easy to do and inexpensive.
2. Learning: The increase in knowledge and capability experienced by the student.
This is usually assessed by conducting and comparing the results of tests carried out before and after training.
Assessment can also be done via interview or observation.
Like Level One it’s relatively easy to set up, and is really useful for assessing clearly quantifiable skills.
3. Behaviour: The extent to which students apply their learning in the working environment.
Compared to levels 1 and 2, Level 3 requires much more participation and skilled observation from line-managers.
Behaviour is assessed via observation and interview over a period of time so as to assess behaviour change, how relevant that change is, and whether it is sustained.
4. Results: The overall impact that the trainee’s performance has on the business or working environment.
This represents a fundamentally different challenge from Levels 1 to 3, which are all assessments of individuals; here the focus shifts to organisational outcomes.
It’s about relating the trainee’s behaviour change to real bottom-line improvements and organisational performance metrics in a credible way.
A unit of change in learning should be directly linked to a specific improvement in a key organisational metric.
The Model In Practice
It appears a sound and attractive theory, so how does a willing L&D practitioner apply the Kirkpatrick model in the workplace?
Even though Level 1 evaluations (happy sheets) are unlikely to raise even an eyebrow in the boardroom, they should not be dismissed as lightweight.
They have an important role to play in helping you develop engaging training, without which learning will be impaired, and the higher levels of training evaluation will be compromised.
For example, a lack of post-training learning at Level 2 might be the result of poor training delivery that can be easily identified at Level 1.
Key criteria that you’ll be looking to assess in Level 1 are whether trainees felt that the training was:
1. Worth their time
2. Suited to their learning style
3. Relevant and appropriate
4. Too easy or too hard
Once you have collected all the data, it’s crucial that you act on it where appropriate, delivering constructive changes based on feedback and suggestions from your trainees.
Level 2 learning evaluation looks at knowledge acquisition.
Interestingly, the way that you assess and recognise knowledge acquisition at this level can actually boost engagement and ultimately enhance learning.
You can modify Level 2 evaluation processes by incorporating modern gamification tactics: using leaderboards and badges to reward learning creates a healthy sense of competition that boosts engagement and learning too.
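For clearly quantifiable skills, the pre- and post-test comparison behind Level 2 can be reduced to simple arithmetic. Here is a minimal sketch, using invented trainee scores purely for illustration:

```python
# Hypothetical Level 2 sketch: average knowledge gain from pre/post tests.
# The scores below are made up for illustration, not real trainee data.

def average_gain(pre_scores, post_scores):
    """Mean per-trainee improvement from pre-test to post-test."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [55, 60, 48]   # test scores (%) before training
post = [78, 74, 70]  # test scores (%) after training, same trainees

print(round(average_gain(pre, post), 1))  # 19.7 percentage points gained
```

The same per-trainee pairing also reveals who gained least, which can feed back into Level 1 findings about delivery.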
Level 3 behaviour evaluation comes with significantly greater challenges than Levels 1 and 2.
You’ll need to look at how well your students have modified their behaviour as a result of the training they receive.
Are they actually applying the learning in practice?
You’ll also need to be aware that behaviour change can only be expected to happen in a conducive environment.
For example, let’s say that you skip Level 1 and 2 assessment, focus only on post-training behaviour in the office, and note that no behaviour modification has occurred.
It would be easy to assume that the training didn’t work and that the trainees didn’t learn anything.
This could indeed be the case, but it could be that learning did actually take place but trainees are simply not applying it.
There are many reasons why learning might not be applied, such as the manager not allowing them to apply the new knowledge, or not providing supporting opportunities for them to practice.
The reasons could be more intrinsic such as the employee having no desire to apply the knowledge or lacking the confidence.
That’s why an important ongoing enabler of level 3 evaluation is creating a work environment that promotes the application of new learning.
Managers should be actively encouraged to consider linking reward and recognition programmes to applied learning by awarding and publicly praising staff for deploying new skills, techniques and behaviours.
Managers will have a big role to play in longitudinal observation and data collection, but knowing how time-pressurised managers are, L&D professionals may need to devote time to motivating line-managers to prioritise this activity.
In level 3 you’ll be looking to answer some key questions:
1. Have the trainees applied any of their learning?
2. Are trainees able to train others with their new knowledge, behaviour or skills?
3. Do trainees seem aware that their behaviour has changed?
For Level 3 evaluation to be successful you’ll need to get managers on board and make the evaluation process effortless, or at the least relatively easy and straightforward.
In practice, Level 4 evaluation will require the biggest investment of time and resources.
You need to make a credible link between macro benefits (results in the business) and specific training, in order to assess the true organisational impact of that training.
It’s not easy, and will require persistence and intellectual rigour to get it right.
The kind of outcomes you may consider trying to link training to are:
- Increased productivity
- Lower staff turnover
- Increased customer satisfaction
- Increased staff engagement levels
- Increased sales revenues
- Fewer mistakes or less waste
Introducing Level 5
There is still one piece of the training evaluation jigsaw left, and it is held by Jack Phillips, who built an important fifth level of training evaluation on top of the Kirkpatrick four.
In his book, ‘Return on Investment in Training and Performance Improvement Programs’, Phillips puts forward his model.
It shows L&D practitioners how to calculate the ROI of training using the data gathered from Kirkpatrick’s Level 4 evaluation, putting it in a more actionable format.
You’ll definitely need your calculators and basic algebra for this stage of analysis.
The Level 5 evaluation equation looks a little like the following:
ROI % = ((£ Benefit of Training – £ Cost of Training) / £ Cost of Training) × 100
So, how does it work in practice?
Here’s a case study example to help you get a feel for this model.
Let’s say that by introducing a new e-learning system you predict that productivity will increase by 20% over the next 2 years, yielding an additional £100,000 in profit.
This £100,000 is our £ Benefit of Training.
But, how much will it cost? Let’s say your LMS implementation costs are £20,000.
Also, let’s say you lose 50 coders with an hourly charge-out rate of £100 an hour for an hour’s training, at a total lost opportunity cost of £5,000.
This means your £ Cost of Training is £25,000.
Now, if we run all these figures through our equation, voilà, we get the magical, Holy Grail ROI figure to impress the boardroom.
In this case it is 300% ROI, which means that we have recouped three times the original investment.
ROI % = ((£100,000 – £25,000) / £25,000) × 100 = 300%
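The case-study arithmetic is easily scripted. This is a minimal sketch using the hypothetical figures from the example above, not a prescribed tool:

```python
# Level 5 (Phillips) ROI sketch, using the case-study figures above.
# All amounts are the hypothetical example values, in GBP.

def training_roi_percent(benefit, cost):
    """ROI as a percentage: ((benefit - cost) / cost) x 100."""
    return (benefit - cost) / cost * 100

benefit = 100_000                  # predicted two-year profit uplift
lms_cost = 20_000                  # LMS implementation cost
lost_time = 50 * 100 * 1           # 50 coders x £100/hour x 1 hour off the job
total_cost = lms_cost + lost_time  # £25,000 total cost of training

print(training_roi_percent(benefit, total_cost))  # 300.0
```

Separating the benefit and cost inputs like this also makes it easy to re-run the calculation as a forecast with different assumptions.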
This is quite a simplified look at the ROI calculation and there are greater levels of detail and refinement that will need to be explored in real-world practice.
As you would imagine, this analysis would ordinarily be deployed retrospectively as confirmation of the effectiveness of your training intervention for purposes of recognition and securing future budgets.
But, it also has a role in planning as it can be used to develop ROI forecasts and projections and enable your organisation to make more informed training investment decisions.