The Kirkpatrick Model of Evaluation is one of the most popular ways to measure training effectiveness. Donald Kirkpatrick, the creator of the model, was a pioneer in the field of training evaluation and his work has had a lasting impact on how organisations measure learning outcomes.
First developed in the 1950s, Kirkpatrick’s model provides a clear, four-level framework for assessing how well learning programmes perform – these four levels range from initial reactions to real-world results.
Whether you’re an L&D professional looking to prove impact or an elearning provider ensuring your content delivers real value, this model still works.
Below, we break down each level and explain how it can help you evaluate training. The model has since been further developed and promoted by Kirkpatrick Partners.
What is the Kirkpatrick Model of Evaluation?
The Kirkpatrick Model of Evaluation is one of the most widely used approaches for measuring training.
Originally developed in the 1950s, it provides a structured way to evaluate learning across four levels of evaluation: reaction, learning, behaviour and results.
While it was designed for traditional, in-person training, the model still applies today and can be used with coaching and courses delivered through an online learning platform.
Each level gets you closer to understanding the real value of a learning intervention. From how participants feel, to what they remember, to how they apply it, and ultimately, what results it drives. Collecting learning data at each stage helps inform improvements and ensures training is meeting its objectives.
In theory, all four levels should be measured. In practice, however, many organisations stop at Levels 1 and 2, which are the easiest to collect data for, and miss out on the deeper insights further down the chain.
That’s where the model often gets misused.
Used properly however, it goes beyond surface metrics and asks tougher questions about what training actually changes. It helps organisations focus on the desired outcome of training, so behavioural changes lead to measurable improvements in job performance and organisational goals.
If you work in L&D or HR, it’s a model worth knowing, not just in name, but in how it can inform smarter, more evidence-based decisions around learning.
So, how does it work in practice?
What are the benefits?
The Kirkpatrick Model brings many benefits to organisations wanting to evaluate and improve their training programmes. By providing a comprehensive framework, it allows learning professionals to measure every aspect of training – from initial learner satisfaction to the ultimate business impact.
This holistic approach means measurements go beyond surface level feedback and instead look at both short-term and long-term results. The model helps organisations align training to strategic objectives, so it’s easier to show how training contributes to business impact and organisational success.
It also helps identify strengths and weaknesses in training initiatives, supporting continuous improvement and more informed decisions around future training investments. By measuring not just what learners think or know, but how they apply new skills and the results achieved, organisations can maximise return on investment and ensure every training programme delivers meaningful, measurable outcomes.
What are the 4 levels of Kirkpatrick’s Model?
The model is broken down into 4 levels, each measuring a different type of outcome. Each level can be applied to different training methods including training events, workshops and online courses.
Together they give a fuller picture of how well a training programme really works, and can also be used to compare different training methods for effectiveness.
The 4 levels of Kirkpatrick’s Model are:
1. Reaction
Whether learners find the training agreeable, relevant and engaging.
Level 1 is about evaluating the training from the learner’s perspective.
Trainee satisfaction levels are usually measured using a feedback form, often called a ‘Happy Sheet’.
Verbal reactions and post-training surveys can also be used to measure reactions, and the feedback collected through surveys and observations helps improve the training experience.
What’s great about this level of assessment is it’s quick, easy and cheap.
2. Learning
The increase in knowledge and capability learned.
This is usually measured by comparing the results of tests taken before and after training, to check whether the intended knowledge and skills were actually acquired.
Assessment can also be done via interview or observation. Learning data collected from quizzes, interviews and observations is used to measure knowledge acquisition and skill development.
Like Level 1, it’s easy to set up and useful for measuring quantifiable skills.
3. Behaviour
Level 3 is about identifying changes in behaviour in the workplace, measuring whether learners are applying what they have learned and adapting their actions accordingly.
Compared to Levels 1 and 2, Level 3 requires much more involvement and skilled observation from line managers, as well as support for the adoption of new behaviour after training.
Behaviour is measured via observation and interview over a period to assess behaviour change, how relevant that change is and whether it’s sustained. Common methods include direct observation, self-assessment, peer feedback and manager evaluations to measure these changes. Tracking leading indicators – early signs that behaviour change will lead to desired results – is also important for evaluating training.
4. Results
The overall impact the trainee’s performance has on the business or working environment is measured at the final level of the Kirkpatrick Model.
This is a different challenge from Levels 1 to 3, where assessments focus on the individual; here the focus shifts to organisational outcomes.
It’s about linking the trainee’s behaviour change to real bottom-line improvements and organisational performance metrics in a credible way. Level 4 is about organisational results and how these outcomes align with organisational goals, so that training initiatives contribute to the company’s strategic objectives.
A unit of change in learning should be directly linked to a specific improvement in a key organisational metric. These desired results are the specific, tangible outcomes organisations want to achieve, such as increased productivity, cost savings or higher customer satisfaction.
Comparing results before and after training, or between control and experimental groups, is essential to measure results and isolate the true impact of training on these outcomes.
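The control-group comparison described above can be sketched in a few lines of Python. This is a minimal, illustrative example with made-up figures, not a real evaluation: the change in the control group approximates what would have happened without training, so subtracting it from the trained group's change gives a rough estimate of the training effect.

```python
# Hypothetical example: isolating training impact with a control group.
# All figures are illustrative, not real data.

trained_before = [70, 65, 72, 68]   # e.g. weekly sales per rep, pre-training
trained_after  = [82, 78, 85, 80]
control_before = [69, 71, 66, 70]   # comparable reps who received no training
control_after  = [72, 73, 68, 71]

def mean(xs):
    return sum(xs) / len(xs)

# Change observed in each group
trained_change = mean(trained_after) - mean(trained_before)
control_change = mean(control_after) - mean(control_before)

# The control group's change approximates what would have happened anyway,
# so the difference between the two changes estimates the training effect.
training_effect = trained_change - control_change
print(f"Trained group change: {trained_change:+.1f}")
print(f"Control group change: {control_change:+.1f}")
print(f"Estimated training effect: {training_effect:+.1f}")
```

In practice a real evaluation would need larger, properly matched groups and some statistical care, but the logic is the same: compare the change, not just the after-training figures.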

Kirkpatrick Model Examples
It sounds like a nice theory, so how does a willing L&D practitioner apply the Kirkpatrick Model in the workplace?
When applying the model in practice, start by clearly defining the desired outcome, focusing on the behavioural changes and impact on job performance the training should achieve. Then evaluate how well those outcomes are met and assess the overall impact on organisational goals.
Kirkpatrick Level 1 Evaluation Examples
Even though Level 1 evaluation tools (training feedback forms) won’t raise an eyebrow in the boardroom, don’t dismiss them as lightweight. They have a role to play in helping you design engaging training, without which learning will be impaired and the higher levels of evaluation compromised.
For example, a lack of post-training learning from Level 2 might be due to poor training delivery which can be easily identified in Level 1.
You’ll be looking to assess some of the following in Level 1:
- Was the venue what you expected?
- Did you receive the pre-programme joining instructions?
- Were the joining instructions helpful?
- Were the learning activities engaging and practical?
- What are your 3 key takeaways from the event?
- What is the first thing you are going to work on?
- How will you embed the learning in the role?
- Did you feel the event was a good use of your time?
- Overall, was the training a success?
- Did the trainer cater for your learning style?
- Was the content relevant to your role?
- Was the content delivered at the right pace?
- Was the programme delivered at the right level?
- Was the programme too hard or too easy?
- Was the programme covered in the right amount of detail?
- Did the trainer use a variety of different methods?
- Rate the trainer’s overall presentation skills
- Rate the knowledge level of the trainer on the topics
- Was the trainer flexible to your needs?
- How helpful was the trainer?
- Would you recommend the course to others?
Once you have all the data, you need to act on it where possible and deliver changes based on feedback and suggestions from your learners.
Kirkpatrick Level 2 Evaluation Examples
Level 2 learning evaluation looks at knowledge acquisition.
You can also modify Level 2 evaluation processes by incorporating modern gamification tactics and processes.
By gamifying this part of the evaluation process using leaderboards and badges, you can reward learning and create a healthy sense of competition that will boost both learner engagement and learning.
Interestingly, how you assess and recognise knowledge acquisition in Level 2 can actually enhance learning.
The following can be used to measure the learner’s knowledge:
- Self-assessment
- Peer assessment
- Quizzes
- Role-play
- Interview
- Observation
Kirkpatrick Level 3 Evaluation Examples
This level of learning evaluation is much more challenging than Levels 1 and 2. You need to look at how well your learners have changed their behaviour because of the training they received.
Are they applying the learning in practice?
You also need to be aware that behaviour change can only happen in a conducive environment.
For example, let’s say you skip Level 1 and 2 assessments, focus only on post-training workplace behaviour, and note that no behaviour change has occurred. It would be easy to assume that the training didn’t work and that the learners didn’t learn anything.
This could indeed be the case, but it could be that learning did take place, but the learners aren’t applying it.
There are many reasons why learning might not be applied, such as the manager not allowing employees to use the new knowledge, or not providing opportunities for them to practise.
The reasons can also be more intrinsic such as the employee having no desire to apply the knowledge or lacking the confidence.
An important ongoing enabler of Level 3 evaluation is therefore to create a work environment that promotes the application of new learning.
Managers should be actively encouraged to consider linking reward and recognition programmes to applied learning by awarding and publicly praising staff for deploying new skills, techniques and behaviours.
Managers will have a big role to play in creating a learning culture that involves longitudinal observation and data collection. However, managers are also under time pressure, and learning professionals may need to support and motivate them to prioritise this activity.
In Level 3 you’ll be looking to answer some key questions:
Have the learners applied any of their learning?
Are learners able to train others with their new knowledge, behaviour or skills?
Do learners seem aware that their behaviour has changed?
For Level 3 evaluation to be successful you need to get managers on-board and make the evaluation process as easy, simple and straightforward as possible for them to carry out.
The following are examples of what can be used to measure the learner’s behaviour change:
- Self-assessment
- Peer assessment
- 360-degree assessment
- 180-degree assessment
- Role-play
- Skills interview
- Competency-based interview
- Observation
- External evaluators
Kirkpatrick Level 4 Evaluation Examples
In practice the final stage of the Kirkpatrick Model of Evaluation will require the biggest investment of time and resources. You need to make a link between macro benefits, otherwise known as results in the business, and specific training, to measure the true organisational impact of that training.
Outcomes you may want to link training to include:
- Increased productivity
- Lower staff turnover
- Increased customer satisfaction
- Increased staff engagement levels
- Increased sales revenues
- Fewer mistakes or less waste
- Increased compliance
- Fewer sick days and absence
- Reduced customer complaints
- Improved customer retention rates
- Reduced recruitment fees
- Improved staff satisfaction
- Reduction in accidents
- Increased staff morale
- Increased quality
How to Use the Kirkpatrick Model
Understanding the four levels is one thing. Using them to actually plan, structure and evaluate your training is where the value really shows.
The Kirkpatrick Model isn’t just a tool to look back after the fact, but a guide you can use from the very start.
And here’s the key: you don’t start with Level 1. You start at Level 4.
The best way to use the model is to work backwards.
Start with the desired results you want to achieve and ensure these are aligned to your organisational goals. Then define what behaviour needs to change to get there. From there, determine what learning is required to support that behaviour, and finally, how you’ll engage learners from the start.
It flips the traditional approach on its head and that’s the point. When you plan with outcomes first, you’re much more likely to create training that delivers real impact.
Here’s a quick summary:
Start with Level 4 (Results): What outcome do you want? More sales? Fewer errors? Higher customer satisfaction?
Then Level 3 (Behaviour): What do people need to do differently on the job to achieve that outcome?
Then Level 2 (Learning): What knowledge or skills will drive that behaviour?
Then Level 1 (Reaction): How will you make the training relevant and engaging from the start?
By planning this way, you’re not just delivering content, you’re creating a learning experience with purpose.
To fully benefit from the Kirkpatrick Model, you need to apply best practices at each of its four levels.
At the Reaction level, collect data on learner satisfaction and engagement through online questionnaires, customer satisfaction surveys and post learning assessments. This will ensure the training programme is relevant and engaging from the start.
At the Learning level, measure learning effectiveness by using pre and post assessments, quizzes or knowledge checks to track knowledge and skill acquisition. This will help you identify where learners may need additional support.
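As a quick sketch of that pre/post comparison, the snippet below computes each learner's gain and flags anyone below a pass mark for extra support. The names, scores and 70% threshold are all made up for illustration.

```python
# Illustrative Level 2 sketch: comparing pre- and post-training quiz scores.
# Names, scores and the pass mark are hypothetical.

scores = {
    "Amira": {"pre": 45, "post": 80},
    "Ben":   {"pre": 60, "post": 65},
    "Chloe": {"pre": 55, "post": 90},
}

PASS_MARK = 70  # assumed threshold for demonstrating the intended skills

for name, s in scores.items():
    gain = s["post"] - s["pre"]
    needs_support = s["post"] < PASS_MARK
    print(f"{name}: gain {gain:+d}, needs support: {needs_support}")

# Average gain across the cohort gives a single Level 2 headline figure
average_gain = sum(s["post"] - s["pre"] for s in scores.values()) / len(scores)
print(f"Average learning gain: {average_gain:.1f} points")
```

Even this simple view separates two questions a post-test alone conflates: how much did people improve, and did they reach the required standard?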
At the Behaviour level, measure behavioural change in the workplace. Use manager feedback, peer reviews and performance reviews to observe how new skills are being applied and identify areas for further coaching.
Finally, at the Results level, measure business impact by tracking key performance indicators, sales figures and customer satisfaction surveys. Analyse these data points to see where the training programme is driving organisational success and where you can improve further. By following best practice across all four levels you can ensure your training programme is effective and aligned to your organisation’s goals.
Stakeholders
Stakeholders are key partners in this process. Their involvement brings valuable perspectives and feedback that can help you identify areas for improvement at every stage of the training programme.
Stakeholders may include learners, managers, supervisors and organisational leaders, each with unique insights into the effectiveness of the training initiative. Engaging stakeholders from the Reaction level to the Results level ensures the evaluation process is comprehensive and aligned to organisational objectives.
Regular communication, feedback sessions and progress updates will keep stakeholders informed and invested in the success of the training programme. By involving stakeholders you can better evaluate if the training is meeting its intended goals and contributing to overall organisational success.
Level 5
Once you’ve planned your training with the Kirkpatrick model in mind, working backwards from the results you want, there’s one more piece of the evaluation puzzle to consider.
That final piece comes from Jack Phillips who expanded on Kirkpatrick’s work with a fifth level focused on something every organisation cares about: return on investment.
In his book ‘Return on Investment in Training and Performance Improvement Programs’, Phillips puts forward his ROI model.
It shows L&D practitioners how to calculate the ROI of training using the data gathered from Kirkpatrick’s Level 4 Evaluation, in a more actionable format. You’ll need your calculators and basic algebra for this stage of analysis.
The Level 5 evaluation equation looks like this:
ROI % = (£ Benefit of Training – £ Cost of Training) / £ Cost of Training × 100
How does Level 5 work in practice?
Here’s a case study example to help you get a feel for this model.
Let’s say that, by introducing a new learning system, you predict productivity will increase by 20% over the next two years, yielding an additional £100,000 in profit.
This £100,000 is our £ Benefit of Training.
Then you need to work out how much your training will cost. Let’s say your learning management system implementation costs are £20,000.
Let’s also say you have 50 users, each of whom needs to be trained on the system at an approximate cost of £100. That’s an additional £5,000.
This means your £ Cost of Training is £25,000.
If we plug all these figures into the Phillips equation you are left with your ROI figure to impress the boardroom.
In this case it’s a 300% ROI, recouping three times the original investment in training.
ROI = (£100,000 – £25,000) / £25,000 × 100 = 300%
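The same calculation can be wrapped in a small helper, using the article's illustrative £100,000 benefit and £25,000 cost:

```python
# The Phillips ROI calculation from the example above, as a small helper.
# The benefit and cost figures are the article's illustrative numbers.

def training_roi(benefit, cost):
    """Return ROI as a percentage: ((benefit - cost) / cost) * 100."""
    return (benefit - cost) / cost * 100

benefit = 100_000          # predicted additional profit from the training
cost = 20_000 + 50 * 100   # LMS implementation plus 50 users at £100 each

print(f"ROI: {training_roi(benefit, cost):.0f}%")  # prints "ROI: 300%"
```

Plugging in forecast figures instead of actuals turns the same helper into a planning tool for projected ROI.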
This is a very simplified look at the ROI calculation and there are many more levels of detail and refinement that will need to be explored in real-world practice.
As you would expect this analysis would be deployed retrospectively as confirmation of the effectiveness of your training intervention for purposes of recognition and securing future budgets.
However the equation can also be used for planning as it can be used to develop ROI forecasts and projections and enable your organisation to make more informed training investment decisions.
Challenges in Evaluation
Evaluating training programmes is not without its challenges, especially when it comes to measuring behavioural change and business impact. One of the biggest hurdles is collecting and analysing data to identify areas for improvement.
To overcome this, organisations can use a control group to compare results and isolate the effect of the training program. Reducing inconsistencies in the evaluation process is also key; this can be achieved by using relevant metrics and benchmarks to measure learning effectiveness and business impact.
Combining the Kirkpatrick Model with other evaluation models can give a more comprehensive view of training effectiveness. By proactively addressing these challenges using the Kirkpatrick Model, a control group and relevant metrics, organisations can ensure their training programmes are effective, drive meaningful behavioural change and contribute to organisational success.
This allows learning professionals to measure learning accurately, evaluate business impact and continuously identify areas for improvement.
Limitations of the Kirkpatrick Evaluation Model
This article has covered how useful the Kirkpatrick Model is and all the benefits of using it. Here we look at some of the limitations of using the model for training evaluation.
Level One: Reaction
Self-assessment forms are subjective, and learners will be completing them in a hurry so they can get back to their desks or leave work for the day. Also, any focus group exercises conducted by the training provider will suffer from the subjects’ natural bias towards pleasing the trainer.
On the other hand, if learners are given post-study surveys much later on when it’s more convenient and possible for them to be objective, there’s the risk that the training won’t be as fresh in subjects’ minds.
Another type of bias that needs to be combated is the tendency of trainees to rate their experience based on how the training concluded, rather than averaging across its whole duration.
Level Two: Learning
Post-training assessments eat into trainees’ time. They can also be expensive, and not all trainees perform equally well in standardised tests, since anxiety, memory, reading ability and cognitive skill all play a part in assessment performance.
Another way to run such assessments is during the training, with a short quiz after each component or module. Most online training requires participants to complete brief tests on each topic before moving on to the next, to avoid participants having to complete lengthy tests at the end.
Level Three: Behaviour
Here’s where evaluation can get very time consuming and challenging since it requires ongoing, periodic observation. A manager may not have the time to do this sort of oversight. Even where they do have the time and the enthusiasm to devote themselves to the task, such behavioural studies generate reports which usually require actions to be taken.
There’s also the question of how to avoid skewing the results, should staff become aware they are being assessed. Finally, it requires a level of expertise which a manager might not possess. To ensure workplace observations like this are carried out to a high standard it can be wise to retain the services of the training provider.
Level Four: Results
As with any experiment you must move from correlation to causation. Just because an improvement has been noted in a particular work area doesn’t mean that training was the precise cause. Other potential causes need to be eliminated first so you can prove a tangible connection between training and outcome. This requires expertise and a scientific approach to analysis.
Doing this level of evaluation could again be time consuming and costly. If it’s done too far after the training, there’s a risk its relevance will be questioned. It’s also possible corporate strategy may have moved on to other priorities.
Key Takeaways and Next Steps
The Kirkpatrick Model is over 60 years old but it’s still one of the most widely used and recognised tools for evaluating training. It was designed for a time when group sessions and classroom learning were the norm, long before most learning started happening virtually, on demand and in the flow of work.
But despite its age it still holds up.
When used properly, and planned in reverse starting with results, it gives you a practical framework to track outcomes and build more effective learning programmes.
If you want to show the true value of your training this is still one of the best places to start. Focus on results first. Build backwards. And use each level to shape a learning experience that actually delivers.
If you’re an L&D professional looking to lease engaging eLearning content for your LMS then Skillshub can help you. You can also buy online courses at our marketplace or we can create content from scratch for you with our bespoke eLearning solutions.


