Evaluation Models: Kirkpatrick
Another useful framework for evaluating the effectiveness of High Growth Enterprise Coaching Programmes is the Kirkpatrick Model, which consists of four levels of evaluation.
Each level is described below under three headings: Description and Characteristics, Examples of Tools and Methods, and Relevance and Practicability.

Level 1: Reaction

Description and Characteristics
Reaction evaluation measures how the client felt, and their personal reactions to the training or learning experience, for example:
· Did the client like and enjoy the coaching?
· Did they consider the coaching relevant?
· Was it a good use of their time?
· Did they like the style, timing, and process?
· Did they feel comfortable with the level of participation?
· How was the ease and comfort of the experience?
· Were they happy with the level of effort required to make the most of the coaching?
· How did they perceive the practicability and potential for applying the learning and development?

Examples of Tools and Methods
· ‘Happy sheets’ and feedback forms based on subjective personal reaction to the training experience
· Verbal reactions, which can be noted and analysed
· Post-programme surveys or questionnaires
· Online evaluation or grading by clients
· Subsequent verbal or written reports given by clients

Relevance and Practicability
· Can be done immediately after the coaching ends, and reaction feedback is very easy to obtain
· Feedback is inexpensive to gather and to analyse for groups
· Important to know that people were not upset or disappointed
· Important that people give a positive impression when relating their experience to others who might be deciding whether to undertake the same coaching
Level 2: Learning

Description and Characteristics
Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience:
· Did the clients learn what was intended?
· Did the client experience what was intended for them to experience?
· What is the extent of advancement or change in the clients after the training, in the direction or area that was intended?

Examples of Tools and Methods
· Typically assessments or tests before and after the learning process
· Interviews or observation can be used before and after, although this is time-consuming and can be inconsistent
· Methods of assessment need to be closely related to the aims of the coaching
· Measurement and analysis is possible and straightforward on a group scale
· Reliable, clear scoring and measurements need to be established, so as to limit the risk of inconsistent assessment
· Hard-copy, electronic, online or interview-style assessments are all possible

Relevance and Practicability
· Relatively simple to set up, but requires more investment and thought than reaction evaluation
· Highly relevant and clear-cut for certain coaching, such as quantifiable or technical skills
· Less straightforward for more complex learning such as attitudinal development, which is difficult to assess
· Cost escalates if systems are poorly designed, which increases the work required to measure and analyse
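The before-and-after comparison that underpins Level 2 can be sketched in code. This is a minimal illustration only, assuming a common 0–100 assessment scale; the client IDs, scores, and function name are hypothetical, not part of the Kirkpatrick Model itself.

```python
# Sketch of a Level 2 (Learning) evaluation: compare assessment scores
# recorded before and after a coaching programme. All names and the
# 0-100 scale are illustrative assumptions.

def learning_gains(pre_scores, post_scores):
    """Return per-client score gains and the group average gain.

    pre_scores / post_scores: dicts mapping client ID to a score on the
    same 0-100 assessment, taken before and after the coaching. Clients
    missing either score are excluded, to keep the comparison like-for-like.
    """
    gains = {client: post_scores[client] - pre
             for client, pre in pre_scores.items()
             if client in post_scores}
    average = sum(gains.values()) / len(gains) if gains else 0.0
    return gains, average

# Hypothetical group of three clients
pre = {"A": 55, "B": 62, "C": 70}
post = {"A": 72, "B": 68, "C": 81}
gains, avg = learning_gains(pre, post)
print(gains)
print(round(avg, 2))
```

Group-scale analysis like this is what makes Level 2 "straightforward on a group scale" for quantifiable skills; the harder design work, as noted above, is making the underlying assessment reliable and consistent.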
Level 3: Behaviour

Description and Characteristics
Behaviour evaluation is the extent to which the clients applied the learning and changed their behaviour; this can be measured immediately after the coaching and again several months later, depending on the situation:
· Did the clients put their learning into effect when back on the job?
· Were the relevant skills and knowledge used?
· Was there noticeable and measurable change in the activity and performance of the client in their role?
· Was the change in behaviour and new level of knowledge sustained?
· Would the client be able to transfer their learning to another person?
· Is the client aware of their change in behaviour, knowledge, or skill level?

Examples of Tools and Methods
· Observation and interview over time are required to assess change, the relevance of change, and the sustainability of change
· Arbitrary snapshot assessments are not reliable, because people change in different ways at different times
· Assessments need to be subtle and ongoing, and then transferred to a suitable analysis tool
· Assessments need to be designed to reduce the subjective judgement of the observer or interviewer, which is a variable factor that can affect the reliability and consistency of measurements
· The opinion of the client, while a relevant indicator, is also subjective and unreliable, and so needs to be measured in a consistent, defined way
· 360-degree feedback is a useful method and need not be used before the coaching, because respondents can make a judgement as to change after coaching, and this can be analysed for groups of respondents and clients
· Assessments can be designed around relevant performance scenarios and specific key performance indicators or criteria
· Online and electronic assessments are more difficult to incorporate; assessments tend to be more successful when integrated within existing management and coaching protocols
· Self-assessment can be useful, using carefully designed criteria and measurements

Relevance and Practicability
· Measurement of behaviour change is less easy to quantify and interpret than reaction and learning evaluation
· Simple quick-response systems are unlikely to be adequate
· The cooperation and skill of observers, typically line managers, are important factors, and difficult to control
· Management and analysis of ongoing subtle assessments are difficult, and virtually impossible without a well-designed system from the beginning
· Evaluation of implementation and application is extremely important: there is little point in a good reaction and a good increase in capability if nothing changes back in the job, so evaluation in this area is vital, albeit challenging
· Behaviour change evaluation is possible given good support and involvement from line managers or clients, so it is helpful to involve them from the start and to identify benefits for them, which links to the Level 4 evaluation below
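The 360-degree feedback approach mentioned above can be aggregated very simply across respondents, which is one way to reduce the subjectivity of any single observer. The sketch below is a minimal illustration under assumed conventions: a 1–5 rating of observed change per criterion, and hypothetical criterion names and ratings.

```python
# Sketch of aggregating post-coaching 360-degree feedback (Level 3:
# Behaviour). Each respondent rates the change they observed per
# criterion on an assumed 1-5 scale (1 = no change, 5 = large change).
# Criterion names and ratings are illustrative assumptions.

from statistics import mean

def aggregate_360(responses):
    """responses: list of dicts, one per respondent, mapping
    criterion -> rating of observed change. Returns the mean rating
    per criterion across all respondents who rated that criterion."""
    criteria = set().union(*(r.keys() for r in responses))
    return {c: round(mean(r[c] for r in responses if c in r), 2)
            for c in criteria}

# Hypothetical ratings from three respondents for one client
responses = [
    {"delegation": 4, "listening": 5},
    {"delegation": 3, "listening": 4},
    {"delegation": 5, "listening": 4},
]
summary = aggregate_360(responses)
print(summary)
```

Averaging across several respondents is exactly why 360-degree feedback "can be analysed for groups of respondents and clients": no single subjective judgement dominates the measurement.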
Level 4: Results

Description and Characteristics
Results evaluation is the effect on the business or environment resulting from the improved performance of the client; it is the acid test.
Measures would typically be business or organisational key performance indicators, such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organisational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc.

Examples of Tools and Methods
· Many of these measures may already be in place via normal management systems and reporting
· The challenge is to identify which measures relate to the client’s input and influence, and how
· It is therefore important to identify and agree accountability and relevance with the client at the start of the training, so they understand what is to be measured
· This process overlays normal good management practice; it simply needs linking to the coaching input
· Failure to link results to the type and timing of the coaching input will greatly reduce the ease with which results can be attributed to the training
· For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from coaching

Relevance and Practicability
· Individually, results evaluation is not particularly difficult; across an entire organisation it becomes very much more challenging, not least because of the reliance on line management, and the frequency and scale of changing structures, responsibilities and roles, which complicates the process of attributing clear accountability
· External factors also greatly affect organisational and business performance, which clouds the true cause of good or poor results
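Of the Level 4 indicators listed above, return on investment is the most directly computable. The sketch below shows the standard net-benefit-over-cost ROI formula with hypothetical figures; as the text stresses, the genuinely hard part is attributing the monetary benefit to the coaching input in the first place, which no formula solves.

```python
# Sketch of a Level 4 (Results) measure: return on investment for a
# coaching programme, as ((benefit - cost) / cost) * 100. The figures
# are illustrative assumptions; attributing the benefit to the
# coaching (rather than external factors) is the real challenge.

def coaching_roi(monetary_benefit, programme_cost):
    """ROI as a percentage of programme cost."""
    if programme_cost <= 0:
        raise ValueError("programme cost must be positive")
    return (monetary_benefit - programme_cost) / programme_cost * 100

# Hypothetical: £150,000 of attributed benefit from a £60,000 programme
roi = coaching_roi(monetary_benefit=150_000, programme_cost=60_000)
print(f"{roi:.0f}%")
```

Because external factors cloud results, a figure like this is best agreed with the client at the outset, with the accountable KPIs defined before the coaching begins.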