Evaluating the Effectiveness of Training
When an organization has invested in training, how do we know whether it has been a success? Our gut feeling might be that skills and practice have improved. But in what ways and by how much have they improved, and did the organization get value for money? These questions can be answered through evaluation.
The evaluation of training forms the remaining part of the training cycle, which starts with the identification of training needs and the establishment of objectives, and continues through the design and delivery of the training course itself.
It is the function of evaluation to assess whether the learning objectives originally identified have been satisfied and any deficiencies rectified. Evaluation is the analysis and comparison of actual progress against prior plans, oriented toward improving plans for future implementation. It is part of a continuing management process consisting of planning, implementation and evaluation, ideally with each following the other in a continuous cycle until successful completion of the activity. The evaluation process must start before the training begins and continue throughout the whole learning process.
Functions of evaluation: There are basically two functions of evaluation.
Qualitative evaluation is an assessment process that answers the question: how well did we do?
Quantitative evaluation is an assessment process that answers the question: how much did we do?
Principles of Training Evaluation:
Training needs should be identified and reviewed concurrently with the business and personal development planning process.
Training should correlate with the needs of both the business and the individual.
Training needs should be identified and evaluated at the organisational, group and individual levels.
Techniques of evaluation should be appropriate.
The evaluation function should be in place before the training takes place.
The outcome of evaluation should be used to inform the business and training process.
Why Evaluate Training:
Training costs can be significant in any business. Most organizations are prepared to incur these costs because they expect their business to benefit from employees' development and progress. Whether the business has benefited can be assessed by evaluating the training.
There are basically four parties involved in evaluating the results of any training: the Trainer, the Trainee, the Training and Development department and the Line Manager.
The Trainee wants to confirm that the course has met personal expectations and satisfied any learning objectives set by the T & D department at the beginning of the programme.
The Trainer's concern is to establish whether the training provided has been effective.
The Training and Development department wants to know whether the course has made the best use of the resources available.
The Line Manager will be seeking reassurance that the time the trainee has spent attending training translates into value, and that deficiencies in knowledge and skill have been redressed.
The problem for many organizations is not so much why training should be evaluated but how. Most organizations overlook evaluation because the financial benefits are difficult to express in concrete terms.
Improving business performance through training:
The business environment is constantly changing, so the knowledge and skills required will also change. To remain competitive, an organization needs a training strategy that defines where the business wants to be and identifies the training required to get there.
Regular evaluation of completed training helps identify skill gaps in the workforce so that proactive steps can be taken to address them.
An organisation's training strategy should be integrated with business planning and employee development. The process of evaluation is central to its effectiveness and helps to:
Confirm that the training budget is well spent.
Judge the performance of employees as individuals and as teams.
Establish a culture of continuous learning and improvement.
What to Evaluate:
Donald Kirkpatrick developed a four-level model to assess training effectiveness. According to him, evaluation should always begin with the first level and then move through the other levels in sequence. (A small code sketch of the model follows the four levels below.)
Reaction Level: The purpose is to measure the individual's reaction to the training activity. The benefit of Reaction-level evaluation is to improve the efficiency and effectiveness of the Training and Development activity.
Learning Level: The basic purpose is to measure the learning achieved by the training and development activity. A further purpose is to determine to what extent individuals have increased their knowledge and skills and changed their attitudes, using quantitative or qualitative assessment methods.
Behaviour Level: The basic purpose is to measure changes in the individual's behaviour as a result of the training and development activity, and how well the enhanced knowledge, skills and attitudes have prepared them for their role.
Result Level: The purpose is to measure the contribution of training and development to the achievement of the business/operational goals.
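As a concrete illustration, here is a minimal sketch in Python of how the four levels might be recorded for a single course. All field names, scales and figures are hypothetical illustrations, not part of Kirkpatrick's model itself:

# Minimal sketch of recording Kirkpatrick's four levels for one course.
# All field names, scales and figures are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class KirkpatrickRecord:
    course: str
    reaction: float   # Level 1: average participant satisfaction (1-5 scale)
    learning: float   # Level 2: average post-test score (0-100)
    behaviour: float  # Level 3: % of participants observed applying the skills
    results: float    # Level 4: contribution to a business metric, e.g. % sales uplift

record = KirkpatrickRecord(
    course="Negotiation Skills Workshop",
    reaction=4.2, learning=78.0, behaviour=60.0, results=3.5,
)

# Kirkpatrick recommends working through the levels in sequence.
for level, value in [("Reaction", record.reaction), ("Learning", record.learning),
                     ("Behaviour", record.behaviour), ("Results", record.results)]:
    print(f"{level}: {value}")

The point of such a record is simply to make the four levels explicit and comparable across courses; any real implementation would define its own metric for each level.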
When to Evaluate (Process of evaluation)
There are three possible opportunities to undertake an evaluation:
Pre-Training Evaluation: a method of judging the worth of a programme before the programme activities begin. The objectives of this evaluation are (a) to determine the appropriateness of the context of the training activity and (b) to help define relevant training objectives.
Context and Input Evaluation: a method of judging the worth of a programme while the programme activities are happening. The objectives of this evaluation are (a) to assess a training course or workshop as it progresses, (b) to find out the extent of programme implementation and (c) to determine the improvements and adjustments needed to attain the training objectives.
Post-Training Evaluation: a method of judging the worth of a programme at the end of the programme activities. The focus is on the outcome, and it tries to judge whether the transfer of training to the job has taken place.
Regards
Raj
From India
Hi Raj,
That was a really nice article.
Here are some of my views on evaluating the Trainer's performance, which is equally important.
Evaluating the Trainer:
Trends in training delivery, technology and outsourcing have made trainers more accountable for their performance than ever before. A typical trainer's job used to be fairly straightforward and routine. His main responsibility was to impart standard work-related training to each new crop of employees. Sessions were held in classes for fixed periods using many of the teaching methods commonly used in high school or college classrooms. At the end of each session, the new recruits were tested and sent on to their jobs. Some were called back for follow-up training.
Today, the need for better and more comprehensive training in more areas than in the past, the emphasis on quality control, and issues such as affirmative action and diversity training are all making trainers set higher standards. This also necessitates finding more accurate methods for measuring training results. Every organisation must keep up with customers' rising expectations lest it be left behind.
Measuring performance the wrong way poses dangers. For example, training officials need to distinguish among many competing variables: are employees' productivity gains related to training or to other factors? Trainers who rely on second- or third-hand feedback need to take great care in how they interpret results. However, when feedback pertaining to training is consistently good or bad over long periods of time, it is probably reliable.
Tests collect a range of information. Apart from determining how much employees learned from their training and how well trainers were doing their jobs, testing can be used to pinpoint what each trainee knows even before training begins. In this way trainers can not only determine how much the trainee has learnt from the session but can also get a better idea of how to structure the training session.
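To make the pre- and post-test comparison concrete, here is a minimal sketch in Python. The names and scores are hypothetical, and the normalised-gain metric is one common way to express improvement, not something prescribed above:

# Minimal sketch: comparing pre- and post-training test scores.
# Names and scores are hypothetical illustrations.
pre_scores  = {"Asha": 45, "Bala": 60, "Chen": 70}   # before training, out of 100
post_scores = {"Asha": 75, "Bala": 80, "Chen": 85}   # after training, out of 100

for name, pre in pre_scores.items():
    post = post_scores[name]
    # Normalised gain: the share of the possible improvement actually achieved.
    gain = (post - pre) / (100 - pre) if pre < 100 else 0.0
    print(f"{name}: pre={pre}, post={post}, normalised gain={gain:.2f}")

A low pre-test score followed by a high gain suggests the session filled a real gap; uniformly high pre-test scores suggest the session could be restructured or skipped for that group.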
Measurement is only as good as the records that are kept. Companies need to maintain records about training in order to measure quality and productivity, to meet ISO or professional requirements or to provide reports to regulatory agencies. Records are not only useful for purposes of substantiating claims but also for long-term comparisons. They help the company know how well training helped one team this year versus its effect on teams during previous sessions. The ability to make all those important comparisons and to substantiate them in concrete terms can help trainers know when adjustments, or even new strategies may be needed.
Increased expectations of training have redefined the role of trainers. They need to pay more attention to tools that were at one time relegated to secondary importance: testing and tracking scores. Trainers must consciously look out for better methods of evaluating employees' skills, so that they can quantify the impact of their strategic roles.
Cheers
Archna
From India, Delhi
Here are a few additional thoughts:
ROI on training can be expressed in percentage terms as:
ROI (%) = (Benefits from training - Costs of training) / Costs of training x 100
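As a minimal illustration in Python (the benefit and cost figures are invented for the example):

# Minimal sketch of the training ROI formula above.
# Benefit and cost figures are hypothetical illustrations.
def training_roi_percent(benefits: float, costs: float) -> float:
    # ROI (%) = (benefits - costs) / costs * 100
    return (benefits - costs) / costs * 100.0

benefits = 150_000.0  # e.g. estimated value of productivity gains attributed to training
costs = 100_000.0     # e.g. course fees, materials and trainees' time
print(f"ROI: {training_roi_percent(benefits, costs):.1f}%")  # prints: ROI: 50.0%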
The problem with this approach is that it is usually conducted at a single, arbitrary point in time. Experience shows that we continue to get benefits from training long after the event, and that's what we would all want and expect, isn't it? This equation also fails to take account of other factors, such as changes in internal and/or external circumstances (and the impact and costs of the evaluation itself). For example, an injection of sales training might be shown to have produced increased sales but what if, at the same time, the sales people's targets and/or commissions had been raised, a new product had been launched or there had been a general upturn in the market?
Importantly, anyone expecting training on its own to change things significantly, without the support of other organisational changes, is likely to be disappointed. Training in isolation usually raises 'Why are we here?' concerns from course participants, and we've all heard complaints about 'It's them above who need the training!' or, later, 'I've had no encouragement to use what I learned'.
In my opinion, we would get a better return for our investment in training (and its evaluation) by making sure that participants and their managers harnessed the training more proactively. One way is through spending more time and effort on evaluation that involves them in seeing how to get a better return. I want to make a return happen, not just to measure it.
Training evaluation - science or art?
Essentially, there are two schools of thought about training evaluation: those who believe in the importance of scientific, quantitative and conclusive analysis, and those who believe in the value of subjective, qualitative and action-oriented exploration. The former school supports ROI analysis, the use of experimental and control groups and, above all, the elimination of extraneous or even contributing variables. This is mainly because they want proof of the value of training itself (and, possibly, to control or curtail its costs if they are high in comparison with other options). At this point we should ask ourselves: is this what busy line managers want? Is it really sensible to exclude variables that might contribute to increased training impact, and do we really only want a snapshot of training taken at some arbitrary point?
Those who want to use evaluation to improve training and to reinforce its effect on participants' learning belong to the latter school of thought. They want to improve the transfer of training back to work (one of the biggest leakages in any training effort). They are ready to use interviews, small group surveys and feedback, and critical incident analysis deliberately to involve participants in renewed or new learning about the original training. Subjectivity and the inclusion of variables from activities related to the training (for example, promotion following management training, or changes in wider performance management practices introduced alongside appraisal training) are not a problem, because they assist in the interpretation of the rich data gathered. This school is interested in evidence of ongoing training impact, and what it may point to.
It seems to me essential to recognise that the difficulties and costs of proving or quantifying the value of training increase over time, but the benefits of using evaluation to reinforce the original training remain high at all times.
Some practical ideas about making a better return happen
Leaving aside for a moment the issue of different stakeholders' views, and any arguments about how ROI may be defined satisfactorily, there are some interesting ways to increase the return on training through its evaluation.
Simply refining end-of-course 'happy sheets' to include thought-provoking questions (such as 'What will you do/do differently as a result of this training?' and 'How exactly will each element of this course help you to do a better job?') and following up with line managers and participants will bear some fruit. I often get participants to write themselves a letter at the end of a course, to be delivered back to them at some random time in the future, saying what they had intended to do. When I return it some weeks or months later I ask for feedback from each participant and usually get honesty about having implemented some things and a renewed will to try again on other things. Follow-up interviews produce similar results.
Another inexpensive ploy is to email a sample of people, asking them about past 'critical incidents' (in, say, leadership or negotiation), what skills they used to deal with them, and where they acquired those skills (that is, without hinting that training could be a prime source).
In one organisation I emailed 50 managers and rapidly got a 70% return, which included many thoughtful 2 to 3 page replies reflecting back over 5 to 10 years and explaining how particular experiences and courses had helped to provide the skills. Several replies ended with comments like 'I was a bit pushed to find time for this but once I got started I rediscovered a lot of what I thought I had forgotten, but that was your purpose in emailing me, wasn't it!'.
At the more time-expensive end of evaluation-to-reinforce-training, it certainly pays to interview participants, their managers and other significant stakeholders before and after training. One deep process is Repertory Grid, which essentially involves subjective comparisons of important, work-related behaviours displayed by the participant and other members of staff (both better or 'model' and not so good) to produce relatively 'positive' and relatively 'negative' descriptions of behaviour against which to measure the effect of the training.
For a recent management course I conducted separate pre- and post-event Repertory Grid interviews with the participants and their managers. These revealed significant positive shifts as a result of the course in at least 4 out of 10 key areas for improvement identified beforehand by and for each participant. The pre-course output helped the trainers to focus the course. Very importantly, the trainers reported that, unlike many courses they ran, the participants 'were incredibly keen to spend time on particular topics, going into much more depth, and seemed to be checking off some sort of shopping list they had come with'. Repertory grid interviews are, however, very time consuming.
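To show the kind of pre/post comparison involved, here is a minimal, much-simplified sketch in Python. The construct names and the 1-5 ratings are hypothetical, and a real Repertory Grid elicits the constructs from interviews rather than taking them from a fixed list:

# Simplified sketch of measuring pre/post shifts on Repertory Grid constructs.
# Constructs and ratings (1 = negative pole, 5 = positive pole) are hypothetical.
pre_ratings  = {"listens before deciding": 2, "delegates clearly": 3, "gives timely feedback": 2}
post_ratings = {"listens before deciding": 4, "delegates clearly": 4, "gives timely feedback": 3}

shifts = {c: post_ratings[c] - pre_ratings[c] for c in pre_ratings}
improved = [c for c, d in shifts.items() if d > 0]
print(f"Improved on {len(improved)} of {len(shifts)} constructs: {improved}")

Even in this toy form, the output mirrors the kind of finding reported above: positive shifts in a subset of the key areas identified beforehand for each participant.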
Cheers
Prof.Lakshman
From Sri Lanka, Kolonnawa