Hello Everybody,
I am an MBA student currently working on a project, "Evaluation of Effectiveness of Training".
Please give me some insights in this area: what are the different tools used for evaluating both technical and behavioural training programmes (such as Management Development Programmes)?
Regards
Pooja
From India, Indore
Hello Martin
Thanks for your valuable guidance; I found a lot of useful material on that website. Sir, I have read about the Kirkpatrick model of evaluation. I would like to know: is it necessary to evaluate a training programme at all four levels?
To explain: the organization for which I am doing this project already uses pre- and post-tests to evaluate learning, and to evaluate behavioural change I am designing a Training Effectiveness Form to be filled in by the reporting officers of the employees who attended the MDP.
How should I evaluate the outcome/result of the MDP?
Regards
Pooja
From India, Indore
Jack Phillips, who proposes a fifth level for Kirkpatrick's model, Return on Investment (ROI), suggests that to evaluate at levels 4 and 5 you also need to evaluate to some extent at the lower levels.
I'm not so sure that is necessarily the case.
I think the best way to determine how effective a training course has been is to be very clear, before the course, about exactly what outcomes are required: what new knowledge, skills, attitudes and behaviours are expected. The assumption, of course, is that the training actually addresses these areas, perhaps not in one course but across several.
After the training, ask the managers or reporting officers of those who attended the course how many of these knowledge, skill, attitude and behaviour objectives have been met.
I ask reporting officers to state their requirements in terms of key performance indicators, and in terms of being able to do specific things, using the format of TASK, CONDITIONS (i.e. the conditions under which the task must be done, e.g. at night, or with a particular tool) and STANDARDS, i.e. how the task will be measured and to what standard of success, for example "in accordance with BS 7799", "according to Standard Operating Procedure #12-67 and with no errors", or "45 words per minute with 95% accuracy".
I use a simple form. On side 1 are the name of the employee, the name of the reporting officer, the department, and the title, date and venue of the training, along with the costs of the course, accommodation and travel; below this is space for justifying the training, by both the employee and the reporting officer. On side 2 the objectives are listed, along with how success for each will be measured. Below this is space for notes on an after-training meeting and details of an action plan for taking the lessons of the training back to the workplace: actions, resources needed, the person responsible for each action, and a deadline. I also use this form to track the financial costs, to audit the training request and delivery process, and to hold reporting officers and employees to account and see how they are progressing.
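If it helps to picture the layout, here is a rough sketch of the form's contents as a simple Python data record; the field names are my own shorthand for this illustration, not the exact wording on the form:

# Rough sketch of the two-sided training form as a data record; field names are illustrative only.
training_record = {
    "side_1": {
        "employee_name": "",
        "reporting_officer": "",
        "department": "",
        "course_title": "",
        "course_date": "",
        "venue": "",
        "costs": {"course": 0.0, "accommodation": 0.0, "travel": 0.0},
        "justification_by_employee": "",
        "justification_by_reporting_officer": "",
    },
    "side_2": {
        # one entry per objective, in TASK / CONDITIONS / STANDARDS form
        "objectives": [
            {"task": "", "conditions": "", "standards": "", "how_success_is_measured": ""},
        ],
        "after_training_meeting_notes": "",
        # the action plan for taking the learning back to the workplace
        "action_plan": [
            {"action": "", "resources_needed": "", "responsible_person": "", "deadline": ""},
        ],
    },
}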
I'll attach a copy to a later post and also put it on my web site later.
Regards
Martin
From United Kingdom,
Evaluation of Training
Post-training evaluation of effectiveness
I - Delayed impact (non-job)
• Customer satisfaction at X weeks after the end of training.
• Customer satisfaction at X weeks after the training when customers know the actual costs of the training.
• Retention of Knowledge at X weeks after the end of training.
• Ability to solve a "mock" problem at X weeks after end of training.
• Willingness to try (or intent to use) the skill/ knowledge at X weeks after the end of the training.
II - On the job behavior change
• Trained individuals who self-report that they changed their behavior / used the skill or knowledge on the job after the training (within X months).
• Trained individuals whose managers report that they changed their behavior / used the skill or knowledge on the job after the training (within X months).
• Trained individuals who are actually observed to change their behavior / use the skill or knowledge on the job after the training (within X months).
III - On the job performance change
• Trained individuals who self-report that their actual job performance changed as a result of their changed behavior / skill (within X months).
• Trained individuals whose managers report that their actual job performance changed as a result of their changed behavior / skill (within X months).
• Trained individuals whose managers report that their job performance changed (as a result of their changed behavior / skill), either through improved performance appraisal scores or specific notes about the training on the performance appraisal form (within X months).
• Trained individuals who show observable / measurable improvement (improved sales, quality, speed, etc.) in their actual job performance as a result of their changed behavior / skill (within X months).
• The performance of employees who are managed by (or are on the same team as) individuals who went through the training.
• Departmental performance in departments where X % of employees went through the training.
• ROI (cost/benefit ratio) per training dollar spent, compared to our competition, last year, other offered training, preset goals, etc. (a rough calculation sketch follows this list).
Other measures
• CEO / top management knowledge of, approval of, or satisfaction with the training program.
• Rank of the training seminar in a forced ranking by managers of which factors (among miscellaneous staff functions) contributed most to productivity / profitability improvement.
• Number (or %) of referrals to the training by those who have previously attended it.
• Additional number of people who were trained (cross-trained) by those who previously attended the training, and their change in skill / behavior / performance.
• Popularity (attendance or ranking) of the program compared to others (for voluntary training programs).
Obtained from elsewhere and not mine :D
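As a rough illustration of how a couple of these measures might be computed, here is a minimal Python sketch; the numbers are invented and the variable names are mine, not part of the original list:

# Illustrative only: invented numbers for a behaviour-change rate and a Phillips-style ROI.
trained = 40                  # employees who attended the programme
observed_change = 26          # of those, observed using the new skill on the job within X months
behaviour_change_rate = observed_change / trained * 100
print(f"Observed behaviour change: {behaviour_change_rate:.0f}% of trainees")   # 65%

training_spend = 50_000.0     # total programme cost (delivery, travel, trainee time, etc.)
estimated_benefit = 65_000.0  # estimated monetary value of the resulting improvement
roi_percent = (estimated_benefit - training_spend) / training_spend * 100
benefit_cost_ratio = estimated_benefit / training_spend
print(f"ROI: {roi_percent:.0f}%")                       # 30%
print(f"Benefit/cost ratio: {benefit_cost_ratio:.2f}")  # 1.30

The arithmetic is the easy part; the hard part is estimating and monetising the benefit in the first place.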
Regards
Rajesh B
Valuelanes
Bangalore
From India, Bangalore
Hi Pooja,
This is Raashi. I know the other members have already helped solve your problem; anyway, if you wish, I can send you some more information about training, as I recently did a project on training at Maruti.
Bye and take care.
Raashi
From India, Delhi
Rajesh B provides a large list of potential measures, which could be useful.
I say 'could be' because, without a hard link between these measures and the training objectives, all you can possibly have are subjective estimates and correlations between the changes in performance and the training course. Correlations are not causations: on their own they do not tell you what caused what, only that when one indicator moved, so did another. The indicators may be mutually linked, circularly linked, or hardly linked at all. Correlations are usually acceptable when the organisation is performing well. It is when performance is not so good that senior management, especially the CFO and CEO/COO, ask very searching questions about the hard links between training and performance.
Take, for example, the self-report indicator ("Trained individuals who self-report that their actual job performance changed as a result of their changed behavior / skill"):
Somebody has been away on a training course. Maybe it was nice for them to be away, not dealing with real work problems for a day or two, meeting new people, and so on. Do you really think they would self-report no positive change in performance? Do you really think they are going to say "I didn't learn anything", especially if they rated the course and the trainer highly on the reaction (Kirkpatrick Level 1) evaluation forms? No!
The same critique must be applied to any indicator that relies on self-reporting. Does this mean we shouldn't use self-reporting? No, we should! Rajesh B has suggested many measures, and we should use a broad selection of them. So yes, use self-reporting: it will tell you something about the trainee and their circumstances that may help identify better ways to support them in putting their learning into practice.
If we take a different measure, the observable / measurable improvement in actual job performance:
Here there is a degree of objectivity, if we assume that the observable / measurable changes are being observed and measured by people or systems with no significant bias. But we still need a CAUSAL LINK to be sure there is a link; correlation is not good enough in some cases. We need to be sure that, of all the possible factors that could account for some or all of the changed behaviours, the training course is far and away the dominant one.
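One common way to strengthen that causal claim, offered here only as a sketch of the general idea and with invented numbers, is to compare the change in a key performance indicator for the trained group against a comparable group that was not trained over the same period:

# Before/after comparison-group sketch; all numbers are invented.
# If both groups improved by the same amount, the improvement probably isn't the training.
trained_before, trained_after = 62.0, 74.0   # average KPI score for the trained group
control_before, control_after = 61.0, 65.0   # average KPI score for a comparable untrained group
trained_change = trained_after - trained_before   # 12.0
control_change = control_after - control_before   # 4.0
# The difference between the two changes is the part more plausibly attributable to the training.
attributable_to_training = trained_change - control_change   # 8.0
print(f"Improvement plausibly attributable to the training: {attributable_to_training:.1f} points")

Even then it is only 'more plausible', not proof; the two groups have to be genuinely comparable for the comparison to mean anything.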
Nobody said it would be easy, but I think I said it could and should be EASIER! We are getting there, I think!
Regards
Martin
From United Kingdom,
Hi Martin, thanks for pointing out the flaw. My intention was to give a broader outline of the measures that can be considered while devising effectiveness measures and metrics. Cheers
From India, Bangalore
Hello everybody,
Sir, I am actually doing this project for an organization and have been given two training programmes: a Management Development Programme and a Mentor-Mentee programme. These programmes were conducted 3-4 months ago, and now the organization wants to evaluate their effectiveness.
They took training feedback from the employees on their views, but I do not have that information either. I have designed a questionnaire for this purpose, but I do not have any pre-training information.
I am now in a dilemma about how to use the data I have collected through the questionnaires.
Please do help me in this regard. I will also post the questionnaires on this site.
Can we find out the ROI of these programmes? Since they are behavioural programmes, it is difficult to quantify the results.
I agree that people may overstate their performance and that such data is not reliable. Please keep sharing your views with me.
Regards
Pooja
From India, Indore
Hi Raashi, I am still struggling with my problem. I would be very happy if you could help me in this regard. Regards, Pooja
From India, Indore
Hi Rajesh
There was no flaw on your part: you presented a useful list. All I did was suggest that, to make additional use of these measures, we should approach them with some thought about their limitations.
I hope you and others continue to contribute this kind of material.
Thank you again!
Regards
Martin
From United Kingdom,