Hi,
I am a postgraduate (HR) student doing my summer training in an IT company. My project is on finding tools to measure training programmes. I want to implement the models in my organisation, and I need questionnaires and some guidance on how I can actually evaluate the impact of training programmes.
Thanks ... Diksha
From India, Madras
Hi Diksha, check out the topic "Training Needs Analysis Questionnaires" in the Human Resource Management section, by James Low. Hope this helps. Ciao, Vinisha.
From India, Mumbai
Hi Diksha,
Check out the topic "Help for project in training and development" in the Training & Development section, by smitaverma:
https://www.citehr.com/1-vt9732.html?start=0
Hope this helps too.
Ciao,
Vinisha.
From India, Mumbai
Hi Diksha,
Thanks to Vinisha, you now have the links on TNA.
Additionally, you asked about Kirkpatrick's model, which I am enclosing herewith.
Hope this helps..
Cheerio,
Rajat
This model, developed by Donald Kirkpatrick, articulates a four-step process.
* Level 1: Reactions.
At this level, we measure the participants’ reaction to the programme. This is measured through the use of feedback forms (also termed “happy sheets”). It throws light on the level of learner satisfaction. The analysis at this level serves as input to the facilitator and training administrator. It enables them to make decisions on continuing the programme, making changes to the content, methodology, etc.
* Level 2: Participant learning.
We measure changes pertaining to knowledge, skill and attitude. These are changes that can be attributed to the training. Facilitators utilise pre-test and post-test measures to check on the learning that has occurred. However, it is important to note that learning at this level does not necessarily translate into application on the job.
Measuring the effectiveness of training at this level is important as it gives an indication of the quantum of change vis-à-vis the learning objectives that were set. It provides critical input for fine-tuning the design of the programme. It also serves as a lead indicator for transfer of learning to the job context.
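To make the pre-test/post-test comparison concrete, here is a minimal sketch. The scores and the function name are hypothetical, invented for illustration; in practice the numbers would come from the written and practical tests mentioned later in this post.

```python
# Minimal sketch of a level-2 pre-test/post-test comparison.
# All scores are hypothetical illustration values (out of 100).

def average_gain(pre_scores, post_scores):
    """Mean improvement in test score across participants."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [45, 60, 52, 70]   # scores before the programme
post = [68, 75, 70, 85]  # scores after the programme

print(average_gain(pre, post))  # prints 17.75 (mean gain in marks)
```

A positive average gain indicates learning occurred, but, as noted above, it does not guarantee application on the job.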
* Level 3: Transfer of learning
At this level, we measure the application of the learning in the work context, which is not an easy task. It is not easy to define standards for measuring the application of learning, and one question always preys on people's minds: ‘Can all changes be attributed to the training?’
Inputs at this level can come from participants and their supervisors. It makes sense to obtain feedback from the participants on the application of learning on the job. This can be done a few weeks after the programme so that it gives the participants sufficient time to implement what they have learnt. Their inputs can indicate the cause of success or failure; sometimes it is possible that learning was good at level-2, but implementation did not happen due to system-related reasons. It can help the organisation deal with the constraints posed by systems and processes so that they do not come in the way of applying learning.
* Level 4: Results.
This measures effectiveness of the programme in terms of business objectives. At this level we look at aspects such as increase in productivity, decrease in defects, cycle time reduction, etc.
Many organisations would like to measure effectiveness of training at this level; the fact remains that it is not very easy to do this, as it is improbable that we can show direct linkage. However, it is worthwhile making the attempt even if the linkage at this level is indirect.
It is possible for organisations to measure effectiveness for all programmes at level-1 and level-2. This can be built into the design of the training programme.
I have found that it is easy to measure training programmes in technical and functional areas at level-3 and level-4. It is not easy to do this with behavioural skills programmes. Organisations that choose to measure training effectiveness can start with the former category before moving on to measuring behavioural skills at level-3 and level-4.
I will articulate an example to show how we can measure some training programmes at level-3 and level-4. Let us consider the case of an IT services company that conducts technical training programmes on products for its service engineers.
Learning at level-2 can be measured at the end of the programme by the use of tests—both written and practical. Measurement at level-3 is possible for these programmes by utilising the wealth of data the organisation will have on calls attended by engineers at various customer sites. This data is generally available in “Call Tracking Systems”.
I have found valuable insights by comparing data from the periods before and after the training programme. To simplify analysis, we can take a 24-week cycle: 12 weeks prior to the training and 12 weeks subsequent to the programme. The data gives a picture on aspects such as:
• How many calls did the engineer attend on the given product prior to and after the programme? We need to analyse this data. If sufficient calls were not taken after the training, is it due to the fact that there were no calls in this category or because the engineer was not confident to take calls?
• Comparison of the average time to complete a call. Did the cycle time to close similar calls reduce?
• Comparison of the quality of the solution, eg did the problem occur again within a specified period?
• Did the engineer change parts when they were not required to be changed? Such speculative change of spares gives an indication of the diagnostic capability of the engineer. Organisations get to know the details of such speculative changes when a so-called defective spare is returned by the repair centre with a statement that there is no problem with it.
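As a sketch only, the before/after comparison described above could be computed along these lines. The call records and field names here are hypothetical, not from any real call tracking system; an actual implementation would read from the organisation's own database.

```python
# Hypothetical level-3 analysis: compare call-tracking data for the
# 12 weeks before vs the 12 weeks after a training programme.
# Each record: (period, hours_to_close, problem_recurred, spares_changed)
calls = [
    ("before", 6.0, True,  2),
    ("before", 5.5, False, 1),
    ("before", 7.0, True,  3),
    ("after",  4.0, False, 0),
    ("after",  3.5, False, 1),
    ("after",  4.5, True,  0),
]

def summarise(period):
    """Summary metrics for one period: call count, average cycle time,
    repeat-problem rate, and total spares changed."""
    subset = [c for c in calls if c[0] == period]
    n = len(subset)
    return {
        "calls": n,
        "avg_hours_to_close": sum(c[1] for c in subset) / n,
        "repeat_rate": sum(c[2] for c in subset) / n,
        "spares_changed": sum(c[3] for c in subset),
    }

print(summarise("before"))
print(summarise("after"))
```

Comparing the two summaries answers the bulleted questions: whether enough calls were taken, whether cycle time fell, whether problems recurred, and whether speculative spare changes reduced.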
The data from the call tracking system and other related data give a clear indication of application on the job. However, I would not attribute all of the transfer of learning to the training. It is possible that the organisation has instituted mechanisms such as mentoring, or sending new engineers on calls with senior colleagues, to enable them to also learn on the job. Hence the data needs to be interpreted keeping the overall environment in mind.
This data can also be utilised to measure effectiveness at level-4. It is easy to calculate productivity increases and cost savings for the example cited above. The measures from level-3 can be converted into revenue or cost saving figures.
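Converting a level-3 measure into a level-4 figure is then simple arithmetic. All of the numbers below are invented for illustration; the cost rate and call volume would come from the organisation's own records.

```python
# Hypothetical level-4 conversion: cycle-time reduction -> cost saving.
# All figures are invented for illustration only.
calls_per_quarter = 120          # product calls handled after training
hours_saved_per_call = 2.0       # average reduction in time to close a call
cost_per_engineer_hour = 500.0   # loaded cost of an engineer-hour

quarterly_saving = calls_per_quarter * hours_saved_per_call * cost_per_engineer_hour
print(quarterly_saving)  # prints 120000.0
```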
Similarly, it is possible to conduct such measurement in software development, manufacturing, accounting and other functional areas. There are prerequisites to evaluating training effectiveness at this level: it is important for the organisation to institute strong indicators of performance.
There are mechanisms to measure the effectiveness of behavioural skills training at level-3, but these are cumbersome to implement and need a fair amount of investment by the organisation in terms of time and money.
Organisations that have chosen to implement assessment centres have been able to measure learning at this level. The assessment centre is a large topic in its own right and has been kept out of the scope of this article.
My suggestion to organisations that embark on measuring effectiveness of training is to measure all programmes at level-1 and level-2. The measures at level-3 and level-4 can start with the functional skills, before moving on to the behavioural skills programmes.
From India, Pune
Hi, can anyone help me with tools to capture training effectiveness at levels 2 and 3? It's for my final-semester MBA project. Regards, Ayeshwarya
From India
Hi, my name is Norlina. I am a PhD candidate and am very interested in investigating the factors that influence training transfer among entrepreneurs. I am looking at Level 3 of the Kirkpatrick model to evaluate training effectiveness (training transfer). I need help finding questionnaires to measure training transfer.
From Malaysia, Shah Alam