
Chapter Eleven: Training Evaluation

© 2010 by Nelson Education Ltd.


Learning Outcomes

Define training evaluation and the main reasons for conducting evaluations

Discuss the barriers to evaluation and the factors that affect whether or not an evaluation is conducted

Describe the different types of evaluations

Describe the models of training evaluation and the relationship among them


Learning Outcomes

Describe the main variables to measure in a training evaluation and how they are measured

Discuss the different types of designs for training evaluation, as well as their requirements, limits, and when they should be used


Instructional Systems Design Model


Instructional Systems Design Model

Training evaluation is the third step of the ISD model and consists of two parts:
• The evaluation criteria (what is being measured)
• Evaluation design (how it will be measured)

These concepts are covered in the next two chapters

• Each has a specific and important role to play in the effective evaluation of training and the completion of the ISD model


What is Training Evaluation?

The process of assessing the value, or worthiness, of training programs to employees and to organizations


Training Evaluation

Not a single procedure; a continuum of techniques, methods, and measures

Ranges from simple to elaborate procedures

The more elaborate the procedure, the more complete the results, yet usually the more costly (time, resources)

Need to select the procedure based on what makes sense and what can add value within resources available


Why Conduct Training Evaluations?

Assist managers in identifying what, and who, should be trained

Determine cost-benefits of a program

Determine if a training program has achieved expected results

Diagnose strengths and weaknesses of a program and pinpoint needed improvements

Justify and reinforce the value of training


Barriers to Training Evaluation

Barriers fall into two categories:

1. Pragmatic
• requires specialized knowledge and can be intimidating
• data collection can be costly and time consuming

2. Political
• potential to reveal ineffectiveness of training


Types of Training Evaluation

Evaluations may be distinguished from each other with respect to:

1. The data gathered and analyzed

2. The fundamental purpose of the evaluation


Types of Training Evaluation

1. The data gathered and analyzed

a. Trainee perceptions, learning and behaviour at the conclusion of training

b. Assessing psychological forces that operate during training

c. Information about the work environment
• Transfer climate and learning culture


Types of Training Evaluation

2. The purpose of the evaluation:

a. Formative: provide data about various aspects of a training program

b. Summative: provide data about worthiness or effectiveness of a training program

c. Descriptive: provide information that describes the trainee once they have completed a training program

d. Causal: provide information to determine if training caused the post-training behaviours


Models of Training Evaluation

A. Kirkpatrick’s Hierarchical Model

Oldest, best known, and most frequently used model.

The Four Levels of Training Evaluation:
– Level 1: Reactions
– Level 2: Learning
– Level 3: Behaviours
– Level 4: Results


Models of Training Evaluation

Kirkpatrick’s Model provides a systematic framework for assessing training

The four levels are presented in a hierarchy, with each level providing more important information than the preceding one

The model assumes all levels are positively related to each other and that each level has a causal effect on the next

The contributions of the Kirkpatrick model should not be underestimated: it is clear and simple, it demystified evaluation, and it provided the impetus for research


Models of Training Evaluation

There is general agreement that the four levels are important outcomes to be assessed, but there are some critiques:
• Doubt about the model’s validity
• Insufficiently diagnostic
• Requires all training evaluations to rely on the same variables and outcome measures


Models of Training Evaluation

B. COMA Model

A training evaluation model that involves the measurement of four types of variables

1. Cognitive

2. Organizational Environment

3. Motivation

4. Attitudes


Models of Training Evaluation

The COMA model improves on Kirkpatrick’s model in three ways:

1. Integrates a greater number of measures

2. Measures are causally related to training success

3. Defines variables with greater precision

However, the COMA model is relatively new, so it is too early to determine its value; its focus is on factors that affect transfer only; and it does not specify how evaluations should be conducted


Models of Training Evaluation

C. Decision-Based Evaluation Model

A training evaluation model that specifies the target, focus, and methods of evaluation


Models of Training Evaluation

Decision-Based Evaluation Model

Goes further than either of the two preceding models:

1. Identifies the target of the evaluation

2. Identifies its focus

3. Suggests methods

4. Is general to any evaluation goal

5. Is flexible: guided by the target of the evaluation


Models of Training Evaluation

As with COMA, the DBE model is recent and will need to be tested more fully

All three models require specialized knowledge to complete the evaluation; this can limit their use in organizations without this knowledge

Holton and colleagues’ Learning Transfer System Inventory (seen in Chapter 10) provides a more generic approach

See Training Today 11.2 for more on its use for evaluation


Training Evaluation Variables

Training evaluation requires data be collected on important aspects of training

Some of these variables have been identified in the three models of evaluation

A more complete list of variables is presented in Table 11.1, and Table 11.2 shows sample questions and formats for measuring each type of variable


Training Evaluation Variables

A. Reactions

B. Learning

C. Behaviour

D. Motivation

E. Self-efficacy

F. Perceived/anticipated support

G. Organizational perceptions

H. Organizational results

See Table 11.1 in the text


Training Evaluation Variables

A. Reactions

1. Affective reactions: measures that assess trainees’ likes and dislikes of a training program

2. Utility reactions: measures that assess the perceived usefulness of a training program
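To make the measurement idea concrete, here is a minimal sketch of how affective and utility reaction items might be scored once collected. The item wordings, the 5-point scale, and the scoring-by-averaging approach are illustrative assumptions, not the measures in Table 11.2.

```python
from statistics import mean

# Hypothetical 5-point responses (1 = strongly disagree, 5 = strongly agree) from
# one trainee; the item wordings are illustrative, not the items in Table 11.2.
responses = {
    "affective": {
        "I enjoyed the training program": 4,
        "The sessions held my interest": 5,
    },
    "utility": {
        "I will be able to use what I learned in my job": 3,
        "The program was relevant to my work": 4,
    },
}

# One score per reaction type, obtained by averaging the items on that scale.
scores = {scale: mean(items.values()) for scale, items in responses.items()}
print(scores)  # {'affective': 4.5, 'utility': 3.5}
```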


Training Evaluation Variables

B. Learning

Learning outcomes can be measured by:

1. Declarative learning: refers to the acquisition of facts and information, and is by far the most frequently assessed learning measure

2. Procedural learning: refers to the organization of facts and information into a smooth behavioural sequence


Training Evaluation Variables

C. Behaviour

Behaviours can be measured using three approaches:

1. Self-reports

2. Observations

3. Production indicators


Training Evaluation Variables

D. Motivation

Two types of motivation in the training context:

1. Motivation to learn

2. Motivation to apply the skill on the job (transfer)

E. Self-efficacy

Refers to the beliefs that trainees have about their ability to perform the behaviours that were taught in a training program.


Training Evaluation Variables

F. Perceived and/or Anticipated Support

Two important measures of support are:

1. Perceived support: The degree to which the trainee reports receiving support in attempts to transfer the learned skills

2. Anticipated support: The degree to which the trainee expects to be supported in attempts to transfer the learned skills


Training Evaluation Variables

G. Organizational Perceptions

Two scales designed to measure perceptions:

1. Transfer climate: can be assessed via a questionnaire that identifies eight sets of “cues”

2. Continuous learning culture: can be assessed via the questionnaire presented in Trainer’s Notebook 4.1 in Chapter 4 of your text


Training Evaluation Variables

G. Organizational Perceptions (cont'd)

Transfer climate cues include:

• Goal cues

• Social cues

• Task and structural cues

• Positive feedback

• Negative feedback

• Punishment

• No feedback

• Self-control


Training Evaluation Variables

H. Organizational Results

Results information consists of:

1. Hard data: results that can be measured objectively (e.g., number of items sold)

2. Soft data: results that are assessed through perceptions and judgments (e.g., attitudes)

3. Return on expectations: measurement of a training program’s ability to meet managerial expectations
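As an illustration of how hard data can feed a simple cost-benefit comparison, here is a minimal sketch; the figures and the benefit-cost calculation are assumptions made for illustration, not values or a method from the text.

```python
# Illustrative hard-data figures; every value below is an assumption.
training_cost = 25_000.0      # design, delivery, and trainee time
extra_items_sold = 1_200      # post-training increase attributed to the program
margin_per_item = 30.0        # contribution margin per item sold

benefit = extra_items_sold * margin_per_item   # 36,000
net_benefit = benefit - training_cost          # 11,000
benefit_cost_ratio = benefit / training_cost   # 1.44

print(f"Net benefit: {net_benefit:,.0f}; benefit-cost ratio: {benefit_cost_ratio:.2f}")
```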


Data Collection Designs in Training Evaluation

The manner in which data collection is organized and how the data will be analyzed

All data collection designs compare the trained person to something


Data Collection Designs in Training Evaluation

1. Non-experimental designs: the comparison is made to a standard and not to another group of (untrained) people

2. Experimental designs: the trained group is compared to another group that does not receive the training, and people are assigned to the training and non-training groups at random

3. Quasi-experimental designs: the trained group is compared to another group that does not receive the training, but the assignment of people to the training and non-training groups is not random

A small numerical sketch of the trained-versus-untrained comparison used in these designs follows below.
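In this minimal sketch the pre- and post-test scores are invented, and the gain-score comparison shown is just one common way to analyze such data, not the only one.

```python
from statistics import mean

# Hypothetical pre- and post-training test scores. In an experimental design the
# two groups would be formed by random assignment; in a quasi-experimental design
# they would not, but the comparison is computed the same way.
trained   = {"pre": [52, 48, 55, 60], "post": [70, 66, 72, 75]}
untrained = {"pre": [50, 53, 49, 58], "post": [54, 55, 50, 61]}

def gain(group):
    """Average improvement from the pre-test to the post-test."""
    return mean(group["post"]) - mean(group["pre"])

# The training effect is estimated as the extra gain of the trained group over the
# gain the untrained group showed over the same period.
effect = gain(trained) - gain(untrained)
print(f"Trained gain: {gain(trained):.1f}, untrained gain: {gain(untrained):.1f}, "
      f"estimated training effect: {effect:.1f}")
```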


Data Collection Designs in Training Evaluation

Different Types of Evaluation Designs

Design A: The single group post-only design*
Design B: The single group pre-post design*
Design C: The time series design*
Design D: The single group design with control group**
Design E: The pre-post design with control group**
Design F: The time series design with comparison group**
Design G: The internal referencing strategy***

* Non-experimental designs
** Causal designs: experimental or quasi-experimental
*** Hybrid design that permits some of the conclusions of causal designs



Data Collection Designs in Training Evaluation

[Diagram: Design A, the single group post-only design (non-experimental), and Design B, the single group pre-post design (non-experimental), showing pre- and post-training measurement points]


Data Collection Designs in Training Evaluation

[Diagram: Design C, the time series design (non-experimental), and Design D, the single group design with control group, showing pre- and post-training measurement points for trained and untrained groups]


Data Collection Designs in Training Evaluation

[Diagram: Design E, the pre-post design with control group, and Design F, the time series design with comparison group, showing pre- and post-training measurement points for trained and untrained groups]


Data Collection Designs in Training Evaluation

[Diagram: Design G, the internal referencing strategy, showing pre- and post-training measures on items relevant to the training and on items not covered by the training]
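A minimal sketch of the logic behind the internal referencing strategy: if training worked, pre-to-post improvement should be larger on items the training covered than on comparable items it did not. The item scores below are invented for illustration.

```python
from statistics import mean

# Hypothetical proportion-correct scores on test items before and after training;
# "relevant" items cover trained content, "irrelevant" items do not.
relevant   = {"pre": [0.40, 0.35, 0.50], "post": [0.80, 0.75, 0.85]}
irrelevant = {"pre": [0.45, 0.40, 0.55], "post": [0.50, 0.42, 0.57]}

gain_relevant   = mean(relevant["post"]) - mean(relevant["pre"])
gain_irrelevant = mean(irrelevant["post"]) - mean(irrelevant["pre"])

# A markedly larger gain on the relevant items suggests the improvement is due to
# the training rather than to retesting, maturation, or other outside events.
print(f"Gain on relevant items: {gain_relevant:.2f}; "
      f"on irrelevant items: {gain_irrelevant:.2f}")
```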


Data Collection Designs in Training Evaluation

Decisions about which data collection design to use need to be considered in the training design stage

For example, if a trainer wants to collect pre- and post-training data, this will need to be factored into the design and administration of the program at the design stage


Summary

The main purposes for evaluating training programs, as well as the barriers to evaluation, were discussed

Three models of training (Kirkpatrick, COMA, and DBE) were presented, critiqued and contrasted

It was recognized that the Kirkpatrick model is the most frequently used, yet it has some limitations

The variables required for an evaluation, as well as the methods and techniques used to measure them, were described

The main types of data collection designs were presented

Factors influencing the choice of data collection design were discussed