

Performance Improvement, vol. 48, no. 3, March 2009. ©2009 International Society for Performance Improvement

Published online in Wiley InterScience (www.interscience.wiley.com) • DOI: 10.1002/pfi.20059

PERFORMANCE PROFICIENCY AS A MEASURE OF LEARNING NEEDS

Paul Robbins

Level 2 evaluation often advocates that pretest and posttest measures be collected to assess

learning gains. In many military organizations, learning per se is less relevant than whether

graduates can proficiently perform the job tasks for which they were hired. The critical end

is performance proficiency. Incorporating methods for measuring existing performance levels as

part of a Level 2 evaluation process has helped the U.S. Coast Guard prove training success.

AS A GOVERNMENT AGENCY, the U.S. Coast Guard exists to accomplish its missions of safe ships and navigation, search and rescue, environmental response, law enforcement, and national security. Congress allots a certain number of personnel, and funding, to provide training for Coast Guard personnel beyond the minimum number required to accomplish the missions. It does so because the Coast Guard, like all military organizations, receives personnel at the entry level and provides just-in-time interventions throughout an individual's career to develop skill and knowledge in its workforce. The Coast Guard does not hire middle-level or senior-level proficiency from the job market.

Americans, through Congressional funding and oversight, expect the Coast Guard to train its personnel to proficiency with maximum effectiveness and efficiency. Learning for the sake of learning, that is, learning as an end worthy of measure, is not an efficient use of the citizens' resources.

As a training professional, I must ensure that the graduates of my training intervention have the proficiency to execute the tasks expected of them on the job. I measure my success at ensuring proficiency through valid, reliable, and high fidelity performance tests. I modify the training intervention based on the data from the performance tests.

Donald L. Kirkpatrick developed an evaluation framework for measuring training results, which comprises the following four levels (Kirkpatrick & Kirkpatrick, 2006, p. 21):

• Level 1—Reaction

• Level 2—Learning

• Level 3—Behavior

• Level 4—Results

Kirkpatrick writes: "Evaluate knowledge, skills, and/or attitudes both before and after the program" (Kirkpatrick & Kirkpatrick, 2006, p. 43). This often involves the use of a pretest and a posttest to prove learning has occurred. However, in the world of armed forces work, it is not always cost effective to pretest personnel, most of whom have not executed the task before entering the training intervention, with a valid, reliable, high fidelity performance test. Such pretesting would increase the length of a typical course by up to one-fourth.

The performance test can enhance the function of a Level 2 evaluation, but such tests do not necessarily measure learning: it is assumed that if baseline proficiency is present, then any necessary skills, knowledge, information, and attitude must be present as well. Proficiency gains are then the end result. Any "learning" is incidental, a means to the end, and the concern for learning is only as a necessary process in the service of performance proficiency.

In a military training environment, reframing Level 2 evaluation from measures of simple learning gains to measures of baseline performance proficiency allows for the effective and efficient documentation of proficiency.


Through appropriate analysis of preexisting proficiency, the organization establishes which tasks require a training solution, and which do not, for a given work population. Through appropriate instructional analysis, design, and development, the instructional materials used in the training intervention reflect the level of skill and knowledge generally found in the student population on the first day of the intervention. When the students have proceeded through the intervention for a task, they are ready for the evaluation of their performance gains and improved proficiency.

PERFORMANCE TESTS

What then makes for an effective and efficient performance test and, by extension, a Level 2 evaluation process? First, consider the performance test.

Many experts contend that there should be a performance test for every terminal performance objective in a curriculum. The terminal performance objective should match the expectations for performance by a graduate on the job. By extension, this means that the conditions, behavior, and standards on the performance test should match those of the terminal performance objective, and thus the job.

In a like manner, the performance test should contain the same cues used on the job during task performance. All the cues—no more and no less—should be tested.

One key component of the performance test is the performance observation checklist. The observation checklist precisely describes the specific, observable quality characteristics of the output produced on the job as a result of performing the task. Those characteristics are derived from the analysis of the job, via the terminal performance objective, and should again match those described in the objective. Each quality characteristic on the checklist should be described such that performance to standards results in a "yes" being checked on the checklist.
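The observation checklist described above lends itself to a simple data representation. The following sketch is illustrative only (the class and field names are mine, not from any Coast Guard system); it shows quality characteristics worded for yes/no observation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QualityCharacteristic:
    # One specific, observable characteristic of the task output,
    # worded so that performance to standard yields a "yes."
    description: str
    met: Optional[bool] = None  # None until the certifying official observes it

@dataclass
class ObservationChecklist:
    task: str
    characteristics: list = field(default_factory=list)

    def all_met(self) -> bool:
        # Performance to standard means every item is checked "yes."
        return all(c.met is True for c in self.characteristics)

checklist = ObservationChecklist(
    task="Write a terminal performance objective",
    characteristics=[
        QualityCharacteristic("Contains the action verb executed on the job"),
        QualityCharacteristic("Contains the output produced on the job"),
    ],
)
for c in checklist.characteristics:
    c.met = True
print(checklist.all_met())  # True
```

An item left unobserved (still None) counts as not met, which mirrors the checklist's conservative default: only an explicit "yes" on every item indicates performance to standard.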

If the terminal performance objective describes standards regarding safety, security, regulations, legal requirements or liability, or public relations concerns, then the performance test must evaluate the actions of the student as well as the output of the task. In such a case, each specific, observable action on the checklist should be listed in the sequence of action and numbered. In addition, each action should be described in specific, observable behavioral terms, as an observer would observe the performance to standard on the job.

Although the observation checklist may be the center of attention for the student and the official certifying performance, it is vital that the performance test include instructions to the certifying official to ensure that the test is administered in a reliable and consistent fashion. The instructions on the performance test should include the following categories of information, as appropriate:

• Scheduling

• Equipment and resources required for conducting the test

• Assistance allowed for the student

• Tools, equipment, references, and information allowed for the student

• Verbatim instructions to be read to or by the student

• How to score the results of the test

• Specific remedial instructions to assign to the student during the feedback session after the performance test

The remedial procedures mentioned above should refer directly to specific errors in the student's performance on the test, as measured against the items on the observation checklist.

Spaces should be included on the observation checklist for the student's name, the date, and the certifying official's signature.

Finally, the performance test and the observation checklist should include the date of their version's publication. This ensures that the appropriate version of the test is being administered.
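Taken together, the elements above describe the anatomy of a complete performance test document. A minimal sketch of that structure follows; every field name here is my own, chosen for illustration rather than taken from any Coast Guard format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PerformanceTest:
    # Anatomy of a performance test document; field names are illustrative.
    title: str
    version: date               # publication date identifies the correct version
    scheduling: str
    equipment_and_resources: str
    assistance_allowed: str
    tools_and_references: str
    verbatim_instructions: str  # read to or by the student, word for word
    scoring: str
    remediation: str            # tied to specific errors on the checklist

test = PerformanceTest(
    title="TPO Test for Writing a Terminal Performance Objective",
    version=date(2003, 6, 17),
    scheduling="Test when the student reports ready.",
    equipment_and_resources="Writing materials or a computer.",
    assistance_allowed="Task information only; the student writes alone.",
    tools_and_references="Coast Guard Job Aid for Writing Performance Objectives.",
    verbatim_instructions="You are now ready to write the performance objective...",
    scoring="Mark the checklist; sign only if every item is yes.",
    remediation="Refer the student to the relevant job aid step for each no.",
)
print(test.version.isoformat())  # 2003-06-17
```

Treating the version date as a structured field, rather than free text, makes it trivial to confirm that the copy being administered is the current one.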

EXAMPLE

Exhibit 1 is an example of a performance test for writing a terminal performance objective. Note that on the observation checklist portion of the test, only the quality characteristic of the product to be produced is listed, that is, the terminal performance objective. It is not necessary for the certifying official to observe the student's actions. The certifying official needs only to certify the quality of the objective produced by the student.

EXHIBIT 1. EXAMPLE OF A PERFORMANCE TEST

TPO TEST FOR WRITING A TERMINAL PERFORMANCE OBJECTIVE, VERSION JUNE 17, 2003

Introduction: This test evaluates the student's proficiency in writing terminal performance objectives. The test measures the standards of the output, that is, the objective only. It does not measure the actions the student takes to write the objective.

TPO: The terminal performance objective measured by this test is described below.

Performance: The course designer will write a terminal performance objective (TPO).

Conditions: Given a job title (and specialty, if applicable) and a task performed on the job, and using the Coast Guard Job Aid for Writing Performance Objectives.

Standards: The objective will list the performance (action and output) as it is produced on the job. The objective will list the conditions applicable to the job, including tools and equipment to be used as well as any environmental constraints. The objective will describe clearly and precisely the criteria for successful performance, including, as applicable, accuracy, time, productivity, and safety.

Scheduling:

1. Test the students when they tell you they are ready to write the performance objective for their assigned task.

2. Give the student a copy of the observation checklist for Writing Performance Objectives.

3. Tell the students to make sure the objective matches the standards on the checklist when they give you the completed objective for evaluation.

Materials: The student may write the objective by hand or on a computer. The student may give you a paper version or an electronic version.

Assistance: The student is allowed to use the Coast Guard Job Aid for Writing Performance Objectives, any references related to instructional design and development, and any information concerning the assigned task. The student may not receive assistance from anyone except to gather more information regarding the assigned task.

Instructions: Read the following statement to the student:

"You are now ready to write the performance objective for your assigned task. Here is your observation checklist for Writing a Performance Objective.

Use the Coast Guard Job Aid for Writing Terminal Performance Objectives and the data concerning your assigned task as you write your objective. You may also use any reference materials related to the design and development of instruction.

You may get assistance related to understanding your assigned task, but you must write the objective by yourself.

When you are finished writing the objective, compare your results with the data on the observation checklist. When you are confident that your objective matches the standards on the checklist, give me a paper copy of the objective, or send me the objective as an attachment to an email message.

I will then compare your objective to the standards on the checklist and give you feedback. What are your questions?"

Scoring results: After comparing the student's objective with the observation checklist, mark the checklist and use the decision table below.

IF you checked . . .        THEN . . .
Yes for every item          Sign and date the checklist
No for any item             Remediate

Remediation: For each item you marked "no" on the checklist:

1. Refer the student to the relevant task and step of the Job Aid for Writing Terminal Performance Objectives.

2. Have the student compare and contrast the data from his or her assigned task with the guidance in the job aid.

3. Tell the student to write the objective again and then show the objective to you for review.

(Exhibit 1 continues below with the observation checklist.)
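The scoring decision table in Exhibit 1 reduces to a simple rule: sign and date only when every checklist item is marked yes; otherwise remediate the specific items marked no. A sketch of that logic (the function and variable names are mine, not from the exhibit):

```python
def score_checklist(marks):
    # marks: dict mapping checklist item -> True ("yes") or False ("no")
    failed = [item for item, met in marks.items() if not met]
    if not failed:
        return "sign and date the checklist", []
    # Remediation targets only the items marked "no"
    return "remediate", failed

decision, items = score_checklist({
    "Contains the action verb executed on the job": True,
    "Contains the output produced on the job": False,
})
print(decision)  # remediate
print(items)     # ['Contains the output produced on the job']
```

Returning the failed items alongside the decision matches the exhibit's remediation step, which directs the student to the job aid task corresponding to each "no."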

EFFECTIVENESS AND EFFICIENCY MEASURES

It is possible to measure performance proficiency with a training intervention in at least four ways (U.S. Coast Guard Leadership Development Center, 2006, pp. 71–74). Two of the measures are from the perspective of effectiveness and two are from the perspective of efficiency. The measures of effectiveness are the quality of the product and on-time delivery of the product. The measures of efficiency are cycle time and waste.

The performance test establishes the quality of the product, which, in this case, is a graduate who is proficient at performing the task expected on the job.

The extent to which the students meet the requirements of the performance test on the first try is one measure of on-time delivery. When an unacceptable number or percentage of students do not meet the requirements on the first try, there is a need for analysis as to what elements, what points in the training cycle, are insufficient to produce on-time performance proficiency.

With respect to efficiency, the cost of remediation (waste in the form of rework), that is, providing additional instructional support after the performance test, can be described in time and dollars. This measure of waste enables management to identify which task training modules are the highest priority for improving the training process and products.

When 100% of the students meet the requirements of the performance test on the first try, management can choose to put its efforts into reducing the time it takes to move students from "cannot do" to "can do." This reduction in cycle time attacks the greatest cost of training: the pay, benefits, and lost work opportunity of a worker who is in training instead of on the job.
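The four measures above are straightforward to compute from per-student test records. The sketch below assumes hypothetical record fields (first-try result, remediation hours, days in training) that a training center would need to collect; none of these names come from the Coast Guard guide:

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    passed_first_try: bool    # met the performance test requirements on the first try
    remediation_hours: float  # rework: instructional support after a failed test
    training_days: float      # cycle time from "cannot do" to "can do"

def training_metrics(records, hourly_cost):
    n = len(records)
    return {
        # Effectiveness: on-time delivery as the first-try pass rate
        "first_try_pass_rate": sum(r.passed_first_try for r in records) / n,
        # Efficiency: waste as the dollar cost of remediation (rework)
        "remediation_cost": sum(r.remediation_hours for r in records) * hourly_cost,
        # Efficiency: average cycle time, the dominant cost of training
        "avg_cycle_time_days": sum(r.training_days for r in records) / n,
    }

records = [StudentRecord(True, 0.0, 10.0), StudentRecord(False, 4.0, 12.0)]
m = training_metrics(records, hourly_cost=50.0)
print(m)  # {'first_try_pass_rate': 0.5, 'remediation_cost': 200.0, 'avg_cycle_time_days': 11.0}
```

In this toy data, one of two students passed on the first try (rate 0.5), the failed student's four hours of rework cost $200 at an assumed $50 hourly rate, and the average cycle time is 11 days, the figure management would work to shrink once the first-try pass rate reaches 100%.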

EXHIBIT 1. EXAMPLE OF A PERFORMANCE TEST (continued)

TPO TEST FOR WRITING A TERMINAL PERFORMANCE OBJECTIVE, VERSION JUNE 17, 2003

Quality characteristic                                                        Yes   No

1. The performance statement:
   • Contains the action verb executed on the job                             ☐     ☐
   • Contains the output produced on the job                                  ☐     ☐

2. The conditions statement lists:
   • Conditions that match as closely as possible those found on the job      ☐     ☐
   • Cue or cues that stimulate the performer to act                          ☐     ☐
   • Tools and equipment used to perform the TPO                              ☐     ☐
   • References, job aids, and assistance used to perform the TPO             ☐     ☐
   • The range of conditions typically found on the job                       ☐     ☐
   • Any restrictions placed on the performance or its environment            ☐     ☐

3. The standards statement:
   • Matches the standards found on the job                                   ☐     ☐
   • Avoids words subject to misinterpretation                                ☐     ☐
   • Lists only the criteria that separate acceptable performance
     from unacceptable                                                        ☐     ☐
   • Lists specific, observable characteristics of the output                 ☐     ☐
   • IF specific actions are required, THEN the actions are in sequence
     and are specific, observable behavior                                    ☐     ☐

Student's Name/Class ______________   Date ______________   Testing Official's Name/Signature ______________

SUMMARY AND RECOMMENDATIONS

With an expanded focus on performance proficiency measures of training effectiveness, Level 2 could be said to be the most important of Kirkpatrick's levels of evaluation with respect to the concerns of valid, reliable, and high fidelity training.

By incorporating measures of the above-mentioned elements in a Level 2 evaluation process, the success of a training intervention can be continuously improved. Success is the degree to which graduates achieve performance proficiency with job-related tasks, during training, without posttest remediation.

References

Kirkpatrick, D.L., & Kirkpatrick, J.D. (2006). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.

U.S. Coast Guard Leadership Development Center. (2006). Performance improvement guide (5th ed.). Boston: U.S. Government Printing Office.

PAUL ROBBINS serves as the deputy director of training for curriculum development at the U.S. Coast Guard Training Center in Petaluma, California. The Training Center produces the Coast Guard's electronics technicians, emergency medical technicians, food service specialists, health services specialists, information technology specialists, instructors, operations specialists, storekeepers, and yeomen. The Training Center is also the proud home of the Chief Petty Officers Academy. A trainer since 1982, his professional practice includes accomplishment-based training, performance testing, on-the-job training, training doctrine, continuous quality improvement, teamwork, strategic planning, distance learning, performance support, and organizational culture. He may be reached at [email protected].