
Human Performance Engineering: A Practical Approach to Application of Human Factors

P.M. Haas, Ph.D.
Concord Associates, Inc.

Knoxville, TN

NPRA National Safety Conference
April 28-30, 1999

Dallas, Texas

ABSTRACT

We cannot “engineer” humans, but we can engineer human performance. To assure maximum safety and effective operations, we must apply the same degree of rigor and systematic analysis to engineering human performance as we do to engineering process and equipment performance. The field of Human Factors offers principles, methods and data for designing the “human-machine interface” to improve the safety, effectiveness, efficiency, and ease of use of engineered systems by human operators. Human Factors is primarily a design discipline, and is most effective when applied as part of a total system engineering design effort for a new system. Unfortunately, design and development of totally new industrial facilities within the U.S. has been very limited for a number of years while demands for enhanced safety and performance at existing facilities have increased dramatically. The petrochemical, refining, and other industries require from Human Factors specialists a practical, systematic approach to improving human performance in existing facilities without extensive system re-design. This paper presents an integrated, practical approach, referred to as “Human Performance Engineering” (HPE) for doing just that – improving human performance, and thereby improving overall system performance, in an operational facility. It outlines a practical guide for engineering the “human side” of the system (i.e., for building the underlying structures and management systems that will produce and support safe and effective human performance). Implementation of a comprehensive HPE program will dramatically reduce human error and enhance process safety. It also will help to establish and support a culture of continual improvement in human performance, a culture of “operational excellence”.


SYSTEMS ENGINEERING

Human Factors as a discipline has been most effective when practiced as an integral part of the systems engineering design process. From the systems engineering perspective, the system (Figure 1) includes not only equipment or hardware, but also software, people, administrative controls, procedures, job aids, facilities, and support organizations – all immersed in and influenced by the organizational culture and the social/technical environment surrounding the system. The internal and external customers drive the system mission and performance specifications. The essence of the systems view is that all of the parts of the system must be designed and utilized in a way that leads to successful performance of the total system. The systems engineering process starts in the early conceptual stages of design by specifying the system mission, goals, and overall performance requirements. It continues with the definition and allocation of functions, trade-off studies, and design analysis, or synthesis, to obtain and document specific performance requirements for each element of the system, including the human element. It also specifies the design of interfaces and interrelationships among the different elements (e.g., the human-machine interface). Systems engineering is a process of optimization against the top-level goals and performance requirements, which include safety, reliability, operability, maintainability, environmental responsibility, and other system characteristics.

[Figure 1 depicts the socio-technical system as nested layers: the social and physical environment and the corporate/organizational culture surround the system elements – hardware, software, administrative controls, procedures and job aids, facilities and support organizations, and the personnel subsystem (job/organization design, selection and staffing, training and qualification, supervision, motivation and attitudes) – which together produce individual and team performance.]

Figure 1. The “Socio-Technical” System


It is our belief that applications of systems engineering concepts and approaches in the future design of petrochemical facilities could dramatically improve the safety and effectiveness of operations. However, we recognize that very few totally new systems are being designed and built in the U.S. at this time. The immediate concern is what we can do to improve the performance of existing systems. We believe we can apply systems engineering principles to engineer (re-engineer) the performance of existing systems. In our company we refer to this as systems performance engineering (Figure 2).

[Figure 2 shows the systems performance engineering loop: define goals, objectives, and requirements; develop and validate measures; measure performance; if performance is acceptable, continue monitoring; if not, develop and implement modifications and measure their effectiveness, iterating until performance is acceptable.]

Figure 2. Systems Performance Engineering
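For readers who prefer code to flowcharts, the Figure 2 loop can be restated in a few lines. This is purely an illustrative sketch: every function name and the simulated measurements below are hypothetical placeholders for the corresponding activities in the figure, not part of any published method.

```python
import random

# Hypothetical stand-ins for the Figure 2 activities; a real program would
# replace these with plant-specific goals, validated measures, and actions.
def define_goals_and_requirements():
    return {"target": 0.95}                    # required performance level

def measure_performance():
    return random.uniform(0.80, 1.00)          # simulated validated measure

def develop_and_implement_modifications():
    print("  developing and implementing modifications...")

def systems_performance_engineering(cycles=5):
    goals = define_goals_and_requirements()
    for cycle in range(cycles):
        perf = measure_performance()
        print(f"cycle {cycle}: performance = {perf:.2f}")
        if perf >= goals["target"]:
            continue                           # first OK? branch: keep monitoring
        develop_and_implement_modifications()
        # second OK? branch: measure the effectiveness of the modifications
        while measure_performance() < goals["target"]:
            develop_and_implement_modifications()

systems_performance_engineering()
```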

HUMAN PERFORMANCE ENGINEERING

Further, we believe that the most fruitful area for improving the performance of existing systems is human performance. We refer to the application of systems performance engineering to the human element of the system as human performance engineering. The basic technical discipline in human performance engineering is, of course, human factors, or ergonomics, a specialty focused on design of the “human-machine interface”. However, the term “human performance engineering” is intended to convey a considerably broader perspective and includes areas such as training system design and administrative controls (conduct of operations) that typically are viewed as human resource or management issues. Human performance engineering addresses all factors that influence human performance. The term also is intended to emphasize the notion that we can engineer – apply scientific principles to plan, design, construct, and manage – human performance. Human performance engineering offers a systematic and comprehensive framework for improving human performance and thereby improving overall system performance. While the activities comprising human performance engineering can be organized in different ways, a comprehensive program


will address at least the following six areas, which are discussed in the subsequent paragraphs:

1. Human Error Assessment - “prospective” analysis for process hazards analysis and “retrospective” analysis for incident investigation

2. Conduct of Operations - disciplined professionalism

3. Personnel Subsystems - selection, training and qualifications

4. Procedures - design for effective use

5. Ergonomics - human-machine-interface design

6. Measurement and Feedback - continuous measurement of human, organizational, and management system performance; learning from successes and failures; and feedback to all personnel.

AREA 1: HUMAN ERROR ASSESSMENT

Human error assessment is used by safety analysts and engineers in two ways: (1) prospectively, to identify the likelihood of human errors initiating, exacerbating, or mitigating accident events; and (2) retrospectively, to identify human error-related causes of events. In the past, human error assessment in either mode often has been rather superficial. Many PHAs claim to have included “operator action” but in fact treat operator action primarily as a mitigating factor, often with a 100% likelihood of success. Many root cause analyses claim to address human error, but in effect dig no deeper into causal factors than identifying “human error” as the root cause.

Quantitative vs. Qualitative HRA

One of the reasons for past reluctance to employ human reliability analysis (HRA) techniques may be that most of the techniques have been developed and presented in the context of quantitative risk assessment (QRA). Table 1 lists some of the better-known HRA techniques that have been used in the U.S. Originating in the military and aerospace community, and most widely and recently used by the nuclear power community, QRA methods are often perceived as being very detailed, intensive, and expensive. Quantitative HRA methods associated with QRA may be viewed to some extent in the same vein – too detailed, too expensive, and lacking a sufficient database to warrant the complexity and cost.

Table 1. Human Reliability Analysis Techniques

Technique                                                  Primary Applications
OHPRA (Concord Associates, Inc.)                           Petrochemical industry
THERP (Sandia Lab)                                         Nuclear power, weapons assembly
SLIM, SLIM-MAUD (Brookhaven Lab)                           Nuclear power operations
Simulation Models (e.g., SAINT, Siegel, HOS, MicroSaint)   Military systems design
OAT, OAET (NRC, INEL)                                      Nuclear power operations
HEART (British Gov’t)                                      Industrial, nuclear
MAPPS (NRC, Oak Ridge Nat’l Lab)                           Nuclear power maintenance
ATHEANA (NRC)                                              Nuclear power operations

In fact, qualitative HRA can be very powerful, practical, and cost-effective for both prospective analyses, such as PHA, and retrospective analyses, such as root cause investigation. Subjective, “comparative” estimates of the likelihood of human error are relatively straightforward to obtain. Experts can make judgments that are meaningful and precise enough to identify problems and make decisions regarding necessary improvements, without performing a detailed quantitative analysis. There are a number of techniques, including the qualitative portions of some of the quantitative approaches in Table 1 above, that can be used very effectively to identify important “error-likely situations” without extensive quantitative analysis. One technique, developed by Concord as a qualitative tool specifically for the process industries, is the Operational Human Performance Reliability Analysis (OHPRA) method [1].

OHPRA – A Practical Human Performance Model for Petrochemical Operations

The conceptual model for the OHPRA operations module [2] is illustrated in Figure 3 (there is a similar one for maintenance activities). It assumes that operator tasks can be treated as consisting of a “cognitive” portion, in which the operator correctly or incorrectly diagnoses the situation and chooses the goal(s) and action(s) to be taken, and an “action” portion, in which the chosen actions are executed. Success on a task requires success on both portions. Likelihood of success on the cognitive portion considers the cues and information available to the operator for diagnosis and decision making, the operator’s level of knowledge and abilities, and stresses such as time demands. The model assumes that given: (1) an adequate cue, (2) appropriate operator knowledge and abilities, and (3) a manageable level of stress, the operator will choose the correct action. If the operator does not choose the correct action, the entire task is failed.

The four factors considered under “opportunity to perform” are as follows:

Access to operating area
Covers elements such as habitability, people resources, actions that place the operator at risk, etc. The ability of the operator to gain access to the equipment, controls, information, etc. needed to perform the manual actions called for, including the personnel safety hazards that may be involved.

Equipment availability/operability
Deals with systems’ and components’ ability to function or be utilized by the operator. The operability of the necessary equipment, controls, information systems, special tools, etc., needed to support the operator’s manual actions.


[Figure 3 depicts the OHPRA operations model as a flow diagram: an operations cue to action feeds a cognitive phase in which unit operator stress and knowledge and abilities determine whether the correct action is chosen (an incorrect choice is a failure); the chosen action then passes through the “opportunity to perform” factors – access to the operating area, equipment operability (availability), system information available, and time to act – any of which can produce failure, with success on both phases yielding successful task performance.]

Figure 3. The OHPRA Operations Model

Time to act
Deals with measured or predicted time envelopes to complete specific actions, external influences that modify performance times, etc. The window of time that bounds the duration available for the operator to perform actions.

System information available
Deals with instrumentation, status monitoring, information reporting, etc. that provide the information the operator needs to make decisions and confirm the status of actions taken. Operator inputs, including cues to action and system information feedback, enable the operator to process what, when, and how actions are to be taken. Information includes process instrumentation, procedural guidance, visual observations, shift crew communications, etc.

Note that there is a feedback loop from the action phase back to the cognitive phase that is dependent on information feedback from the system to the operator when actions are taken. Part of the assessment is to evaluate the quality of operator feedback from alarm and display systems, etc.

A structured subjective evaluation process, with detailed questions, guides the analyst through consideration of each of the factors and “sub-factors” to arrive at an estimate of “zero”, “low”, “medium” or “high” as the likelihood of success. Management can then set its own “criterion level” for the level of success likelihood that warrants action.
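To make the screening logic concrete, the sketch below mechanizes a rating for a single task from hypothetical SME inputs. The factor names follow Figure 3, but the numeric ordering of the ratings and the conservative min() aggregation (a portion is only as strong as its weakest factor) are illustrative assumptions on our part, not the published OHPRA scoring rules.

```python
from enum import IntEnum

class Likelihood(IntEnum):
    """Qualitative likelihood-of-success ratings used in the screening."""
    ZERO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical SME ratings for one operator task (factor names from Figure 3).
cognitive_factors = {
    "cue to action": Likelihood.HIGH,
    "knowledge and abilities": Likelihood.MEDIUM,
    "stress (time demands)": Likelihood.MEDIUM,
}
action_factors = {
    "access to operating area": Likelihood.HIGH,
    "equipment operability (availability)": Likelihood.LOW,
    "system information available": Likelihood.HIGH,
    "time to act": Likelihood.MEDIUM,
}

def portion_rating(factors):
    # Assumed aggregation: a portion is only as strong as its weakest factor.
    return min(factors.values())

# Task success requires success on BOTH the cognitive and action portions.
task_rating = min(portion_rating(cognitive_factors),
                  portion_rating(action_factors))

CRITERION = Likelihood.MEDIUM   # management-set threshold warranting action
if task_rating < CRITERION:
    print(f"Task rated {task_rating.name}: error-likely, improvement warranted")
```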


The OHPRA approach makes use of the judgment and experience of subject matter experts (SMEs), in particular the operations and maintenance personnel in the facility being assessed, who have in-depth knowledge of that facility’s processes and systems. The OHPRA analyst is a facilitator who has both operations/maintenance experience and an understanding of human reliability analysis. The conceptual model described above and the associated detailed guides are used not as a rigid “checklist”, but as an aid for both the facilitator and the SMEs. Interviews and in-plant walk-downs are conducted both to provide context and to validate expert input. Additional supporting information is obtained from multiple sources, e.g., results from process hazards analysis; unit operating, startup, shutdown, and safing procedures; maintenance procedures; incident reports; unit-specific and general employee training materials; unit process diagrams; operations and maintenance departmental directives; and federal and state regulations (OSHA, EPA, etc.).

The OHPRA analysis is focused on those individuals who can come in direct physical contact with the unit processes, namely operators and maintenance personnel. Other departments or services, such as management, engineering, planning and scheduling, warehouse, industrial hygiene, safety, quality assurance, etc., are not excluded from consideration; their contribution to risk is assessed as an influence on the operations or maintenance worker.

OHPRA and the qualitative portions of similar HRA methods can be used effectively by knowledgeable subject matter experts with a modest amount of training. Such approaches provide a systematic framework for assessment of human error and the factors influencing human performance. They can provide meaningful assessment of the likelihood of error and can suggest areas of improvement for reducing that likelihood without the time and expense of detailed quantitative analysis.

Human Error Assessment in Root Cause Analysis

Most companies now routinely employ a structured root cause analysis process to identify causes of significant incidents. The methods may be as “simple” as “why trees”, but typically involve some sort of logic tree structure combined with well-established techniques such as “events and causal factors analysis”, “barrier analysis”, and “change analysis”. Many companies also are incorporating techniques or additional modules in their root cause analysis protocols to dig deeper and ferret out the underlying systemic factors that lead to significant human error (i.e., factors that create conditions in which human error is more likely and/or more consequential). The OHPRA model is one such human error assessment technique, and it can be very powerful, particularly when the event and the human actions involved are complex and causal factors are difficult to identify. In our view, the choice of approach is not as important as the skills and experience of the analysts, nor as important as the management commitment and culture that supports root cause analysis, critical self-evaluation and peer evaluation, and open communication in a non-punitive environment.


AREA 2: CONDUCT OF OPERATIONS

Intelligence, flexibility, and adaptability are strengths of human beings. They allow us to respond to new situations, evaluate alternatives, adjust to adverse conditions, make judgments with less-than-complete information, and perform tasks that machines, even computers, cannot do very well. However, these same qualities can lead to a high degree of variability in human performance in process systems. And that variability can be a significant cause of inefficiencies and errors. Inconsistency in performance from facility to facility, day to day, shift to shift, and person to person tends to increase the likelihood of error. Thus there is a tradeoff between establishing formal, highly structured controls on human performance and allowing humans the flexibility to do what they do best – think. We want intelligent and qualified operators to run the plant responsibly and responsively. We don’t want robots for operators, but we also don’t want completely “seat-of-the-pants” flying.

A good Conduct of Operations (ConOps) program can be viewed (Figure 4) as a control system that appropriately tightens the boundaries on allowed human performance. It permits appropriate variability for practical operations, while maintaining performance within a desired safety envelope. A good ConOps program also encourages and supports a culture of self-discipline and professionalism, which is the core of a safety culture.

[Figure 4 depicts human performance variability as a distribution bounded by safety boundaries and design limits: ConOps policies narrow the allowed band of variability from minimum performance (compliance) toward the optimum, operational excellence.]

Figure 4. Conduct of Operations Policies Act to Control Human Variability

Formal documentation of a ConOps policy, together with training on ConOps requirements, is a powerful management tool. Documentation should include requirements and expectations for safe and effective performance in both routine, day-to-day operations and emergencies. Examples of items to be included are:


• compliance with written procedures
• use of “in-hand” or “reference” procedures
• modifying procedures
• exchange of information at shift turnover
• use of “closed-loop” communications
• required reading
• on-shift training
• control room/area activities
• control of equipment and system status
• inhibiting safety systems
• completing operating logs
• emergency response drills
• equipment and piping labeling.

AREA 3: PERSONNEL SUBSYSTEMS

The foundation for excellence in human performance is the underlying knowledge, skills, abilities, and attitudes of the work force. To apply systems performance engineering to the area of personnel qualifications, we first must recognize that “personnel subsystems” are an integral part of the total system. Further, we need to understand that the human performance required to meet system performance requirements can be identified, made explicit, engineered, measured, and demonstrated, just as hardware or software requirements can. Performance-based design of the personnel subsystem is a key to excellence in operations.

The ISD Process

The general framework that has proven most effective for performance-based design of training and qualification programs is the Instructional Systems Design (ISD) process, first publicized in this country by the U.S. Air Force [3]. It has been referred to as the Systems (or Systematic) Approach to Training (SAT) and by other similar names, but all of these approaches fundamentally are representations of the ISD process. The process typically is described at the highest level as consisting of five interrelated steps or phases (Figure 5). In practice, the activities in these phases are continuous and iterative.

[Figure 5 depicts the ISD process as a continuous cycle of five interrelated phases – Analysis, Design, Development, Implementation, and Evaluation – tied together by systems analysis.]

Figure 5. The Instructional Systems Design (ISD) Process

The analysis phase identifies and documents the performance required and the underlying human capabilities – knowledge, skills, and abilities (KSAs) – necessary to accomplish the functions that have been allocated to humans. The primary activity in the analysis phase, and a critical activity for the entire human performance engineering program, is Job/Task Analysis (JTA). JTA (in varying forms) provides the fundamental information base that supports job design, training and qualification, procedures, human-machine interface design, human error assessment, and human performance measurement. JTA basically identifies who does what and how well they need to do it. It identifies the specific tasks that individuals must perform to accomplish assigned functions under the complete range of system states (startup, normal operations, abnormal/emergency operations, shutdown). JTA also provides information on the key factors that influence human performance – the system information required by the operator, the environment and conditions of operation, necessary tools/aids, etc. JTA thus provides the information necessary to determine the required KSAs and the criteria, or measures, of performance. Training is then designed to assure that the necessary KSAs – the necessary performance capability – have been attained and are maintained. Thus training is focused on the performance required to do the job as specified, and only on that performance.
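As a concrete illustration of the information a JTA record carries, the sketch below defines a simple data structure whose fields mirror the outputs named above. The field names, types, and the example task are hypothetical illustrations, not a standard JTA schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaskAnalysisRecord:
    """Hypothetical JTA record; fields mirror the outputs named in the text."""
    task: str                          # who does what
    position: str
    system_states: list[str]           # startup, normal, abnormal/emergency, shutdown
    required_ksas: list[str]           # knowledge, skills, and abilities
    performance_criteria: str          # how well it must be done
    information_required: list[str]    # system information the operator needs
    conditions: str                    # environment and conditions of operation
    tools_aids: list[str] = field(default_factory=list)

record = TaskAnalysisRecord(
    task="Place standby pump in service",
    position="Outside Operator",
    system_states=["normal operations"],
    required_ksas=["seal flush lineup", "valve sequencing"],
    performance_criteria="complete lineup per procedure within 15 minutes",
    information_required=["discharge pressure", "seal pot level"],
    conditions="field, all weather",
    tools_aids=["written procedure", "radio"],
)
print(record.task, "->", record.performance_criteria)
```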

Major activities in the design phase of ISD include: (1) assessing the “entry-level” capabilities of the trainee population, identifying the gap between entry-level and desired level, and identifying where training is the preferred solution for closing that gap (rather than, say, increasing entry-level requirements); (2) determining the best “setting” for training (e.g., simulator, classroom or on-the-job); (3) developing terminal learning objectives; (4) developing the training/evaluation standards that assure linkage of the training to the objectives (this involves determining the enabling objectives that support the terminal objective, performance tests or measures, KSAs being tested in each task element, conditions and standards for the test, scoring methods, etc.); and, (5) documenting the training plan for the specific training program being designed.

The development phase is where the training content is developed/assembled and the training materials are produced. Development involves refining the media selection, developing student and instructor lesson plans, developing lesson materials and training support materials, developing test items, and conducting training tryouts, or “pilot courses”.

The implementation phase is the actual delivery of the training, including testing and in-training evaluation. Activities include pre-testing trainees, instructor preparation, delivering the lessons, evaluating trainee performance, collecting training effectiveness evaluation data, and documenting completed training.


Evaluation is actually continuous and iterative throughout the entire training process. The goal is to continually measure and improve the effectiveness of that process. Evaluation involves identifying appropriate measures, collecting information, analyzing it to identify systemic problems, and implementing corrective actions. Indicators of training effectiveness come from multiple sources at different levels: trainee test performance, test statistics and trends, trainee evaluations, instructor evaluations, post-training surveys, supervisor feedback, on-the-job performance observations, incident reports, and other sources of information.

AREA 4: PROCEDURES

Why do we write procedures? How do we expect procedures to be used? Procedures are an operator aid. They are designed to help qualified, well-trained operators perform required tasks correctly and efficiently. Procedures are intended to be used – on the job.

The purpose of training is to bring the individual’s knowledge, skills, abilities, and attitudes from their “entry-level” state to the level required to meet overall system performance requirements. Training should provide the “why” of the procedure: the process knowledge, understanding of system response, principles of equipment design and operation, safety implications of operator actions, and many other basic elements of knowledge that are essential to an effective operator. Training most certainly also should provide the “how” of procedures: practice on exactly what steps are to be carried out to perform specific tasks in the optimum manner. Thus training on procedures is absolutely essential, and using the actual procedure as part of training is important.

However, procedures are NOT primarily training material. Indeed, properly designed procedures usually make poor training materials, and well-designed training materials do not make good procedures.

All operations need to conform to the approved procedure. In some cases – where tasks involve many steps or high complexity, where errors have significant consequences, etc. – every operator should be required to have the procedure “in-hand”, and in some cases step-by-step check-off or sign-off should be required. In other cases – where experienced operators perform the task regularly and the complexity and consequences are not as high – reference procedures are appropriate. These can be used as required or optional reading prior to performing an infrequently performed task.

We recommend using a “risk-based” assessment to categorize procedures according to their required “level of usage”. Such an evaluation should include at least the usual “Criticality/Frequency/Difficulty” ratings of tasks commonly performed as part of the training needs assessment: What are the consequences of an error? How frequently is the task performed? How complex is the task? Figure 6 illustrates a spreadsheet tool we have used to classify procedures. In this example there are only three “levels” of procedures: (1) “critical”, to be used “in-hand” at all times, perhaps with some or all portions requiring check-off; (2) “reference”, to be referred to prior to completing the task and used in-hand as necessary; or (3) no written procedure required.

NOTE: Procedures are an operator aid, NOT a lesson plan.

FREQUENCY OF USE

Frequency        Point Value
Daily            3
Weekly           2
Monthly          1
Semi-Annually    2
Annually         3

Frequency Score = Point Value × 0.8 (weighting factor)

TASK COMPLEXITY (each descriptor rated on a 1-3 point scale)

Information Access     (How difficult is it to get information to perform the task?)
Mental Loading         (How great is the “mental workload” to complete the task?)
Physical Loading       (How complex are the physical actions required?)
Communication Loading  (How complex and demanding are communication requirements?)
Stress                 (How great is the psychological stress due to time demands, abnormal conditions, or exposure to hazardous conditions?)

Complexity Score = Total of the five ratings ÷ 5

CONSEQUENCE

Descriptor                                                                            Point Scale
None (No Safety, Health, Quality or Environmental Concerns)                           0
Minor (First Aid, Minor Environmental Impact, Decrease in Product Quality, etc.)      2
Moderate (Loggable Injury, Some Environmental Damage, etc.)                           4
Major (Lost Time Injury, Moderate Equipment Damage, Major Drop in Quality, etc.)      6
Extreme (Fatality, Significant Equipment Damage, Severe Environmental Damage, etc.)   8

TOTAL SCORE = Frequency Score + Complexity Score + Consequence Score

PROCEDURE CLASSIFICATION

IF SCORE IS:  0 to 5    THEN CLASSIFY AS:  NO PROCEDURE
              >5 to 8                      REFERENCE
              >8                           CRITICAL

Figure 6. Illustration of a Risk-Based Assessment Tool to Categorize Procedure Usage
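The arithmetic in Figure 6 is straightforward to mechanize. The sketch below reproduces the point values, the 0.8 frequency weighting, the divide-by-five complexity average, and the classification thresholds from the figure; the assumption that the total is the simple sum of the three component scores is ours, inferred from the spreadsheet layout.

```python
# Point values and thresholds transcribed from Figure 6.
FREQUENCY_POINTS = {"daily": 3, "weekly": 2, "monthly": 1,
                    "semi-annually": 2, "annually": 3}
CONSEQUENCE_POINTS = {"none": 0, "minor": 2, "moderate": 4,
                      "major": 6, "extreme": 8}

def classify_procedure(frequency, complexity_ratings, consequence):
    """complexity_ratings: five 1-3 ratings (information access, mental,
    physical, and communication loading, and stress)."""
    freq_score = FREQUENCY_POINTS[frequency] * 0.8        # weighting factor
    complexity_score = sum(complexity_ratings) / 5        # average of five ratings
    consequence_score = CONSEQUENCE_POINTS[consequence]
    total = freq_score + complexity_score + consequence_score  # assumed sum
    if total <= 5:
        level = "NO PROCEDURE"
    elif total <= 8:
        level = "REFERENCE"
    else:
        level = "CRITICAL"
    return total, level

# Example: an annual, moderately complex task with major consequences.
total, level = classify_procedure("annually", [2, 3, 2, 2, 3], "major")
print(f"score = {total:.1f} -> {level}")   # score = 10.8 -> CRITICAL
```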

Human Factors Guidelines for Procedure Writing

If we demand procedural compliance, we must design procedures that give clear, precise directions, are easy to understand, and are easy to use. Human factors guidelines for procedure writing capture knowledge from cognitive science, information technology, learning, and related fields, as well as actual experience with errors committed in following procedures. They provide practical guidance on how to present operators with the necessary information in the most usable format. Guidelines for formatting, place-keeping, caution and warning statements, action steps, etc. are available [4] to help write procedures that reduce the likelihood of errors in executing tasks under even the most severe conditions. Application of these guidelines can dramatically reduce the frequency of errors of omission and errors of commission in executing procedures.

We encourage clients to involve a broad, representative spectrum of operators, together with human factors specialists, in developing a procedure writers’ guide that standardizes procedure-writing practice throughout the facility, and even throughout the corporation. Standardization helps make procedure writing and the management-of-change process more cost-effective, and helps minimize problems when people transfer from one unit or site to another.

Verification and Validation

Procedure verification has to do with the technical accuracy of procedures. It requires input from personnel in engineering, safety management, or other areas who understand process design, control systems, plant response to off-normal conditions and accidents, etc. Most facilities provide some sort of engineering or technical review of procedures for purposes of verification. Fewer facilities have systematic programs to validate procedures.

Validation has to do with clarity, understandability, and usability, which depend on the user and on the context and conditions under which procedures are used. It is risky to have procedures designed by engineers but used by operators. In general, operators and engineers understand and visualize systems differently; they have different purposes in mind, different levels of formal education, different affinities for abstract reasoning, different cultures, and different language. And operators under actual plant conditions, especially abnormal conditions, are under much higher stress and system demands than an engineer at a desk.

The process of validation by “simulation” – walking the procedure down in the plant, with the least-qualified operator authorized to use the procedure independently, under conditions and time constraints as near to actual as possible – can dramatically reduce the likelihood of errors in procedures. Plants that routinely conduct thorough validations find that a procedure is rarely “perfect” as written “at the desk”; validation almost always improves it.

Implementation of a systematic, user-centered process for design, development, implementation, and continual evaluation will produce procedures that are consistent in structure, easily understood, and easily followed by knowledgeable operators. Technically accurate, up-to-date procedures that are human-engineered for effective use are one of the primary defenses against operational errors. Well-designed procedures that are used, and do not simply sit on the shelf, WILL reduce human errors and enhance process safety.

AREA 5: ERGONOMICS

Human factors specialists promote the viewpoint that machines are designed to help humans accomplish certain goals and therefore should be designed to match human capabilities and limitations, behavioral tendencies, and preferences. Figure 7 illustrates the issues addressed by human factors engineers in the systems engineering design process, from top-level system/mission goals, through function allocation and analysis, job/organizational design, and personnel subsystem design, to human interface design.

Practical Applications of Human Factors for Process Safety Improvement

The field of Human Factors draws from a broad spectrum of disciplines and scientific and engineering specialty areas. Perception, cognition, learning, memory, motivation, anthropometry, and speech communication are some of the required subspecialties in psychology and physiology. Engineering expertise in design of visual, auditory, and tactile displays, controls, biomechanics, noise, illumination, vibration, and other areas also is required. Very often, the expertise in psychology and engineering is shared among team members, and human factors specialists usually work best in a team environment providing input and feedback to engineering designers.

[Figure 7 maps the levels of human factors activity in design: System (mission success, system performance requirements, user needs/satisfaction), Functions (requirements analysis; allocation to human, hardware, software, facilities), Tasks/Jobs (task requirements, conditions, demands on humans – time, accuracy, precision, perception, vigilance), Personnel (capabilities and limitations – knowledge, skills, abilities, attitudes; physiological, psychological, cognitive, social), and Interface (facilities, workspace, controls, displays, habitability, accessibility, protective equipment).]

Figure 7. Human Factors Activities in Systems Engineering Design

The plant manager or process safety manager committed to improving safety and plant performance by improving human performance rarely has this kind of specialized expertise readily available at the plant site, and only in relatively few corporations is it accessible from corporate headquarters. Fortunately, much of the accumulated knowledge and experience from application of human factors (or failure to apply human factors) in design has been codified by human factors specialists in design guidelines, and further synthesized into checklists and evaluation guides [5,6,7] that address each of the basic areas of human factors design, such as:

• Workspace layout
• Controls and display design
• Alarms and annunciators
• Environmental factors such as noise, illumination, temperature
• Communication systems
• Equipment and piping labeling and color coding
• Protective clothing


Since most human-equipment interfaces in modern technology involve a computer, a major area of human factors engineering has evolved into its own specialty field, Human-Computer Interaction, with its own university courses, professional organizations, cross-disciplinary specialists, and specific design guidelines [8,9].

These human factors and human-computer interaction guidelines and checklists do not provide the expertise of an experienced human factors design engineer, but they do provide a practical way to compare an existing design with good design practice. With a modest amount of human factors training and practice, individuals knowledgeable of the system can use them effectively to at least identify less-than-adequate human-machine design and potential error-inducing conditions. Often, practical, cost-effective modifications present themselves as soon as the problem is identified. In other cases, it will be necessary to bring in the deep expertise of a skilled human factors engineer. But even in those cases, solutions will be facilitated by having in-plant personnel familiar with human factors principles and guidelines who can serve as an interface, providing process systems knowledge to the human factors expert and practical feedback to plant personnel. Small investments in human factors training and in-plant practice can pay great dividends in error reduction, process safety, personnel safety, and overall operational performance.

AREA 6: MEASUREMENT AND FEEDBACK

In the early to mid-1990s, Peter Senge and other authors [10,11,12] began to emphasize the critical need for “systems thinking” in management and organizations, and the ideal of a “learning organization”. A key characteristic of a learning organization is a culture that supports and thrives on learning from experience – from successes and from failures. Certainly such a culture is key to employing a systems approach to reducing human error and improving process safety. Incident investigation and root cause analysis, discussed earlier, are the central pillars of a system for experiential learning. However, we encourage clients to develop a comprehensive program of performance measurement, causal analysis, and retrospective analysis of multiple sources of operational information. Some examples of additional sources of operational experience information are:

• Near-miss reports, self-reporting of corrected errors, successful interventions
• Narrative and operating log reviews
• Maintenance requests, trends, equipment histories, equipment failure reports
• Process hazards analysis, risk assessments, safety reviews
• Internal and external evaluations and audits
• Training evaluations and feedback
• Emergency drills
• Review of applicable events at other sites, other companies, other industries
• “Behavioral” safety programs in which safety behaviors are systematically sampled
• Safety meetings, quality meetings, and other open discussion forums
• Systematic involvement of senior operations staff to capture experience
• Performance measures.


The quality of information available in these sources, of course, depends on time available, training, and management support. A good information management system to aid not only collection, but analysis, synthesis, and dissemination of information is essential. Above all, success in the lessons learned program depends on maintaining a culture that values learning, that rewards self-critical evaluation, and that fosters open, punishment-free communication. Figure 8 illustrates the information flow in such a comprehensive operational experience feedback program. Performance measurement is a special area that bears further discussion.

[Figure 8 shows operational experience sources (audits, self-assessments, performance measures, incident reports, near-miss reports, industry events, government reports, ER drills, training evaluations, self-reports, PHA findings) feeding analysis activities (root cause, performance trends, reliability analysis, human error analysis, equipment failure analysis, training needs, systems analysis) through an information management system to produce lessons learned.]

Figure 8. Operational Performance Feedback System

Human Performance Measurement

No engineer or CEO would think of constructing a process plant without clear performance specifications and comprehensive measurement of process variables. Yet management systems and human performance systems in the past routinely have been “engineered” with virtually no clear performance requirements or measures. The total quality management movement of the 1980s dramatically changed viewpoints on the need for measures of management systems, and most companies have invested substantial resources in developing performance measures for organizations and processes. Managers today recognize the power of measures not only to assess performance, but to lead people to the desired levels of performance. Most organizations recognize that a “family of measures” [13] covering multiple levels and involving multiple types of measures – input measures, process measures, output measures, outcome measures – is required to provide a complete picture of performance.

Rummler and Brache [14] discuss three levels of performance measurement and three areas of performance for each level (see Figure 9). A comprehensive performance measurement program will address goals, design, and management for organizations, for processes, and for jobs/individuals. The American Institute of Chemical Engineers Center for Chemical Process Safety (CCPS) program for development of measures for Process Safety Management (PSM) [15] is an example of a state-of-the-art performance measurement system that addresses multiple levels in the area of process safety management.

[Figure 9 depicts a three-by-three matrix of measures: goals, design, and management at each of three levels – organization, process, and job/individual.]

Figure 9. Multiple Levels of Measures

The CCPS measure set is being constructed using the Authority Referenced Measures System (ARMS) approach, originally developed by Connelly [16], to build an integrated set of “real-time” measures based on capturing and quantifying the judgment of CCPS authorities in process safety management. There are three basic steps to building a measure using ARMS (a minimal code sketch follows the list):

1. Extracting the authority’s knowledge and judgment to define a structure for the measure (factors, subfactors, and indicators of importance to the performance),

2. Capturing the authority’s judgment and preferences by having the authority rate example performances, and

3. Synthesizing a mathematical measure capable of reproducing the performance ratings the authority would make, given the same examples of performance.
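The sketch below illustrates the flavor of step 3 under a simplifying assumption of ours: that the synthesized measure is a weighted sum over the structure’s factors, fit by least squares to the authority’s example ratings. Connelly’s actual ARMS mathematics may differ, and the factor scores and ratings shown are invented for illustration.

```python
import numpy as np

# Step 1 (assumed structure): factor scores for example performances,
# one row per example performance, one column per factor/indicator.
factor_scores = np.array([
    [3, 2, 3],   # example performance A
    [1, 1, 2],   # example performance B
    [2, 3, 1],   # example performance C
    [3, 3, 3],   # example performance D
])

# Step 2: the authority's overall ratings of those same example performances.
authority_ratings = np.array([8.0, 4.0, 6.0, 9.5])

# Step 3: synthesize the measure - least-squares weights (plus an intercept)
# that reproduce the authority's ratings from the factor scores.
X = np.hstack([factor_scores, np.ones((len(factor_scores), 1))])
weights, *_ = np.linalg.lstsq(X, authority_ratings, rcond=None)

def measure(scores):
    """Rate a new performance approximately as the authority would."""
    return float(np.append(scores, 1.0) @ weights)

print(round(measure([2, 2, 2]), 2))   # rating for a hypothetical new performance
```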

Usually, a single authority is used to build a “strawman” measure that can then be reviewed and modified by other authorities. The authority’s judgment is compared to existing structures and available data to further assure completeness and consistency. For example, in the case of a job/individual measure, the JTA performed as part of a formal training analysis would provide the basic data for job performance measurement, and the authority’s subjective measure should be consistent and compatible with the JTA database.

The review of the strawman measure usually is conducted first in a small-group environment and then with a broader audience, to assure completeness and to identify critical differences in rating strategies. For example, a performance measure built for the Senior Operator position used one Senior Operator to build the strawman; three Senior Operators to provide peer review and “validation”; review by supervisors and operators, to provide the perspectives of individuals above and below the Senior Operator in the organization; and review by Senior Operators at other facilities in the corporation, to assure the measure was applicable to all facilities.

After initial development, a trial field use is recommended to test both the effectiveness and the practicality of the measure, and to further obtain “buy-in” from the population being measured. After necessary adjustments, the measure is fully implemented, with continual review and evaluation to adjust it for changing conditions.

The same basic ARMS process is used to build measures of organizational effectiveness, program/process effectiveness, and job/individual effectiveness. The result is an integrated set of measures that provides continual feedback on human and organizational effectiveness. The measurement process provides early feedback on developing problems and serves as an ongoing means of communication about performance throughout the organization.

The routine, continuous feedback from the performance measurement process, combined with causal analysis of events and retrospective analysis of other sources, provides a comprehensive base of information for learning and continuous improvement in safety and operational effectiveness.

IMPLEMENTATION OF THE HPE PROCESS

Figure 10 illustrates a typical sequence we have used to implement a human performance engineering program in an operating facility. The initial step is an assessment of the existing state of the “human systems”. A team with expertise in each of the six areas summarized above works intensively with facility personnel at all levels to perform a diagnostic evaluation of current human performance systems. The evaluation includes review of program documentation, review of incidents, interviews with management and staff at multiple levels, evaluation “checklists” and similar tools, in-plant walk-downs, and independent observation of on-the-job performance. The assessment focuses on the effectiveness of the underlying management systems and processes that support human performance, but also captures information on specific implementation problems. The evaluation typically requires a week to ten days at a process plant. Strengths and weaknesses are assessed, and a summary report of the issues identified, with recommended solutions, is prepared.

Some of the specific “implementation” problems can be addressed immediately, often at very limited cost. Often, there are structural or “systemic” problems that need to be resolved before specific improvement actions can be expected to succeed. Training development, procedures design and use, and other areas of human performance typically have not been developed under requirements and formal structure as rigorous as those applied to the hardware and process side of the system, though there has been substantial progress in the past five years as PSM programs have matured. Even at facilities with rather sophisticated and mature PSM programs, we often find two areas that have not been adequately addressed – ConOps and Measures. These two areas address the most basic of management requirements: (1) to tell people exactly what you expect of them, and (2) to follow up, find out whether they are performing as expected, and provide them feedback. Improvements in these areas can produce substantial performance improvement without extensive rework or cost.

[Figure 10 shows the implementation flow: assess existing systems and produce an evaluation report; systemic problems feed program requirements and system design across the six areas (human error, ConOps, training, procedures, HMI, measures/feedback); specific problems receive near-term fixes through the MOC program; both paths converge on a continuous improvement program.]

Figure 10. Implementing an HPE Program at an Operating Facility

The final result of the human performance engineering implementation is a continuous performance improvement effort following the general model presented earlier in Figure 2. This systems performance engineering model is used to continually assess performance requirements, measure performance, and make improvements as necessary. It is a framework for maintaining the highest levels of human performance, safety, and operational excellence.

CONCLUSION

Long-term, fundamental improvement in human performance requires identification and resolution of the systemic problems that lead to human error. This paper has presented a comprehensive framework for identifying weaknesses and developing improvements to the underlying structures that influence human performance. We refer to this framework as Human Performance Engineering: the application of the systems engineering viewpoint to human performance in an operating system. The paper discussed six areas of integrated technical activity that comprise a practical HPE program.

Many of the human performance issues in these six areas have been addressed by chemical process and petroleum facilities, in some cases in considerable depth, using various industry process safety management (PSM) frameworks [17,18,19,20]. However, these efforts have not always been integrated and presented in a way that provides a comprehensive solution to the problem of human error and its impact on safety and operational performance. Many managers express dismay that they have made substantial investments in PSM initiatives and continue to have accidents. As they reflect, they are realizing that their efforts focused heavily on equipment and process issues and far less on the human side of the system. Many are becoming keenly aware that most significant incidents involve human error as the root cause or a significant contributing causal factor. Yet few have the technical background or guidance to determine exactly what to do to reduce human error.

We believe that implementation of a comprehensive human performance engineering program with the elements outlined in this paper will minimize the frequency and consequences of human error and thereby enhance process safety. Further, it will help establish a culture of continual improvement in human performance – a culture of “operational excellence” – which can result in dramatic improvements in overall system effectiveness and productivity.


REFERENCES

1. Haas, P.M., P.J. Swanson, and E.M. Connelly, “Operational Human Performance Reliability Assessment,” Proceedings of the Sixteenth Reactor Operations International Topical Meeting, Long Island, NY, 1993.

2. Swanson, P.J. and P.M. Haas, “Operational Human Performance Reliability Analysis (OHPRA), A Tool for the Assessment of Human Factors Contribution to Safe and Reliable Process Plant Operation,” CA/TR-93003, Concord Associates, Inc., 1993.

3. Handbook for Designers of Instructional Systems, AFP50-58, U.S. Department of the Air Force, 1974.

4. Guidelines for Writing Effective Operating and Maintenance Procedures, Center for Chemical Process Safety, AIChE, New York, NY, 1996.

5. Human System Interface Design Review Guideline, NUREG-0700, U.S. Nuclear Regulatory Commission, 1996.

6. American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988), The Human Factors Society, Santa Monica, CA, 1988.

7. Human Engineering Design Criteria for Military Systems, Equipment, and Facilities. MIL-STD-1472D, U.S. Department of Defense.

8. Brown, C.M., Human-Computer Interface Design Guidelines, Ablex Publishing Corp., Norwood, NJ, 1988.

9. Gilmore, W.E., Gertman, D.I., Blackman, H.S., User-Computer Interface in Process Control, Academic Press, San Diego, CA, 1989.

10. Senge, Peter M., The Fifth Discipline: The Art and Practice of the Learning Organization, Currency Doubleday, New York, NY, 1990.

11. Senge, Peter M., Art Kleiner, Charlotte Roberts, Richard B. Ross, and Bryan J. Smith, The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization, Doubleday, New York, NY, 1994.

12. Chawla, Sarita, and John Renesch, Editors, Learning Organizations: Developing Cultures for Tomorrow’s Workplace, Productivity Press, Portland, OR, 1995.

13. Thor, Carl G., The Measure of Success: Creating a High Performance Organization, Oliver Wight Publications, Inc., Essex Junction, VT, 1994.


14. Rummler, Geary A., and Alan P. Brache, Improving Performance: How to Manage the White Space on the Organization Chart, Second Edition, Jossey-Bass Publishers, San Francisco, CA, 1995.

15. Campbell, D.J., E.M. Connelly, J.S. Arendt, B.G. Perry, and S. Schreiber, “Performance Measurement of Process Safety Management Systems,” International Conference and Workshop on Reliability and Risk Management, Center for Chemical Process Safety, AIChE, 1998.

16. Connelly, E.M., “A Theory of Human Performance Assessment,” Proceedings of the Human Factors Society, 31st Annual Meeting, 1987.

17. Guidelines for Technical Management of Chemical Process Safety, Center for Chemical Process Safety, AIChE, New York, 1989.

18. Process Safety Management, Chemical Manufacturers Association, Washington, DC, 1985.

19. 29 CFR 1910.119, Process Safety Management of Highly Hazardous Chemicals, Occupational Safety and Health Administration, Washington, DC, 1992.

20. American Petroleum Institute Recommended Practice 750, Management of Process Hazards, 1st ed., Washington, DC, 1990.
