




HUMAN FACTORS IN DISTRIBUTED PROCESS CONTROL

David A. Strobhar
Beville Engineering, Inc.

280 Regency Ridge Drive, Suite 2000

Dayton, Ohio 45459
(937) 434-1093

ABSTRACT

The advent of distributed process control has significantly changed the manner in which the refinery operator interacts with the process. Human factors engineering is the branch of engineering dedicated to the analysis and specification of human interactions with complex systems. The basic principles of human factors engineering are discussed. Key concerns and trends relative to distributed process control, operator performance, and system safety are set forth.

Picture yourself on a cool winter evening. You decide to make some tea, so you go to your kitchen, put a pan of water on the stove, and leave. Several minutes later you come back to see a pan of cool water sitting on one burner while the burner behind it is red-hot. You might curse yourself for your stupidity, then turn on the correct burner.

A simple case of human error. Yet it is a common error that most people have experienced, and it is with a system that we use almost daily. It is a non-threatening error in the kitchen, but if the red-hot burner were a group of tubes in a furnace and the cool water the charge to that furnace, the simple error may become catastrophic.

Why do people make these frequent mistakes with a system on which they are well trained? The problem in this example is that the arrangement of the displays (i.e., the burners) is incompatible with the arrangement of the controls. The displays are usually arranged in an array, but the controls are arranged linearly. The controls and displays are not properly integrated.

The problem is not exclusive to the household. A coking unit at one refinery was shut down when a field operator turned a valve whose flow meter had been installed upside down. The operator thought they were increasing flow when in fact they were shutting it off.

Both of the previous examples are what are referred to in human factors engineering as design-induced operator error. The design of the system helped to produce the error, and the error could in fact have been predicted from the poor design. It was design-induced operator errors that led to the development of human factors engineering. The U.S. Air Force found in World War II that the sophisticated machines of war were being limited by the people who operated them. Aircraft could not be consistently flown at design specifications because of operator error. In fact, more pilots died in WWII during training than in combat, often due to poor selection or system design. In most cases, the design of the man-machine system interface (e.g., instrumentation) did not account for human characteristics and limitations.


Due to the operator-limited nature of their systems, the Air Force began to incorporate characteristics of the human operator into the design of the system. This has allowed the aerospace industry to progress from the Wright B Flyer to the F-14 Tomcat without increasing the number of operators (i.e., pilots) required.

Key words: human factors engineering, control center consolidation, display design, distributed control.

The need to consider human factors in the design of the system is increasing in the refining industry. Much of the need can be attributed to the introduction of distributed process control systems, which radically alter the operator-process interface. Proper consideration of human performance characteristics, the human factor, in the design of a system enhances system usability, performance, and effectiveness.

There are certain basic principles or characteristics of human behavior that need to be considered in the design of a system. Figure 1 is a model of human performance. In the model, process information is detected by instruments, which must be selected for viewing by the operator. Once viewed, the data from the instruments enters the operator sub-system, is processed, and a response is made through a set of controls. All of this occurs in some sort of environment, with a unique combination of environmental factors such as light, noise, and people.

While all aspects of the man-machine system are important (the lighting, noise, display selection, etc.), it is understanding the activity in the operator sub-system that often plays the most important part in preventing design-induced operator error. The operator's information processing system, composed of short-term and long-term memory, plays a major role in how people interact with complex systems.

Information from our environment (displays) enters short-term memory, or what is commonly referred to as our conscious mind. Short-term memory is a capacity-limited system: it can hold only about seven chunks of information at one time. If more information is presented, either previous information will be dropped or the new information will be ignored. The goal of the human factors engineer is to prevent overloading the short-term memory system by putting as much data into a single "chunk" of information as possible. An example of chunking data is the old chemistry mnemonic LEO GER (loss of electrons is oxidation, gain of electrons is reduction). The mnemonic allows a large amount of data to be contained in two manageable chunks of information.

Once information is in short-term memory, mental resources must be available in order to do anything with it. Mental resources, or mental workload capacity, are also fixed, but only in the short term. As a person tries to solve a problem or process information, they use some of the mental workload reserve. The more complex the processing, the more of the reserve is used. Stress also consumes some of the reserve, reducing its availability for information processing. However, it is possible to dedicate some of the reserve to specific processing tasks.


Figure 1. Model of Operator-Process Interaction And Characteristics


Most people have experienced the "cocktail party phenomenon": you are engaged in a conversation, oblivious to the conversations around you, until someone says your name. The piece of mental workload capacity that has been reserved for name recognition comes into play. Just as part of the reserve can be dedicated to name recognition, so too can it be dedicated to other information processing tasks.

If complex problem solving is required, long-term memory will more than likely be utilized. Long-term memory is what we usually think of as memory: the compilation of our training, knowledge, and experience. A person's ability to utilize the information in long-term memory is partially a function of how it is structured. If information is stored in long-term memory in an erroneous model, or not in the same form in which it will be accessed, then retrieval of that information will be difficult.

Consider the operators at Three Mile Island. They had been trained that the pressurizer on their pressurized water reactor was a valid representation of coolant inventory. No one had prepared them for a leak out the top of the pressurizer. The data was stored improperly in their long-term memory. So when the leak occurred at the top of the pressurizer and they tried to diagnose, or solve, the problem, they interpreted the rise in pressurizer level as an excess of coolant in the system. Their model of how the system functioned was erroneous and produced erroneous results (i.e., a reduction in coolant inventory).

In order to understand why consolidation improves operator performance, and when it won't, a brief discussion of how the operator interacts with the process is in order. Operator performance in process control can be represented by the model in Figure 2a. The process receives some disturbance which alters the process output. The operator must detect that the output is different from his goal for the process, identify the correct course of action, and implement that action. The stability of the system is a function of the operator's speed in performing those three tasks correctly. If information on the disturbance can be provided to the operator in advance of its impact on the system, then performance on the three tasks is significantly enhanced and total system performance is improved. Control room consolidation should take advantage of this by supplying the operator the right information on what will be impacting his process unit. Simply providing the operator more information (i.e., putting him in a central control room) is insufficient; he needs information on what will impact him.
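This relationship can be illustrated with a small simulation. The sketch below is not the paper's Figure 2 model; it assumes a hypothetical first-order process, an arbitrary time constant and disturbance size, and an operator who removes the disturbance only after a combined detect-identify-implement delay. The peak deviation grows with that delay, which is why advance information on the disturbance helps.

```python
# Illustrative sketch only (assumed values, not the paper's Figure 2 model):
# a first-order process receives a step disturbance, and the operator cancels
# it only after a combined detect + identify + implement delay.  The peak
# deviation from setpoint grows with that delay.

def peak_deviation(operator_delay_s, disturbance=1.0, tau_s=60.0, dt_s=1.0, horizon_s=900):
    """Largest deviation from setpoint for a given total operator response delay."""
    deviation = 0.0
    peak = 0.0
    for step in range(int(horizon_s / dt_s)):
        t = step * dt_s
        # The disturbance acts on the process until the operator implements the correction.
        u = disturbance if t < operator_delay_s else 0.0
        deviation += dt_s * (-deviation / tau_s + u)  # simple first-order response
        peak = max(peak, abs(deviation))
    return peak

for delay in (10, 60, 180):  # seconds to detect, identify, and implement
    print(f"response delay {delay:3d} s -> peak deviation {peak_deviation(delay):5.1f}")
```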

Consolidation works when the flow of information from interacting units is enhanced as part of the consolidation. This means that units that impact each other should be put into the same control room. Consolidation criteria should not be based upon management structure or, necessarily, the physical proximity of units; they should be based on optimization of the critical interactions.

Figure 3a shows the interactions at one major refinery. The original consolidation plans would have put the highlighted units in the same control room because they were in the same part of the refinery and under the same manager. However, only some of the units interact, and therefore only some of the units would benefit.


Figure 2A. Impact of the Operator In Process Control

Figure 2B. Impact of Consolidation On Operator Performance

Figure 2. Model of Operator-Process Interaction


Figure 3. Link Analysis Progression For Refinery Analysis


Figure 4. Control Room Design Based Upon Unit Interactions


For example, the sweet crude unit has a strong interaction with the lube plant and very little interaction with the rest of the refinery. The sweet crude unit operators would probably have received little benefit from being consolidated with non-lube units, and would probably have suffered a negative performance impact from being located remotely from their unit. Grouping the units as in Figure 3b better supports the interactions present in the refinery.

Delineation of the interactions also allows evaluation of different consolidation options, i.e., which units should be grouped together.
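One way such an evaluation could be carried out is sketched below. The unit names and interaction weights are hypothetical, not the data behind Figure 3; the sketch simply scores each candidate grouping by how much of the total inter-unit interaction weight stays within a single control room.

```python
# Hypothetical example: score consolidation options by the fraction of total
# inter-unit interaction weight that ends up inside a single control room.
# Unit names and interaction weights are invented for illustration only.

interactions = {
    ("sweet_crude", "lube_plant"): 9,
    ("sweet_crude", "fcc"): 1,
    ("fcc", "alky"): 8,
    ("fcc", "gas_plant"): 7,
    ("alky", "gas_plant"): 5,
    ("lube_plant", "gas_plant"): 1,
}

def captured_interaction(grouping):
    """Fraction of total interaction weight whose two units share a control room."""
    total = sum(interactions.values())
    room_of = {unit: i for i, room in enumerate(grouping) for unit in room}
    within = sum(weight for (a, b), weight in interactions.items()
                 if room_of[a] == room_of[b])
    return within / total

# Option A groups by geography/management; option B groups by interaction.
option_a = [{"sweet_crude", "fcc", "alky"}, {"lube_plant", "gas_plant"}]
option_b = [{"sweet_crude", "lube_plant"}, {"fcc", "alky", "gas_plant"}]

for name, option in (("A", option_a), ("B", option_b)):
    print(f"option {name}: {captured_interaction(option):.0%} of interaction weight captured")
```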

Delineation of the influences of refinery units on one another indicates which units should be consolidated together. Extension of the principle should dictate the layout of the workstations in the control room. Figure 4a shows the interactions between the units at one refinery. Using those interactions as a base, the layout of the control consoles was revised from the original array approach to a more circular concept (Figure 4b). As the transfer of critical information between operators in a consolidated control room is verbal, distances of over 20 feet between consoles essentially render verbal communication useless. Proper arrangement of the consoles to facilitate information transfer is almost as important as consolidating those units that need to exchange information.
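A proposed layout could be checked against this constraint along the lines of the sketch below; the console coordinates and communication pairs are hypothetical, with the 20-foot figure taken from the text above.

```python
# Illustrative layout check (hypothetical console coordinates, in feet):
# flag console pairs whose operators need to talk but sit more than 20 feet apart.

import math

MAX_VERBAL_DISTANCE_FT = 20.0  # limit cited in the text

console_position_ft = {        # (x, y) coordinates of each console, assumed values
    "crude": (0.0, 0.0),
    "lube": (12.0, 5.0),
    "fcc": (30.0, 0.0),
    "gas_plant": (34.0, 8.0),
}

# Console pairs whose operators must exchange information verbally.
must_communicate = [("crude", "lube"), ("fcc", "gas_plant"), ("crude", "fcc")]

for a, b in must_communicate:
    (xa, ya), (xb, yb) = console_position_ft[a], console_position_ft[b]
    distance = math.hypot(xb - xa, yb - ya)
    status = "OK" if distance <= MAX_VERBAL_DISTANCE_FT else "TOO FAR"
    print(f"{a:9s} <-> {b:9s}: {distance:5.1f} ft  {status}")
```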

The major change that people notice with distributed process control is the operators' use of CRTs to control the process. No longer does the operator roam the board; rather, operators sit and select the information they need. The presentation of the information itself can be significantly different from what was previously used. Both the presentation of warning information (i.e., alarms) and of operating information (i.e., displays) have changed, and not always for the better.

The presentation of alarm information has proven to be a particular trouble spot. The old hardwired alarms utilized the principle of chunking the data. The position of the alarm panel in the control room and the location of the alarm in the panel provided structure to the alarm, so that without even reading the title the operator had information on the alarm's nature. Early distributed control systems presented alarms chronologically, preventing the operator from applying information attributes, or clustering the alarms into something more meaningful. In a major upset, the alarm system became essentially useless.

Compounding the problem is the ease with which alarms can be added in a distributed control system. Unfortunately, the "new" alarms are often simply a single process variable being out of tolerance. Alarm systems, like all display systems, should attempt to provide the operator information, not just data. For example, 5134341093 is data. The addition of two hyphens, 513-434-1093, transforms the data into information, namely that the numbers are in the form of a telephone number. The data should be synthesized into something meaningful. For example, a pump trip alarm on a spare pump is essentially worthless, as the spare pump is usually "tripped". A failure of the pump to start after the auto-start signal should have been given is useful information, combining data on the pump with data on the situation.
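A minimal sketch of such a synthesized alarm is shown below, using assumed signal names rather than an actual DCS configuration: the alarm fires only when the auto-start demand, the elapsed time, and the pump's run status together indicate a failure to start.

```python
# Minimal sketch of a synthesized "failure to start" alarm for a spare pump.
# Signal names and the start-time allowance are assumptions, not an actual
# DCS point configuration.

from dataclasses import dataclass

START_ALLOWANCE_S = 10.0  # time the spare pump is allowed to come up to speed

@dataclass
class SparePumpStatus:
    auto_start_demanded: bool    # auto-start signal has been issued
    running: bool                # run feedback from the pump
    seconds_since_demand: float  # time elapsed since the auto-start signal

def failed_to_start(status: SparePumpStatus) -> bool:
    """Alarm only on the meaningful condition: start demanded, time given, still not running."""
    return (status.auto_start_demanded
            and not status.running
            and status.seconds_since_demand > START_ALLOWANCE_S)

# A bare "pump stopped" alarm would be on almost constantly for a spare pump;
# the combined condition fires only when something is actually wrong.
print(failed_to_start(SparePumpStatus(False, False, 0.0)))   # idle spare   -> False
print(failed_to_start(SparePumpStatus(True, True, 12.0)))    # started OK   -> False
print(failed_to_start(SparePumpStatus(True, False, 15.0)))   # failed start -> True
```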


Figure 5. Graphic Process Display Example


Figure 6. Revised Process Graphic Display


The presentation of operating information usually has problems in three areas: consistency, coding, and content. Figure 5 is an example of a process graphic display that exhibits problems in each of the three areas.

First, information is not always presented in a consistent manner. It has been said that people are creatures of habit. Habits are another way of saying that people have developed certain responses to certain stimuli. Repetition of a stimulus to elicit a proper response builds the bond between the stimulus (the display) and the response (its meaning). Given that certain operator responses are "good", consistent repetition of information should be employed. In the display example, the valve designator and output percentage are sometimes located to the side of the valve, sometimes above it, sometimes underneath it, and so on.

Second, little complex coding of information (e.g., by position, shape, etc.) is usually done. Coding of information is a type of chunking, with the goal of having as many ways to code data as possible (consider a stop sign: red, octagonal, and reading "STOP"). Distributed control systems have tremendous potential for coding information, a potential that is often untapped. In the display example, information is only rudimentarily coded: although color is used extensively, only a simple use of three colors (red, yellow, green) carries any meaning.

Third, the content of the displays is often oriented towards data, not information. The display in Figure 5 shows four heat exchangers, yet since thermocouples exist only at the entrance and exit of the four, they might as well be one exchanger, or one of one hundred. Showing that four exchangers exist is data, but it is not real information.

Beville Engineering revised the display in Figure 5 to address some of the previously mentioned deficiencies (Figure 6). Since the position of each designator is consistent, creating a position code, the units for the data need not be shown. Many of the piping runs have been simplified (including the heat exchangers), with no loss in technical accuracy. Principles of human perception have been incorporated to make the display easier to read.

Human factors engineering has been used to reduce the potential for design-induced operator errors in a number of complex systems. Consideration of how people use information only minimally impacts project time, while significantly improving system operation. In oil refining, the advent of distributed process control has increased the need to account for the human factor. Consideration of operator interactions is essential for the success of consolidation. Incorporation of operator characteristics into alarm and display system design, presenting information and not just data, will facilitate operator performance.