The Accidental User Marsden and Hollnagel
Page 1
HUMAN INTERACTION WITH TECHNOLOGY: THE
ACCIDENTAL USER
Phil Marsden, Ph. D.
Human Reliability Associates Ltd., School House, Higher Lane
Dalton, Lancs., WN8 7RP, UK
Erik Hollnagel, Ph.D.
OECD Halden Reactor Project
P. O. Box 173, 1751 Halden, Norway
ABSTRACT
Information technology is part of a growing number of applications in work and
everyday life. It seems inevitable that the average person soon will have to interact
with information technology in many ways, even when there is no desire to do so.
Examples include finding a book in a library, personal financial transactions, the
health sector, traffic and transportation, process control, etc. People who in this
way are forced to interact with information technology shall be called accidental
users. The accidental user poses a particular challenge to the design of
technological artefacts because the disciplines of dealing with human-machine
interaction are predicated on the assumption that users are motivated and have a
minimum level of knowledge and skills. In particular, models of “human error”
and human reliability implicitly assume that users are benign and only fail as
anticipated by designers. In this paper we investigate the extent to which current
models of human erroneous actions and cognitive reliability can be used to
account for interactions between accidental users and technology.
Keywords: Human reliability, system design, human error model
Classification code: 4010 Human Factors Engineering
1. INTRODUCTION
It is a fundamental premise of classical ergonomics that interfaces and functionality must be
designed specifically to optimise performance in a given task. The ability to describe and
model predictable erroneous actions is therefore crucial for good system design.
Unfortunately, the provision of an adequate model to explain these cases has proved difficult
because psychological processes are both hidden and highly adaptive. Thus, preferred styles of
responding and acting appear to undergo radical change as a result of learning (Hoc et al.
1995), while the quality of performance of particular individuals frequently varies widely as a
function of factors such as stress and fatigue (Swain, 1982).
Whereas erroneous actions play a prominent role in applied disciplines such as human
reliability analysis and cognitive engineering, the problem of modelling erroneous actions has
rarely been adequately addressed by the HCI research community. Here investigators have
tended to adopt general frameworks which portray the average user as someone who knows
how to deal with information technology and who willingly participates in the interaction.
Clearly, there are many situations where such a characterisation is appropriate. Increasingly,
however, information technology is finding its way into application areas where the user is
unfamiliar with it or may even be ill motivated.
1.1 The Accidental User
When the various categories of users are discussed, it is common to make a distinction
between novice and expert users (Dreyfus and Dreyfus, 1980). This distinction refers to the
degree of knowledge and skill that a user has with regard to a specific system and makes the
implicit assumption that the user has a basic motivation, i.e., that the user is motivated to use
the system. The spread of information technology, however, means that there are many
situations where users interact with information technology systems because they have to do it
rather than because they want to do it. The possibility of doing it in another way has simply
disappeared. Examples include finding a book in a library, personal financial transactions, the
health sector, traffic and transportation, process control, etc. A trivial example is the typing of
a letter, since many offices no longer have a typewriter. Another is recording something from
the TV. Here there was never any other choice, but the problems in using a VCR appropriately
have led to the development of many interesting strategies, many of which aim at minimising
the interaction. A more complex example is driving a train, flying an aircraft, or monitoring
anaesthetics in the operating room (Woods et al. 1994). In the near future there will be even
more cases because the conventional modes of interaction disappear, often in the name of
efficiency!
People who in this way are forced to interact with information technology shall be called
accidental users. An accidental user is not necessarily an infrequent or occasional user; the
use (e.g. of the VCR) can occur daily or weekly, but the use is still accidental because there is
no better alternative. An accidental user is not necessarily a novice or an inexperienced user.
For instance, most people are adept at getting money from an automated teller machine but the
use is still accidental because the alternatives are rapidly disappearing. An accidental user is
not necessarily unmotivated, although that is frequently the case. The motivation is, however,
aimed at the results of the interaction rather than the interaction itself. An accidental user is a
person who is forced to use a specific system or artefact to achieve an end, but who would
prefer to do it in a different way if the alternatives existed. From the point of view of the
accidental user, the system is therefore a barrier that is blocking access to the goal - or which
at least makes it more difficult to reach the goal (Lewin, 1951).
The accidental user poses a particular challenge to the design of technological artefacts
because the disciplines of Human-Computer Interaction (HCI) and Man-Machine Interaction
(MMI) are predicated on the assumption that users are motivated and have a minimum level
of knowledge and skills. In particular, models of “human error” and human reliability
implicitly assume that users are benign and only fail as anticipated by designers. In this paper
we investigate the extent to which current models of human erroneous actions and cognitive
reliability can be used to account for interactions between accidental users and technology.
1.2 User Models And Accidental Users
In HCI / MMI design the notion of a user model looms large. The user model helps the
designer to predict what the likely user reactions will be, hence to develop an interface and a
dialogue flow that is as good as possible for the tasks. Newell (1993) has eloquently argued
for the need to consider users that are temporarily or permanently impaired in their perceptual
and motor functions. In addition to that, the notion of the accidental user argues that designers
should consider how people who have little or no motivation will perform. In particular,
designers should consider the following possibilities:
• That the user will misinterpret the system output, e.g. instructions, indicators and
information. The misinterpretation need not be due to maliciousness, though that might
sometimes be the case, but simply that users do not know the same things that designers
know, and that users do not see the world in the same way. A simple example is that the
user has a different culture or native language and therefore is unfamiliar with symbols
and linguistic expressions.
• That the user’s response will not be one of the set of allowed events, hence not be
recognisable by the system. System design can go some way towards preventing this by
limiting the possibilities for interaction, but the variability of user responses may easily
exceed the built-in flexibility of the system.
• That the user will respond inappropriately if the system behaves in an unexpected way.
This means that the user may get stuck in an operation, break and/or restart a sequence
any number of times, lose orientation in the task and respond to the wrong prompts,
leave the system, use inappropriate modes of interaction (hitting it, for instance), etc.
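These possibilities suggest a defensive style of dialogue design: never assume that a response is one of the allowed events, and leave the user a graceful way out. The following Python sketch illustrates the idea; the function name, event names, and retry limit are hypothetical, not part of any system described here:

```python
# Hypothetical sketch of one dialogue step designed for an accidental user.
def ask(responses, allowed, max_attempts=3):
    """Return the first recognisable response, or None if the user never
    produces an allowed event within max_attempts."""
    for response in responses[:max_attempts]:
        normalised = response.strip().lower()
        if normalised in allowed:
            return normalised        # a response the system can recognise
    return None                      # let the caller recover gracefully

# A flustered accidental user hits unexpected keys twice before answering.
print(ask(["?", "CARD!", "card"], {"card", "contactless"}))   # card
```

Returning None rather than looping forever reflects the point above: the variability of user responses can exceed the built-in flexibility of the system, so the design must include an exit that does not leave the user stuck.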
In some sense, the accidental user should be considered as if governed by a version of
Murphy’s Law, such as: “Everything that can be done wrongly, will be done wrongly”. More
seriously, it is necessary to be able to model how actions are determined by the context rather
than by internal information processing mechanisms, and how the context may be partially or
totally inappropriate. In order to design a system it is necessary to consider all the situations
that can possibly occur. This means that the designer must be able to account for how users,
whether accidental or not, understand and control the situation.
The purpose of this paper is to consider the issue of modelling erroneous actions for situations
which involve the accidental user. Specifically, we explore which current framework
models of human performance and erroneous actions can account for interactions involving
information technology and accidental users. It is argued that a cognitive systems framework
is best suited to providing design guidance in this vital growth area because of the special
emphasis the approach places on contextual, as opposed to experiential, determination of
behaviour. The discussion of user models begins by considering what is meant by the term
erroneous actions.
2. THE CONCEPT OF ERRONEOUS ACTIONS
It has long been recognised that the occurrence of erroneous actions constitutes a major source
of vulnerability to the efficiency and integrity of HCI. Although reliable figures are difficult to
obtain there is general agreement that somewhere in the range 30-90% of all system failures
involve a human contribution of some type (Hollnagel, 1993a). Despite the apparent level of
agreement among HCI researchers regarding the scale of the problem, however, there is much
less agreement concerning the issue of what is meant by the term erroneous actions (Embrey,
1994; Reason, 1984; Senders and Moray, 1991; Singleton, 1973; Woods et al. 1994). Thus, an
engineer might prefer to view the human person as a system component subject to the same
kind of successes and failures as equipment. Psychologists, on the other hand, often begin
with the assumption that human behaviour is essentially purposive and can only be fully
understood with reference to subjective goals. Finally, sociologists have traditionally ascribed
the primary forms of erroneous actions to features of the prevailing socio-technical system.
Irrespective of the above differences there seem to be at least three intuitive parts to any
definition of erroneous action:
• First, there must be a clearly specified performance standard or criterion against
which a deviant response can be measured. Human reliability analysis has
traditionally dealt with the criterion problem by using objective measures such as system
parameters as the standard for acceptable behaviour. Thus Miller and Swain (1987)
argued that erroneous actions should be defined as “any member of a set of responses
that exceeds some limit of acceptability. It is an out-of-tolerance action where the limits
of performance are defined by the system”.
In contrast to the above, cognitive psychology has tended to define erroneous action
relative to subjective criteria such as the momentary intentions, purposes and goal
structures of the acting individual. Defined thus there are two basic ways that an action
can go wrong. In one the intention to act is adequate but a subsequent act is incorrectly
performed; in the other actions proceed according to plan but the plan is inadequate
(Reason and Mycielska, 1982). In the former case the erroneous action is conventionally
defined as a slip, in the latter it is usually classed as a mistake (Norman, 1981).
• Second, there must be an event - either cognitive or physical - which results in a
measurable performance shortfall such that the expected level of system performance
is not met by the acting agent. Irrespective of how one chooses to define erroneous
actions most analysts agree that erroneous actions are largely linked with events in
which there is some kind of failure to meet a pre-defined performance standard.
Despite this level of agreement in the literature there has been much less of a consensus
between investigators regarding how best to conceptualise the psychological
mechanisms that lead to erroneous actions. Some investigators have adopted a
pessimistic interpretation of human performance capabilities (Vaughan and Maver,
1972), whilst others have attempted to account for the occurrence of erroneous actions
from within a framework of competent human performance (Reason, 1979, 1990;
Woods et al. 1994). According to this view, erroneous actions have their origins in processes
that perform a useful and adaptive function. Such approaches take a much more
beneficial view of erroneous actions and relate their occurrence to processes that
underpin the human ability to deal with complex, ambiguous, and uncertain data.
• Third, there must be a degree of volition such that the actor had the opportunity to act in
a way that would be considered appropriate. Thus as Zapf et al. (1990) have observed, if
something was not avoidable by some action of the person, it is not acceptable to speak
of erroneous action. According to Norman (1983) factors that occur outside the control
of the individual, for example, “acts-of-God”, are better defined as accidents.
In this paper we propose that the concept of erroneous action for the accidental user is best
defined with reference to the observable characteristics of behaviour. Specifically, we suggest
that erroneous actions among this group are simply actions with undesirable system
consequences. Such a definition is in keeping with the treatment of erroneous action in human
reliability analysis and avoids the potential confusion which can arise when discussing the
causes and consequences of erroneous action (e.g., Hollnagel, 1993b). Moreover, a
behavioural definition of the concept of erroneous action is neutral with regard to the issue of
what causes failures in human performance. In relation to this debate we take an optimistic
viewpoint of human performance capabilities and propose that the erroneous actions made by
accidental - and other - users have their origins in processes that perform a useful and adaptive
function in relation to most everyday activities. This suggests that the search for causes in the
accidental user requires an analysis of the complex interactions that occur between human
cognition and the situation or context in which behaviour occurs (e.g., Hollnagel, 1993a;
Woods et al. 1994).
3. FRAMEWORK MODELS OF ERRONEOUS ACTIONS
Modelling user behaviour requires a framework model of human performance that can be
used to characterise the likely forms of erroneous actions that people will exhibit. A review of
the psychological literature indicates that there are three classes of models of human
performance failures that are candidates to explain erroneous actions. One comes from
traditional human factors where focus is on the overt behaviour of the human component of a
man-machine system. The second comes from a line of work which views the human as an
information processing system. The third arises in work carried out in a cognitive engineering
tradition and views the human-machine combination as a joint cognitive system. In this
section, the major features of each class of model are reviewed in turn.
3.1 Traditional Human Factors Models
Several attempts have been made over the years to develop models of user behaviour that
characterise man-machine interaction failures in terms of their basic behavioural characteristics
(e.g., Embrey, 1992; Gagné, 1965; McCormick and Tiffin, 1974). In one early scheme proposed by
Altman (1964) the observable characteristics of erroneous behaviour were differentiated
according to three general types of work activity: (a) activities involving discrete acts, (b)
tasks that involve a continuous process, and (c) tasks which involve a monitoring function.
Altman suggested that these three categories of behaviour constituted the basis of an error
model of observable action. An example of a recent and much more detailed error model has
been that provided by Embrey (1992). This model is part of the PHEA (Predictive Human
Error Analysis) technique and is specified at the level of the behaviour of a user of an
automated aid. Described in overview, PHEA breaks down erroneous actions into six major
categories and these are further subdivided to identify the basic error types. PHEA has been
used on several occasions to quantify the risk posed to the integrity of complex man-machine
systems by the actions of the human component.
To a large degree most human factors models, including those developed by Altman and
Embrey, are variants of a scheme first proposed by Alan Swain in the early 1960s (Swain,
1963) and used in a number of guises since that time (see for example, Swain, 1982). In
essence, Swain’s basic model makes a distinction between: (a) errors of omission, defined as
the failure to perform a required operation, (b) errors of commission, defined as an action
wrongly performed, and (c) extraneous errors, defined as the performance of an act that is not
required, cf. Figure 1. Of these three error modes, errors of omission and commission are
ubiquitous in the field of
human reliability analysis.
[Figure 1 is a pseudo-event tree with the branch points “Need to act?”, “Action carried out?”,
“Correct action?” and “Correct execution?”, whose yes/no outcomes lead to the action / error
types Omission, Commission, Erroneous execution, and Correctly performed action.]
Figure 1: A pseudo-event tree for omission-commission error classification.
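The event tree of Figure 1 can be read as a small decision procedure. A minimal Python sketch (the boolean flags are hypothetical observables of a single episode):

```python
def classify(need_to_act, action_carried_out, correct_action, correct_execution):
    """Walk the omission-commission pseudo-event tree for one observed episode."""
    if need_to_act and not action_carried_out:
        return "omission"                  # required action not performed
    if action_carried_out and not correct_action:
        return "commission"                # a wrong action was performed
    if action_carried_out and not correct_execution:
        return "erroneous execution"       # right action, wrongly executed
    return "correctly performed action"

print(classify(True, False, False, False))   # omission
print(classify(True, True, True, False))     # erroneous execution
```

The sketch makes the point of the surrounding discussion concrete: the classification rests entirely on observable outcomes and says nothing about the psychological causes behind them.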
Human factors models provide very simple descriptions of the causes for erroneous actions,
more in terms of observable characteristics than in terms of mental or cognitive functions. In
relation to these models two important points need to be made.
• Despite minor variations in detail there remains considerable agreement between the
various schemes concerning the issue of what constitutes the basic categories of
erroneous actions when discussed in terms of observable behaviour. In many ways the
model originally proposed by Swain and his colleagues remains the “market standard”
in relation to the topic of human reliability analysis although it is clear from field
investigations that the basic framework often needs to be modified to take account of
special constraints imposed by an actual operating environment.
• The second point relates to an observation made by Reason (1986), who pointed out that
there is typically a large measure of agreement between judges when they are asked to
assign erroneous actions to these relatively limited behavioural categories. This finding
suggests that behavioural models of erroneous actions such as the ones identified above
possess a high degree of face validity.
Traditional human factors models of human performance and erroneous action are
nevertheless quite weak in their ability to characterise events in terms of their psychological
features and are essentially without an underlying theory. Although the categories of
omission-commission are easy to apply, all the available evidence suggests that erroneous
actions which appear in the same behavioural categories frequently arise from quite different
psychological causes, while erroneous actions which differ in their external manifestations can
share the same or similar psychological origins (e.g., lack of knowledge, failure of attention).
For this reason, from the mid-1970s to the mid-1980s a number of people began
a line of research aimed at specifying the psychological bases of predictable erroneous action.
The products of this research effort are considered in more detail in the following section.
3.2 Information Processing Models
Models of human behaviour aimed at providing explanations of erroneous actions in terms of
the malfunction of psychological mechanisms have been influenced by the adoption of the
digital computer as the primary model for human cognition (e.g., Newell and Simon, 1972;
Reason, 1979; Simon, 1979), and the belief that the methods of information theory can be
meaningfully applied to the analysis of mental activities of all types (e.g., Anderson, 1980;
Attneave, 1959; Lindsay and Norman, 1976). An analysis of human performance from an
information-processing standpoint aims to trace the flow of information through a series of
stages that are presumed to mediate between a stimulus and response. Resultant theories are
conventionally expressed in terms of information flow diagrams analogous to those which are
prepared when developing a specification for a computer program.
Information processing analyses use either quantitative or qualitative methods. Quantitative
methods involve assessing a person’s performance under the controlled conditions of the
psychological laboratory. Qualitative models, on the other hand, are usually developed on the
basis of observations of human performance under real or simulated conditions. The relative
advantages of these two approaches are of central importance in the behavioural sciences and
have been discussed at length elsewhere (e.g., Bruce, 1985; Neisser, 1982). In relation to the
explanation of erroneous actions the two approaches have produced models of human
performance and error that are quite distinct.
3.2.1 Models Based Upon Quantitative Analysis
A good example of a human performance model based upon a quantitative analysis is the
general model of human cognition developed by Christopher Wickens and his associates
(Wickens, 1984; 1987). In this model, shown in Figure 2, information is described as passing
through three basic stages of transformation: (a) a perceptual process involving the detection
of an input signal and a recognition of the stimulus material, (b) a judgmental phase in which
a decision must be made on the basis of that information, relying where necessary on the
availability of working memory, and (c) a response stage in which a response must be selected
and executed. Each stage has optimal performance limits (as determined by limited attention
resources) and when these limits are exceeded each is subject to error. The determination of
optimal performance limits and the various error forms that emerge when these limits are
exceeded were estimated for the model using data obtained from laboratory-based
psychological experiments.
[Figure 2 shows the process (plant state), the interface (displays and controls), and the human
operator, in which a sensory store feeds (a) perception, (b) decision and response selection
supported by working memory and long-term memory, and (c) control actions, with all stages
drawing on shared attentional resources and feedback closing the loop.]
Figure 2: Wickens’ (1984) model of human information processing applied to the human-machine interface.
Although this type of model has not been universally accepted it is nevertheless representative
of a wide variety of theoretical models which have been developed on the basis of error data
elicited from experiments conducted in the psychological laboratory. Parasuraman’s (1979)
attempt to explain vigilance decrement in terms of working memory defects is an example of
a typical “error experiment”. Similarly, Signal Detection Theory (SDT), response latency and
the speed-accuracy trade-off paradigms have provided methodological tools appropriate to
this type of research.
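The stage logic common to such models can be caricatured in a few lines: information passes through perception, decision, and response, each stage draws on a shared attentional resource, and an error of the corresponding type appears at whichever stage the resource runs out. The stage costs and resource budget below are invented numbers for illustration, not Wickens’ parameters:

```python
def process(signal, attention=1.0, cost_per_stage=0.3):
    """Toy staged pipeline: perception -> decision -> response.
    Exceeding the attentional budget produces an error at that stage."""
    for stage in ("perception", "decision", "response"):
        attention -= cost_per_stage          # each stage consumes resources
        if attention < 0:
            return f"error at {stage} stage"
    return f"response to {signal}"

print(process("alarm"))                  # response to alarm
print(process("alarm", attention=0.5))   # error at decision stage
```

The sketch captures why quantitative models locate different error forms at different stages: which stage fails depends only on where the resource limit is crossed, not on the content of the task.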
3.2.2 Models Based Upon Qualitative Analysis
Arguably the most influential qualitative information processing model is the skill-based,
rule-based, knowledge-based framework proposed by Rasmussen and his associates (e.g.,
Rasmussen, 1981; Rasmussen and Jensen, 1974). The SRK model is based upon the
proposition that there is a normal and expected sequence of processing activities that a person
will engage in when performing a problem-solving or decision-making task, but that there are
many situations where people do not perform according to the ideal case. Borrowing a phrase
first used by Gagné (1965), Rasmussen talked of people ‘shunting’ certain mental operations
where varying amounts of information processing can be avoided depending on the person’s
familiarity with the task.
The error component of the SRK framework has been discussed at length by Reason (1986)
and Reason and Embrey (1985). The results of this effort were incorporated into an error
model called GEMS (Generic Error Modelling System), which is based on the distinction
between slips and mistakes superimposed onto the SRK framework. The resultant model
gives rise to some basic error types conceptualised in terms of information processing failures.
At the lowest level of the system are skill-based slips and lapses. Reason (1990) defines these
as errors where action deviates from current intention due either to an error of execution (e.g.,
a slip) or memory storage failure (e.g., a lapse). In contrast mistakes are defined as
deficiencies or failures in the judgmental and/or inferential processes involved in the selection
of an objective. Like slips and lapses, mistakes are viewed as dividing into two broad types.
Rule-based mistakes occur from the inappropriate application of diagnostic rules of the form:
IF <CONDITION>, THEN <ACTION>. Knowledge-based mistakes occur whenever people
have no ready knowledge to apply to the situation which they face. Thus in contrast to errors
at the skill-based and rule-based levels, knowledge-based failures reflect qualities typical of the
novice.
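The IF &lt;CONDITION&gt;, THEN &lt;ACTION&gt; format lends itself to a direct illustration of the two mistake types. In the Python sketch below (the rules, symptoms, and actions are invented for illustration), a familiar rule fires on a superficial match even when the context makes it inappropriate - a rule-based mistake - while the absence of any matching rule leaves the operator at the knowledge-based level:

```python
# Hypothetical diagnostic rules of the form IF <condition> THEN <action>.
RULES = [
    (lambda symptoms: "low pressure" in symptoms, "open feed valve"),
    (lambda symptoms: "high temperature" in symptoms, "start cooling pump"),
]

def diagnose(symptoms):
    """Fire the first matching rule; return None when no rule applies
    (i.e. the operator must fall back on knowledge-based reasoning)."""
    for condition, action in RULES:
        if condition(symptoms):
            return action
    return None

# The well-practised low-pressure rule fires although the reading is low
# only because a sensor has failed - applying it here is a mistake.
print(diagnose({"low pressure", "sensor fault"}))   # open feed valve
```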
The major features of the GEMS model are summarised below. Each error type is
distinguished according to five factors: (a) the type of activity being performed at the time the
error is made, (e.g., routine or non-routine); (b) the primary mode of cognitive control,
(attention or unconscious processing); (c) focus of attention (on a task or activity); (d) the
dominant error form (strong habit intrusions or variable); and (e) the ease with which the error
can be detected and corrected (easy or difficult).
Characteristics Of GEMS Error Types

Skill-based slips and lapses:
  Activity: routine actions
  Mode of control: mainly automatic processes (schemata)
  Focus of attention: on something other than the task at hand
  Error forms: largely predictable “strong-but-wrong” error forms (schemata)
  Error detection: usually fairly rapid

Rule-based mistakes:
  Activity: problem-solving
  Mode of control: resource-limited conscious processes (stored rules)
  Focus of attention: directed at problem-related issues
  Error forms: variable
  Error detection: hard, and often only achieved with help from others

Knowledge-based mistakes:
  Activity: problem-solving
  Mode of control: resource-limited conscious processes
  Focus of attention: directed at problem-related issues
  Error forms: variable
  Error detection: hard, and often only achieved with help from others
Theoretical models of human information-processing have been used on numerous occasions
to account for systematic erroneous actions in HCI / MMI. Information processing models rest
on three basic assumptions when providing answers to questions about the relation between
erroneous actions and human cognition (e.g., Kruglanski and Azjen, 1983).
• It is assumed that there are reliable criteria of validity against which it is possible to
measure a deviant response. In some cases the performance standards employed are
derived objectively; in other cases the standard is determined subjectively in accordance
with the person’s intentions at the time the erroneous action was perpetrated.
• Psychological factors that intervene in processing activities act to bias responses away
from standards considered appropriate. Most common are the various information-
processing limitations which are presumed to make human performance intrinsically
sub-optimal. These limitations are brought into play whenever there is more information
than can be processed within a particular period of time. Other psychological factors are
“emotional charge” (e.g., Reason, 1990) and performance influencing factors such as
fatigue and stress (Embrey, 1980; Swain and Guttman, 1983).
• The information processing system comprises a diverse range of cognitive limitations
which are invoked under particular conditions. Thus, failures of attention are likely to
occur whenever too much - or too little - happens in the environment, decision-making
is likely to fail whenever judgements of a certain type are to be made (e.g., estimation of
quantitative data), and so on. It is sometimes also assumed that there is a one-to-one
mapping of external manifestations of erroneous behaviour onto the categories of
“failure” that are presumed to occur in human information processing.
These assumptions have important implications for the study of erroneous actions. They imply
that the appropriate approach is to identify the basic error mechanisms which shape human
performance and couple that with an analysis of the types of information-processing a person
is likely to engage in. The results may then be mapped out in a matrix which permits the
prediction of information-processing behaviour from knowledge of the information-
processing domain and basic error tendencies. Several analysts have attempted to define such
a matrix with perhaps the framework proposed by Reason (1987) as the best example.
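Such a matrix amounts to a two-way lookup from information-processing domain and basic error tendency to a predicted error form. The sketch below shows the shape of the idea; the entries are invented placeholders, not Reason’s actual matrix:

```python
# Hypothetical prediction matrix: (processing domain, error tendency) -> form.
MATRIX = {
    ("attention", "overload"): "failure to notice a critical signal",
    ("attention", "underload"): "vigilance decrement",
    ("decision-making", "quantitative estimation"): "systematic judgement bias",
    ("action execution", "strong habit"): "capture by a familiar routine",
}

def predict(domain, tendency):
    """Look up the error form predicted for a domain/tendency pairing."""
    return MATRIX.get((domain, tendency), "no prediction available")

print(predict("attention", "underload"))   # vigilance decrement
```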
3.3 The Cognitive Systems Perspective
A third class of models are based on the perspective of cognitive systems engineering
(Hollnagel and Woods, 1983; Woods, 1986). Cognitive systems engineering makes two
important assumptions regarding the analysis of human performance in a work setting.
• The interactions between the human agent and automated control systems are best
viewed in terms of a joint cognitive system. It is no longer reasonable to view the
relative roles of the person and supporting control systems in terms of a Fitts’ list type of
table which describes the relative merits of men and machines (Fitts, 1951). Instead
the notion of the joint cognitive system implies that machine and user should be
modelled on equal terms. Furthermore, the coupling of the two models is necessary to
appreciate and analyse the details of the interaction. In other words, modelling the user
as an entity in itself is not sufficient. Because of this, classical information processing
models of human cognition are inadequate for analyses of erroneous actions. Although
the context or environment is present in the form of input (signals, messages,
disturbances) the representation is not rich enough to capture the dynamics and
complexity of the interaction. This can only be achieved by providing a coupled model
of the human-machine system, and by making the model of both parts equally rich.
Several projects have used this approach, e.g. Corker et al. (1986), Cacciabue et al.
(1992), Hollnagel et al. (1992), Roth et al. (1992).
• The person’s behaviour - including possible erroneous actions - is shaped
primarily by the context in which it occurs. The information processing approach
considers activities as essentially reactive, i.e., the person responds to an input event. In
cognitive systems engineering cognition is an active process influenced by the person’s
goals and the prevailing context. The focus of attention should therefore not be confined
to an analysis of malfunctions of presumed cognitive mechanisms but rather be put on
the global characteristics of human performance - both correct and incorrect responses
to specific environmental circumstances. The implications of cognitive systems
engineering have been used to guide the definition of contextual models of behaviour.
An example of a model of this type is provided by COCOM - for Contextual Control Model -
which integrates a theory of human competence and a model of cognitive control (Hollnagel,
1993a). The competence represents the possible actions that a person may carry out (the
activity set) in order to complete a particular task, given a characteristic way in which
available actions may be grouped. The latter structure is called the template set, to
emphasise that sets of actions are grouped together in special ways to be utilised as a
single unit in particular circumstances (e.g., a procedure for responding to a steam generator
tube leak). The separation of competence into the activity set and the template set makes it
possible to model a number of characteristic performance phenomena, such as fixation and
mistakes, using a few relatively simple functional principles.
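The separation of competence into an activity set and a template set can be sketched as a simple data structure. The class, method, and action names below are illustrative assumptions, not part of the published model; the only principle carried over is that a template groups actions from the activity set into a unit that is used as a whole.

```python
from dataclasses import dataclass, field

@dataclass
class Competence:
    activity_set: set = field(default_factory=set)    # elementary actions
    template_set: dict = field(default_factory=dict)  # named action groupings

    def add_template(self, name, actions):
        # A template may only group actions drawn from the activity set.
        unknown = set(actions) - self.activity_set
        if unknown:
            raise ValueError(f"unknown actions: {unknown}")
        self.template_set[name] = list(actions)

# A toy competence for responding to a steam generator tube leak.
competence = Competence(activity_set={"isolate_sg", "reduce_power",
                                      "start_aux_feedwater"})
competence.add_template("sg_tube_leak_response",
                        ["isolate_sg", "reduce_power", "start_aux_feedwater"])
```

Fixation can then be read off the structure: once a template is selected, its actions are executed as a unit even when individual actions in the activity set would fit the situation better.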
The purpose of cognitive control is to describe how actions are selected and subsequently
carried out. The model identifies four basic control modes as characteristic performance
regions. These are: (a) scrambled control, where the event horizon is confined to the present
and there is no consideration of preceding events or prediction of the outcome of future
events, (b) opportunistic control, where an action is chosen to match the current context with
minimal consideration given to long-term effects, (c) tactical control, where actions are
governed by longer term considerations and extensive feedback evaluation, and, (d) strategic
control, where the person is fully aware of what is happening and is deliberately making plans
to deal with the situation which requires the selection and execution of particular controlling
actions. The purpose of the model is to describe how performance switches between the
various control modes, depending on the outcome of the previous action as well as the
subjectively available time, cf. Figure 3. (The dotted lines indicate a feedforward.) Other
parameters that can influence the control mode are the number of simultaneous goals, the
availability of plans (which are part of the competence), the perceived event horizon, and the
mode of execution.
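The switching between control modes can be illustrated with a deliberately crude scoring function. The weights and thresholds below are invented for illustration; the model itself only states which parameters influence the control mode (available time, outcome of the previous action, number of simultaneous goals), not how they combine.

```python
# Illustrative sketch of COCOM control-mode switching. More available time
# and successful outcomes support higher control; competing goals and
# failures degrade it towards the scrambled end of the continuum.

CONTROL_MODES = ["scrambled", "opportunistic", "tactical", "strategic"]

def select_control_mode(available_time: float, last_outcome_ok: bool,
                        n_goals: int) -> str:
    """Map a crude score of the situation onto one of the four modes."""
    score = available_time                       # more time -> higher control
    score += 1.0 if last_outcome_ok else -1.0    # failures degrade control
    score -= 0.5 * n_goals                       # competing goals degrade control
    if score < 0:
        return "scrambled"
    elif score < 1.5:
        return "opportunistic"
    elif score < 3.0:
        return "tactical"
    return "strategic"

# Ample time after a successful action supports strategic control; severe
# time pressure after a failed action with many goals yields scrambled control.
relaxed = select_control_mode(available_time=4.0, last_outcome_ok=True, n_goals=1)
stressed = select_control_mode(available_time=0.2, last_outcome_ok=False, n_goals=3)
```

The feedback loop in the model corresponds to re-evaluating the mode after every action: each outcome feeds back into the next call, so performance can drift up or down the continuum over the course of an interaction.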
4. MODEL EVALUATION
Traditional human factors models are reasonably effective where designers are interested in
possible error modes specified in terms of their external manifestations. This is especially the
case where the objective of analysis is to predict the probability of simple errors which may
occur in relatively trivial tasks such as searching for a book in a library. Although more
complex models of human behaviour have been specified at the level of observable behaviour,
the basic limitation is that the approach provides little information regarding the psychological
causes of erroneous actions. It is therefore unable to distinguish between an accidental and an
intentional user. The analytic capability of human factors models is typically quite low and
resultant analyses of failures in experimental systems tend to yield few general principles to
help designers with the predicament of the accidental user.
Information processing models have a high analytic capability, but are not very good at
converting field data to useful and practical tools for prediction of possible erroneous actions.
The analytic capability derives mainly from the large number of general statements relating to
“error” tendencies that are typically part of such models. The validity of predictions based on
these models is, however, unclear. Experience shows that actions frequently fail when they
should be well within the user’s performance capability. Conversely, it is easy to document
instances where user performance has been quite accurate for tasks where the model would
predict failure (Neisser, 1982). A particular shortcoming in relation to the situation of
accidental users is that information processing models account for failures in terms of the
automation of skill-based behaviour. This means that the predominant explanation of
predictable erroneous action refers to the expertise of the perpetrator. Such a view is not
consistent with the idea of an accidental user where the hallmark of human behaviour often is
lack of specialist task knowledge and familiarity with the system.
[Figure 3 (not reproduced) depicts the control loop of the model: event/action feedback and the determination of outcome, together with the number of goals and the subjectively available time, set the control mode (scrambled, opportunistic, tactical, or strategic); the control mode and the competence (plans, actions, templates) then govern the choice of the next action.]
Figure 3: Principles of a contextual control model.
Models from cognitive systems engineering provide a better approach for characterising the
interactions between information technology and accidental users - and intentional users as
well. These models are particularly strong in their technical content because they are based on
viable and well-articulated descriptions of human action - rather than of covert mental
processes. Moreover, the emphasis on the contextual determination of human behaviour is
clearly better suited to explanations of a user predominantly driven by environmental signals
(e.g. the interface) as opposed to past experience and prior knowledge. The accidental user
will typically have limited competence and limited control, and the ability of these models to
describe how the interaction between competence, control and feedback determines
performance is therefore essential. The cognitive systems perspective also affords a
comparatively high level of predictive capability in a model that can be converted into
practical tools for investigation of erroneous actions. In our view the cognitive systems
perspective is therefore the all-round approach best suited to model how an accidental user
interacts with information technology.
5. REFERENCES
Altman, J. W. (1964). Improvements needed in a central store of human performance data.
Human Factors, 6, 681-686.
Anderson, J. R. (1980). Cognitive psychology and its implications. San Francisco: Freeman.
Attneave, F. (1959). Applications of information theory to psychology: A summary of basic
concepts, methods, and results. New York: Holt, Rinehart & Winston.
Bruce, D. (1985). The how and why of ecological memory. Journal of Experimental
Psychology: General, 114, 78-90.
Cacciabue, P. C., Decortis, F., Drozdowicz, B., Masson, M. & Nordvik, J.-P. (1992).
COSIMO: A cognitive simulation model of human decision making and behavior in accident
management of complex plants. IEEE Transactions on Systems, Man, and Cybernetics, 22(5),
1058-1074.
Corker, K., Davis, L., Papazian, B. & Pew, R. (1986). Development of an advanced task
analysis methodology and demonstration for army aircrew / aircraft integration (BBN R-
6124). Boston, MA.: Bolt, Beranek & Newman.
Dreyfus, S. E. & Dreyfus, H. L. (1980). A five-stage model of the mental activities involved in
directed skill acquisition. Operations Research Center, ORC-80-2. Berkeley, CA: University
of California.
Embrey, D. (1980). Human error. Theory and practice. Conference on Human Error and its
Industrial Consequences. Birmingham, UK: Aston University.
Embrey, D. (1992). Quantitative and qualitative prediction of human error in safety
assessments. Major Hazards Onshore and Offshore. Rugby: IChemE.
Embrey, D. (1994). Guidelines for reducing human error in process operations. New York:
CCPS.
Fitts, P. M. (Ed). (1951). Human engineering for an effective air navigation and traffic-
control system. Columbus, OH: Ohio State University Research Foundation.
Gagne, R. M. (1965). The conditions of learning. New York: Holt, Rinehart and Winston.
Hoc, J.-M., Cacciabue, P. C. & Hollnagel, E. (Eds.), (1994). Expertise and technology:
Cognition and human-computer cooperation. Hillsdale, N. J.: Lawrence Erlbaum Associates.
Hollnagel, E. (1993a). Human reliability analysis. Context and control. London: Academic
Press.
Hollnagel, E. (1993b). The phenotype of erroneous actions. International Journal of Man-
Machine Studies, 39, 1-32.
Hollnagel, E., Cacciabue, P. C. & Rouhet, J.-C. (1992). The use of integrated system
simulation for risk and reliability assessment. Paper presented at the 7th International
Symposium on Loss Prevention and Safety Promotion in the Process Industry, Taormina,
Italy, 4th-8th May, 1992.
Hollnagel, E. & Woods, D. D. (1983). Cognitive systems engineering. New wine in new
bottles. International Journal of Man-Machine Studies, 18, 583-600.
Kruglanski, A. W. & Ajzen, I. (1983). Bias and error in human judgement. European Journal
of Social Psychology, 13, 1-44.
Lewin, K. (1951). Constructs in field theory. In D. Cartwright (Ed.), Field theory in social
science. New York: Harper & Row.
Lindsay, P. H. & Norman, D. A. (1976). Human information processing. New York:
Academic Press.
McCormick, E. J. & Tiffin, J. (1974). Industrial psychology. London: George Allen and
Unwin Ltd.
Miller, D. P. & Swain, A. D. (1987). Human error and human reliability. In G. Salvendy (Ed.),
Handbook of human factors. New York: Wiley.
Neisser, U. (1982). Memory observed. Remembering in natural contexts. San Francisco:
Freeman.
Newell, A. (1993). HCI for everyone. Invited keynote lecture at INTERCHI '93, Amsterdam,
April 27-29, 1993.
Newell, A. & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ.: Prentice-
Hall.
Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1-15.
Norman, D. A. (1983). Position paper on human error. The 2nd Clambake Conference on
Human Error, Bellagio, Italy.
Parasuraman, R. (1979). Memory load and event rate control sensitivity decrements in
sustained attention. Science, 205, 924-927.
Rasmussen, J. (1981). Human factors in high risk technology (RISØ N-2-81). Roskilde,
Denmark: Risø National Laboratories.
Rasmussen, J. & Jensen, A. (1974). Mental procedures in real-life tasks. A case study in
electronic troubleshooting. Ergonomics, 17, 193-207.
Reason, J. T. (1979). Actions not as planned. The price of automatization. In G. Underwood
& R. Stevens (Eds.), Aspects of Consciousness, Vol. 1. Psychological Issues. London: Wiley.
Reason, J. T. (1984). Absent-mindedness. In J. Nicholson & H. Belloff, (Eds.), Psychology
Survey No. 5. Leicester: British Psychological Society.
Reason, J. T. (1986). The classification of human error. Unpublished manuscript. University
of Manchester.
Reason, J. T. (1987). Generic error-modelling system (GEMS): A cognitive framework for
locating human error forms. In J. Rasmussen, K. Duncan & J. Leplat (Eds.), New technology
and human error. London: Wiley.
Reason, J. T. (1990). Human error. Cambridge: Cambridge University Press.
Reason, J. T. & Mycielska, K. (1982). Absent minded. The psychology of mental lapses and
everyday errors. Englewood Cliffs, NJ.: Prentice-Hall Inc.
Roth, E. M., Woods, D. D. & Pople, H. E. Jr. (1992). Cognitive simulation as a tool for
cognitive task analysis. Ergonomics, 35, 1163-1198.
Senders, J. W. & Moray, N. (1991). Human error. Cause, prediction, and reduction.
Hillsdale, NJ.: Lawrence Erlbaum.
Simon, H. A. (1979). Models of thought. Vol. 2. New Haven: Yale University Press.
Singleton, W. T. (1973). Theoretical approaches to human error. Ergonomics, 16, 727-737.
Swain, A. D. (1963). A method for performing a human factors reliability analysis (SCR-
685). Albuquerque, NM: Sandia National Laboratory.
Swain, A. D. (1982). Modelling of response to nuclear power plant transients for
probabilistic risk assessment. Proceedings of the 8th Congress of the International
Ergonomics Association. Tokyo, August, 1982.
Swain, A. D. & Guttman, H. E. (1983). Handbook of human reliability analysis with emphasis
on nuclear power plant applications (NUREG CR-1278). Washington, DC: NRC.
Vaughan, W. S. & Maver, A. S. (1972). Behavioural characteristics of men in the
performance of some decision-making task component. Ergonomics, 15, 267-277.
Wickens, C. D. (1984). Engineering psychology and human performance. Columbus, OH:
Merrill.
Wickens, C. D. (1987). Information processing, decision-making and cognition. In G.
Salvendy (Ed.), Handbook of human factors. New York: Wiley.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini
& D. D. Woods (Eds.). Intelligent decision support in process environments. New York:
Springer Verlag.
Woods, D. D., Johannesen, L. J., Cook, R. I. & Sarter, N. B. (1994). Behind human error:
Cognitive systems, computers and hindsight. WPAFB, OH: CSERIAC.
Zapf, D., Brodbeck, F. C., Frese, M., Peters, H. & Prumper, J. (1990). Errors working with
office computers. In J. Ziegler (Ed.), Ergonomie und Informatik. Mitteilungen des
Fachausschusses 2.3, Heft 9, 3-25.