
EKAW'93 Proceedings. Lecture Notes in AI. Springer Verlag, Berlin


Knowledge Acquisition In Dynamic Systems: How Can Logicism And Situatedness Go Together?

Guy Boy

European Institute of Cognitive Sciences and Engineering (EURISCO), BP 4032, 10 avenue Edouard Belin, 31055 Toulouse Cedex, France

Tel. (33) 62 17 83 11; FAX (33) 62 17 83 38; Email: [email protected]

Abstract. This paper presents an investigation of knowledge acquisition in dynamic systems. The nature of dynamic systems is analyzed and a first ontology of the domain is proposed. Various distinctions are presented, such as the agent perspective, the perception of temporal progression, and the notions of consequences and expertise in dynamic systems. We use Rasmussen's model to characterize ways knowledge can be acquired in dynamic systems. Procedures are shown to be essential knowledge entities in interactions with dynamic systems. An emphasis on logicism and situatedness is presented and discussed around the situation recognition and analytical reasoning model. The knowledge block representation is introduced as a mediating representation for knowledge acquisition in dynamic systems.

1 Introduction

Recent contributions clearly show that the knowledge acquisition (KA) field has grown up to the point that formal methodologies are now available, such as KADS (Wielinga et al., 1992). However, very little has been done on KA in dynamic systems. Most of the work has been done in static worlds or very slowly moving worlds. The kinds of problems that are of interest in this paper deal with human-machine interaction where time is a critical issue. A tremendous amount of work has been and is being done in automatic control research. However, most of this work is focused on very low-level activities, essentially sensory-motor. The models and technology that are used are primarily numerical, e.g., matrix theory, optimization, linear and non-linear control. In contrast to automatic control research, part of computer science has evolved towards symbolic (instead of numerical) computation with the promotion of artificial intelligence (AI) in the 1980s. In AI, the notion of feedback has not been developed to the same extent as it has in automatic control research, although a few attempts have been made in reactive planning (Drummond, 1989) and in procedural reasoning (Georgeff & Ingrand, 1989). These attempts took place in engineering domains, and in particular in the space domain. It is also clear that no real effort has been made in the acquisition of knowledge involved in the control and evolution of dynamic systems.

For almost 13 years, our work has been directed towards a better understanding of human-machine interaction in the aerospace domain. Most of the systems involved, such as airplanes, spacecraft, air traffic control, etc., are highly dynamic and require accurate control that guarantees safe and reliable operations. The main concern of aerospace designers is that it is tremendously difficult to anticipate how end-users will use the tool they are designing, i.e., it is usually impossible to anticipate the situations or contexts that end-users will be facing. Context is a question of focus of attention. If someone has not yet experienced a situation or context, then he/she cannot describe it because he/she does not own the corresponding patterns allowing a situation recognition process to take place. Acquisition of such situation patterns and appropriate behavior can only be carried out when users are interacting with dynamic systems in real situations. For this reason, such a knowledge acquisition process is intrinsically incremental and situated. Furthermore, the more dynamic the system, the more the acquisition needs to be performed on-line.

This paper presents several points of view on knowledge acquisition in dynamic systems. The dynamic systems that we are talking about are controlled by expert users. Humans and machines are viewed as agents that interact with each other. The knowledge to be acquired is the knowledge involved in the interaction between human and machine agents. The knowledge acquisition process is performed using the paradigm of integrated human-machine intelligence (IHMI). The notion of situation is developed, as well as the procedures that people use for controlling dynamic systems. We try to show that the knowledge level cannot be dissociated from the lower levels of human behavior in the control of dynamic systems, which makes KA more difficult. This view is challenged, however, by another view holding that society is changing from an energy-based world to an information-based world. In this view, humans tend to control information-based systems at the knowledge level. This does not mean that expertise is clearly identified at design time. We claim that in information-based worlds, the expertise necessary to control dynamic systems is a key issue for the design of user interfaces. Top-down knowledge acquisition is then possible if expert users are clearly identified during the design process. However, when no expertise is available at design time, knowledge acquisition has to be bottom-up. In the remainder of the paper, we develop a model and a mediating representation specific to KA in dynamic domains.

2 Problem Statement

In this paper, domains are restricted to human-machine systems where the machine is a dynamic system, e.g., an airplane. The knowledge we are looking for falls into two categories:

— it describes how the system works (technical knowledge);
— it describes how the system is or should be used (operational knowledge).

We are mainly concerned with the second type of knowledge, i.e., what the user needs to know to control the system. However, it is obvious that there are links between technical knowledge and operational knowledge.

The main goal of this paper is to propose a methodology, including appropriate formalization tools, to facilitate the construction of intelligent assistant systems (IASs). We have already introduced and described the concept of IAS in previous work (Boy, 1987, 1991). From a human-machine interaction point of view, an IAS mediates interactions between a human operator and the physical system being controlled. The evolution of aircraft cockpit technology, for instance, tends to increase the IAS role to the point that pilots interact almost only with it instead of interacting directly with the aircraft (as a mechanical system). IASs create "illusions" for human operators by providing information that is not the same information processed by the mechanical system. The main problem is to validate these IASs in real-world environments so that such "illusions" become natural and guarantee the safety of the overall system. In this paper, an IAS has three modules: a proposer that displays appropriate information to the human operator; a supervisory observer that captures relevant actions of the human operator; and an analyser that processes and interprets these actions to produce appropriate information for the proposer module. These modules use two knowledge bases: the technical knowledge base and the operational knowledge base.
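To make this architecture concrete, here is a minimal sketch of the three-module organization in Python. It is our own schematic illustration, not the original implementation; all class and method names are assumptions.

    class Proposer:
        """Displays appropriate information to the human operator."""
        def propose(self, information):
            print("Display to operator:", information)

    class SupervisoryObserver:
        """Captures relevant actions of the human operator."""
        def capture(self, operator_actions):
            return [a for a in operator_actions if a.get("relevant", True)]

    class Analyser:
        """Processes and interprets captured actions using the two knowledge bases."""
        def __init__(self, technical_kb, operational_kb):
            self.technical_kb = technical_kb
            self.operational_kb = operational_kb
        def interpret(self, actions):
            # Prefer operational knowledge; fall back on technical knowledge.
            return [self.operational_kb.get(a["name"],
                        self.technical_kb.get(a["name"], "unknown action"))
                    for a in actions]

    class IntelligentAssistantSystem:
        def __init__(self, technical_kb, operational_kb):
            self.observer = SupervisoryObserver()
            self.analyser = Analyser(technical_kb, operational_kb)
            self.proposer = Proposer()
        def step(self, operator_actions):
            captured = self.observer.capture(operator_actions)
            self.proposer.propose(self.analyser.interpret(captured))

    ias = IntelligentAssistantSystem(
        technical_kb={"set flaps": "hydraulic actuation of flap surfaces"},
        operational_kb={"set flaps": "flaps 15 expected before approach"})
    ias.step([{"name": "set flaps"}])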

Over the years, we have developed the paradigm of integrated human-machine intelligence (IHMI) (Boy & Nuss, 1988; Shalin & Boy, 1989; Boy & Gruber, 1991) to provide a framework for acquiring knowledge useful for the implementation of an IAS. This paradigm is presented in Figure 1. Arrows represent information flows. This model includes two loops:

—a short term supervisory control loop that represents interactions between the human operator and the IAS;

—a long term evaluation loop that represents the knowledge acquisition process.

[Figure 1 components: environment/situation; operator; proposer; supervisory observer; analyser; temporary operation K.B.; evaluator; knowledge acquisition observer; system builder; technical domain K.B.; operational K.B.; knowledge acquisition K.B.; supervisory control loop; evaluation loop; observed results.]

Figure 1. An Integrated Human-Machine Intelligence model

The problem that we try to solve in this paper can be stated as the following question: what kind of methodology and tools can be developed today to elicit and formalize knowledge that characterizes dynamic systems? This problem can be divided into smaller subproblems. The first subproblem that comes to mind is to define the concept of dynamics by constructing an ontology of the dynamic systems domain. The second subproblem is to propose an appropriate knowledge representation for the dynamic systems domain. In particular, we defend the view that the concept of agent is very useful for representing dynamic systems.

3 Dynamics attributes

The representation of time has already been a concern in AI (Allen, 1985). This paper tries to develop an ontology that is useful for acquiring knowledge about dynamic systems. The aspects of dynamic systems described in this paper are elicited from real-world task environments.

Dynamics is a difficult concept to define. It deals with the duration of events or actions, involving reasoning about temporal intervals. Relations between events and actions are perceived by people according to their own experience. If these relations persist, then people cognitively (re)construct patterns of them upon reflection. The concept of persistence can be associated with the concept of context, i.e., as long as a fact remains true it can be included in the current context. Periodicity and rhythms are other concepts that can be associated with dynamics. Parallel occurrences of events lead to the concept of choice. Dynamics is also associated with the notion of consequences, for instance when a situation is poorly perceived or memorized.

In this section, we give a first ontology of the concept of dynamics. Dynamics deals with action and reaction. Actions are the main characteristics of agents. Agents can be humans or machines that act. Thus, the concept of an agent is essential in dynamic systems. The perception of temporal progression is often a matter of context construction. The notion of consequences in dynamic systems helps to characterize them. Finally, dynamic system management is a matter for experts.

3.1 Towards a first ontology of dynamics

As we already mentioned in section 2 of this paper, we differentiate between technical domain knowledge and operational knowledge. The same distinction has been made by Mizoguchi, Tijerino and Ikeda (1992), who represent expertise as domain knowledge and task knowledge. Mizoguchi et al. proposed a domain ontology including first principles, basic theories and a device model, and a task ontology including a model of problem solving, a generic vocabulary (task-dependent verbs and nouns) and generic tasks.

We take the view that a dynamic system evolves with time according to its own internal processes and external inputs/outputs from/to the environment. The following table presents a task ontology on dynamics according to Mizoguchi et al.'s distinction. This first ontology includes hierarchical descriptions of an agent, a human, and dynamics-related concepts (Table 1). The goal of these descriptions is to provide a vocabulary useful in task analysis as well as in the observation and analysis of users' activity. Human factors issues have guided its development.


Table 1. A first ontology of dynamics.

agent
  act
    interact
    control process
    follow procedures
    perform a task
    send a message
    wait (waiting loop)
  sense information
    perceive
    recognize situations
  learn
    adapt
    analogize situations
    generalise context
  use cognitive capabilities
    anticipate
    behave rationally
    control and command
      control continuous parameters
      set parameter values (set points)
    coordinate
    decide
    interrupt
    iterate (iteration loop)
    monitor signals
      maintain and monitor markers
      monitor and react to warning signals and alarms
      monitor trends
    postpone (retardability)
    reason quantitatively and qualitatively
    revise hypotheses
    schedule, plan
    supervise

human
  act
    talk (verbal channel)
    use gesture resources
  is a cognitive agent
    regulate activity
      cope with workload and time pressure, respond to cognitive demand
      supervise (supervisory control)
      switch from automatic mode to manual mode
  sense information
    feel force feedback
    hear (auditory system)
    watch (central vision)
    watch (peripheral vision)
  use cognitive capabilities
    maintain attention
      directed attention
      focus of attention
      attention switching
    maintain vigilance
    prioritize tasks dynamically
    use and improve skills

dynamics-related concepts
  verbs
    acknowledge reception
    delegate
    enter (into), exit, leave
    push, pull
    redo, undo
    release, engage
    start, stop, select
    stay (within), keep
    turn
  nouns
    action, force, energy
    events
      breakdown
      change, move
      feedback
      parallelism, corequisite
      sequentiality, causality, prerequisite, chronology, history
    precision, uncertainty
    reliability, safety, consequences
    situation
      abnormal situation
      actual state or situation
      context
      desired state or situation
      expected or unexpected situation
      perceived situation
      signal
      situation awareness
    time
      instantaneous
      phase
      required time, available time
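As a rough indication of how such a vocabulary could be exploited in task and activity analysis, the fragment below encodes a small part of Table 1 as a parent-children structure and retrieves a concept's ancestry. This encoding is ours, given purely for illustration.

    # A fragment of Table 1 as a nested dictionary: parent concept -> children.
    ONTOLOGY = {
        "agent": ["act", "sense information", "learn", "use cognitive capabilities"],
        "act": ["interact", "control process", "follow procedures", "perform a task"],
        "sense information": ["perceive", "recognize situations"],
        "use cognitive capabilities": ["anticipate", "decide", "monitor signals"],
        "monitor signals": ["monitor trends",
                            "monitor and react to warning signals and alarms"],
    }

    def ancestry(concept):
        """Return the chain of ancestors of a concept, for tagging observed activity."""
        for parent, children in ONTOLOGY.items():
            if concept in children:
                return ancestry(parent) + [parent]
        return []

    # Tagging an observed activity ("monitor trends") with its ontological ancestry:
    print(ancestry("monitor trends"))
    # -> ['agent', 'use cognitive capabilities', 'monitor signals']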

In order to illustrate these concepts, we will take three short examples. The first one describes dynamics as interaction with a dynamic machine. In the second example, we describe dynamics as coordination between people to solve problems in real-time. Finally, the third example introduces the concept of procedural knowledge as a real-world requirement for safe and reliable human-machine interaction.


Example of knowledge involved in driving a car. The kind of knowledge that is used in driving a car includes dynamics concepts. A scenario for using a car could be the following. First, you enter the car and sit down. You turn the ignition key and start the engine. Then, you look around for merging traffic. When the road is safe, you release the handbrake, engage first gear, turn the steering wheel and start rolling. When driving, you have to keep yourself aware of the road situation, e.g., obstacles, road conditions (wet, damaged surface, etc.), merging traffic, etc. Situation awareness is essential. You also have to control your car in order to keep it on the right side of the road. You have to trust the braking system as well as the power system, turn signals and fuel gauge. This continuous monitoring ensures safety and reliability in driving.

Coordination between team members to solve problems in real-time. Cooperative work is becoming a subject of primary importance, as the knowledge of several experts is often needed to solve current complex problems in real-time. People need to interact to solve these problems. In an aircraft cockpit, for instance, delegation is a key activity of pilots, who have to trust their partners in the accomplishment of required tasks. The more the cockpit becomes automated, the more artificial agents are available to pilots. As a result, pilots have to delegate tasks to these artificial agents. The more these new agents are used, the more the user's trust develops along with successes and failures. Modern airplanes include large numbers of such agents. The shift from "self delegation" (i.e., the crew member performs the task directly without delegating to anyone else) to delegation to another qualified human being, or to an artificial agent, is not obvious in practice. This is because the agent metaphor, or the magic of the human-machine interface (Tognazzini, 1993), has to hold its promises, i.e., the effects that are produced must match users' expectations. In particular, agents need to acknowledge the accomplishment of their actions. This is true whether the cooperating agent is a human or a machine. Capturing such knowledge is not trivial because it involves experimentation and a great deal of prerequisite domain knowledge.

Procedural knowledge. Procedural knowledge is often used when safety and reliability are issues (e.g., flying an airplane). Very little effort has been devoted to the acquisition of procedural knowledge. Boy (1989, 1991), Mathé (1990, 1991, 1992) and Saito et al. (1991) have developed methods to acquire procedural knowledge. In particular, Boy and Mathé developed a mediating representation called knowledge blocks to help the acquisition of this type of knowledge (see section 6.2 of this paper). Procedural knowledge is always unstable because it is constantly revised to improve the control of dynamic systems with respect to new experimental findings. The more dynamic systems are used, the better people know how to operate them and the better the procedures that can be developed. Operators need to have a great deal of confidence in the procedures they are using, otherwise they stop using them or, in the best case, annotate and modify them. Thus, the acquisition of procedural knowledge is necessarily a highly dynamic process itself.

3.2 The agent perspective

In dynamic systems, the issue of procedure management and maintenance is extremely important. Usually, procedures complement interface instruments and controls. They are used to provide human operators with more or less strict guidelines that help them operate these systems safely. Procedures can be more or less complicated, both in themselves and according to the transparency of the human-machine interface. Automation separates the operator from the real (usually mechanical) system. As a matter of fact, automation can be seen as a deeper user interface than conventional (surface) interfaces. From this perspective, the more the interface separates the operator from the real system, the more procedures are needed, either to learn how to operate new systems that do not have mechanical feedback (that is, to create appropriate cognitive automatisms), or to make sure that operations are executed with respect to specifications. Conversely, when the interface presents the right information, at the right time and in the right format, the operator tends to understand what is going on and acts appropriately.

When some procedures are directly implemented in the system and show up on the human-machine interface, we talk about artificial agents. Such agents are characters (Laurel, 1991) that have properties and behavior. They usually act on the system being controlled. They serve the operator as an assistant would. We say that the human operator delegates some tasks to these agents.

Interacting agents. Let us take the example of dynamic knowledge in fault diagnosis. In static domains, diagnosing a fault involves knowledge about the structural relations between components of the faulty system. In dynamic domains, diagnosing a fault is an activity in addition to the ongoing activity of controlling the faulty system. Let us say that an agent takes care of this activity (the diagnostic agent). In an airplane, for instance, when a fault occurs, pilots cannot stop the flying task to focus only on the diagnostic task. Let us say that the flying agent manages the flying (control) task. Furthermore, diagnosis and control are not independent activities: the fault disturbs the system being controlled; sometimes the system needs to be disturbed (tested) to find the causes of ambiguous symptoms; and some regular control actions on the system modify the course of the diagnosis activity. The two corresponding agents interact. In this case, the acquisition of the operational knowledge is very complex.

There are two ways of implementing this KA process: by construction and by observation. (1) Advocates of model-based reasoning would promote the construction of operational knowledge from technical knowledge on both control and diagnosis. However, this construction will hardly reproduce operators' expertise in dynamic fault management. (2) The solution of observing people diagnosing faults in dynamic environments is also problematic. Indeed, when a fault occurs, operators are usually overloaded (time pressure), they have multiple decisions to make, etc.

3.3 Perception of temporal progression

People perceive dynamics differently according to the feedback they are able to receive. There are rapid systems and slow systems. Rapid systems can be perceived as not "moving" at all if the focus of attention is on slower agent performance. Conversely, slow systems induce problems of vigilance. It is usually better to isolate agents and acquire knowledge separately to avoid such confusions. But once each individual dynamics is understood, it is necessary to better understand how people cope with several agents together that have different dynamics. Then, the operator's focus of attention is a crucial target for the knowledge engineer.


The perception of dynamics depends on the user's expertise. The more familiar the user is with an agent, the more he/she is able to anticipate its reactions quickly. Experts are always ahead of the dynamic systems they are controlling. Such expertise is compiled for efficiency purposes, and is therefore difficult to elicit. In the HORSES project, we developed a methodology to acquire such expertise by observation (Boy, 1986). Elicitation of the corresponding knowledge can be disturbed by workload problems. Indeed, time pressure and high workload tend to modify the regular activity. Thus, the corresponding factors have to be detected in order to correctly validate or contextualize the knowledge that is acquired.

"Psychologists often think that it is possible, in principle and in practice, to examine cognitive processes without concern with context, i.e. to neutralize the task so that performance reflects "pure process"... Evidence suggests that our ability to control and orchestrate cognitive skills is not an abstract context-free skill which may be easily transferred across widely diverse domains but consists rather of cognitive activity tied specifically to context... This is not to say that cognitive activities are completely specific to the episode to which they were originally learned and applied. In order to function, people must be able to generalize some aspects of knowledge and skills to new situations. Attention to the role of context removes the assumption of broad generality in cognitive activity across contexts and focuses instead on determining how generalization of knowledge and skills occurs. The person's interpretation of the context in any particular activity may be important in facilitating or blocking the application of skills developed in one context to a new one." (Rogoff, 1984).

The notion of context is essential in dynamic systems. The use of dynamic systems provides a very large number of situations. The awareness of temporal progression is context-sensitive. Context is usually related to other entities like situation, behavior, point of view, relationships among agents, discourse, dialogue, etc. Context can be defined in several ways. It can be a dynamic 'window' which shows the state of the environment including the user (e.g., his/her intentions, focus of attention, perceived state of the environment, etc.). In the Computer Integrated Documentation (CID) project, the notion of context has been used to tailor a documentation system to users' information requirements. It allows the system to narrow the domain and the search.

3.4 Dynamics and consequences

Whenever there is a breakdown in human adaptation during dynamic system management, human operators involve different cognitive capabilities according to the type of system they are controlling. Furthermore, the current trend is to go from energy-based systems to information-based systems. This means that whereas humans used to physically control systems, today they easily monitor information systems that mediate the interaction between them and the systems being controlled (there is at least no real cognitive overload in normal situations). The main problem comes from the fact that when a failure occurs, human operators now have to understand it. Thus, even if workload is extremely low during monitoring (sometimes to the point that vigilance is an important issue), it may increase exponentially during fault diagnosis. This fact is extremely important to consider with respect to knowledge acquisition in dynamic systems. In order to better master this issue, we now describe instances that represent three classes of dynamic systems.

Coffee maker, car and airplane. You can stop your coffee maker if there is any problem. In this case, you can stop and nothing dramatic will result from this act. If you stop your car for any reason, let us say to avoid a pedestrian crossing the road, you might cause another accident because the car following you did not anticipate this unpredictable stop. In this case, you can stop, but... If you are a pilot and you are facing a severe problem on board, you just cannot stop the airplane, otherwise you fall! In this case, you cannot stop at all.

Unconstrained dynamic systems such as the coffee maker can be stopped safely at any time, independently of the current evolution of the environment. They evolve with time in an open-loop fashion. In other words, you can anticipate the final conditions before stopping. Loosely-constrained dynamic systems such as the car can be stopped according to conditions set by the environment. They evolve with time in a closed loop with the evolution of their environment. In this case, human operators have to be physically in the loop. In other words, you cannot fully anticipate the final conditions before stopping. Strongly-constrained dynamic systems such as the airplane cannot be stopped at any time when in operation. They evolve in a closed-loop fashion. Besides this closed-loop evolution, physical control is becoming very remote from human operators. According to these distinctions, we claim that the perception of dynamics in each type of system is quite different.

This classification of dynamic systems is made with respect to the consequences, as perceived by a human operator, of stopping them safely. It is particularly interesting from a KA point of view. Knowledge involved in failure recovery depends on the type of dynamic system. In the first type, there is no uncertainty. Thus, the dynamic system can be operated without requiring attention. Even if the operator makes an error, the result in the manipulation of the corresponding tool will not cause any dramatic problems. In the second type, human activity requires attention during operation. Human operators have to compromise between choices. Knowledge about such compromises is very difficult to acquire. Activity variation around the task requirements is very difficult to predict off-line. Thus, observation methods must be used, and reporting systems are frequently implemented. In the third type, human activity demands continuous attention. The machine is a reactive agent. Cooperation between the human and the machine governs the entire stability of the overall system. The capture of this cooperation knowledge involved in the control task is extremely difficult for knowledge engineers who are novices in the expertise domain. Both self-training and frequent observation of real experts are always necessary.
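The three classes and their consequences for knowledge acquisition can be summarized in a small lookup, sketched below. The mapping follows our reading of the discussion above; all names are illustrative.

    from enum import Enum

    class DynamicSystemClass(Enum):
        UNCONSTRAINED = "can be stopped safely at any time (open loop)"
        LOOSELY_CONSTRAINED = "can be stopped under conditions set by the environment"
        STRONGLY_CONSTRAINED = "cannot be stopped while in operation"

    # Suggested KA emphasis per class (our summary of the paragraph above).
    KA_EMPHASIS = {
        DynamicSystemClass.UNCONSTRAINED:
            "off-line analysis; operation does not require attention",
        DynamicSystemClass.LOOSELY_CONSTRAINED:
            "observation methods and reporting systems",
        DynamicSystemClass.STRONGLY_CONSTRAINED:
            "self-training plus frequent observation of real experts",
    }

    print(KA_EMPHASIS[DynamicSystemClass.STRONGLY_CONSTRAINED])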

3.5 How do experts cope with dynamic systems?

Experts usually improvise according to the situation. They have a sense of the evolving situation. They anticipate what will happen next. They are never out of the loop, except when either their vigilance is too low or their workload is too high. The way this anticipation is performed is very important to acquire. Experts can be "ahead" or "behind" according to several factors including:


— stage in learning (proficiency);
— boredom or motivation;
— complacency;
— workload;
— fatigue; etc.

Operators' adaptation to the task and situation is tremendously important to understand in order to capture knowledge specific to systems belonging to the third class described above.

At this point, it is essential to note that tasks are the prescribed activities that operators must perform to reach their goals. Operators' activities are generally quite different from the tasks that are demanded. We therefore differentiate between task demand and operator activity in knowledge acquisition for dynamic systems. Humans "apparently" solve complex problems, but they do so by using good enough solutions (Rappaport, 1993). This is very true in dynamic environments. Most of the time, these solutions fit within safety boundaries.

4 Knowledge involved in dynamic systems

4.1 Rasmussen's Model

Knowledge involved in the control of dynamic systems has been characterized in terms of human operator behavior (Rasmussen, 1983). Rasmussen's model was developed to represent the performance of an operator in a process control situation. It provides a framework for studying human information processing mechanisms. According to our interpretation of Rasmussen's model, human beings work as hierarchical systems including two types of processors:

— a low-level processor, subconscious and highly parallel;
— a high-level processor, conscious and sequential.

The low-level processor corresponds to sensori-motor functions. It is highly dynamic because of its parallelism. The high-level processor is limited by the capacity of short-term memory. However, this (symbolic) information processing capacity allows the treatment of a large variety of problems with reasonable efficiency.

Rasmussen distinguishes between three levels of behavior of an operator interacting with a dynamic system (Figure 2):

— skill-based behavior;
— rule-based behavior;
— knowledge-based behavior.

The acquisition of both sensory-motor and cognitive skills results from long and intensive training. Skills allow rapid operations such as stimulus-response actions. At the rule-based behavior level, people manipulate specific plans, rules (know-how) or procedures (such as checklists). This level is operative. The knowledge level includes various mechanisms representing what we usually call intelligence. Current expert systems are situated at the rule-based level. The main reason is that it is difficult (and often impossible) to elicit compiled expert knowledge from the skill-based level. Knowledge at the middle level is easier to formalize and elicit from expert explanations. As a professor would usually do, the expert must de-compile his/her knowledge to explain the "why" and "how" of his/her own behavior. Results from such a de-compilation can easily be implemented in a declarative fashion. Usually the IF-THEN format is used to represent rules. However, the result of the de-compilation does not necessarily capture the expert's knowledge and behavior at the skill-based level.
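One crude way to picture this three-level organization is a dispatcher that first tries compiled stimulus-response pairs (skills), then IF-THEN rules or procedures, and only then falls back on knowledge-based reasoning from first principles. The sketch below is our schematic rendering under that assumption, not Rasmussen's formal model.

    def respond(stimulus, skills, rules, first_principles):
        # Skill-based level: compiled stimulus-response pairs, fast and subconscious.
        if stimulus in skills:
            return skills[stimulus]
        # Rule-based level: IF-THEN associations elicited from expert explanations.
        for condition, action in rules:
            if condition(stimulus):
                return action
        # Knowledge-based level: slow, sequential reasoning from first principles.
        return first_principles(stimulus)

    skills = {"stall warning": "push nose down"}
    rules = [(lambda s: "pressure" in s, "run the pressure checklist")]
    print(respond("cabin pressure low", skills, rules,
                  lambda s: "reason from a model of the system"))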

[Figure 2 components: sensors and effectors linked to the environment; skill, rule and knowledge levels; situation recognition, identification, decision making and planning; situation(s)/task(s), tasks and goal(s).]

Figure 2. Rasmussen's behavioral levels.

4.2 Knowledge acquisition and machine learning in dynamic systems

Rasmussen's model is used here to classify KA and machine learning (ML) methods that would be useful for knowledge acquisition in dynamic systems. From a strict KA point of view, our experience is that the intermediate (rule-based) level is the easiest to elicit. Researchers in ML usually distinguish three types of learning (Kodratoff, 1988):

— skill acquisition and speed-up learning;
— analogy and case-based learning;
— induction and empirical learning.

The first type is itself divided into two main categories: macro-operator construction (Korf, 1985) and explanation-based learning (DeJong, 1981). The second type concerns the generation of new chunks of knowledge from existing chunks (Gentner, 1989; Hammond, 1989). The third type can be split into four groups of methods: supervised learning of concepts (Mitchell, 1982), similarity-based learning (Quinlan, 1986), discovery (Lenat, 1977; Langley et al., 1987; Falkenhainer, 1990) and nonsupervised clustering (Fisher, 1987). In the next three subsections, we review some previous work illustrating these three types of learning.

Skill acquisition to increase performance. Although interview methods have been reported to be very efficient (LaFrance, 1986), it is extremely difficult to elicit skills in dynamic systems using such methods. Protocol analysis allows us to access this type of knowledge by re-construction. This knowledge is even difficult to derive from data acquired by observation without an already sufficient account of domain knowledge. Methods that facilitate the observation of the expert (or the user) at work allow the elicitation of situational patterns. We used this method during the HORSES1 project at NASA Ames to elicit diagnostic knowledge. In all cases, a model is very useful to interpret results from these methods. Situational knowledge can be constructed from analytical knowledge. This recompilation can be performed using speed-up learning methods. We have developed an algorithm that transforms analytical knowledge into situational knowledge for the SAOTS2 project with CNES3. This work has been reported in (Boy & Delail, 1988).
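The SAOTS algorithm itself is not reproduced here; the following toy sketch only illustrates the general idea of such a recompilation: forward-chain through analytical rules once, then store the initial situation directly with the final conclusions as a single compiled chunk. The rules and names are invented for the example.

    def compile_chunk(rules, situation):
        """Chain analytical rules, then map the situation directly to the conclusions."""
        facts = set(situation)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        # The compiled (situational) chunk skips the intermediate reasoning steps.
        return frozenset(situation), facts - set(situation)

    analytical_rules = [({"P1 low", "valve V3 open"}, "leak suspected"),
                        ({"leak suspected"}, "stop fuel transfer")]
    pattern, conclusions = compile_chunk(analytical_rules, {"P1 low", "valve V3 open"})
    print(sorted(pattern), "->", sorted(conclusions))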

Analogy and case-based learning. We worked with test pilots to build a first body of knowledge for use in the MESSAGE system4. Raw data verbalized by the pilots were very often small stories such as: "I was flying at 10 000 feet, the air traffic control gave me the clearance to prepare landing, my copilot...". Experts remember anecdotes. They tend to express their dynamic knowledge by telling stories. This is Roger Schank's point of view about knowledge expression by people in general. This type of knowledge has to be compared to knowledge that has already been acquired and, eventually, generalized. In our work, we used analogical methods manually (Boy, 1983). Analogy and case-based learning are essential knowledge elicitation methods in dynamic systems.

Induction and empirical learning. We developed an algorithm for dynamic empirical learning of indices in the Computer Integrated Documentation (CID) project at NASA. Context-sensitive indices are learned as CID is used. The main problem is that the more you learn, the more you generate knowledge that becomes difficult to retrieve. It is then necessary, for real-time reasons, to cluster the resulting knowledge base to improve its accessibility. Cobweb is certainly the best known and most used concept clustering algorithm (Fisher, 1987). We have proposed a similar approach to context clustering for CID (Boy, 1991b). Induction and empirical learning are essential when situated knowledge needs to be reduced by generalization. This is very important in dynamic or evolving systems.
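The CID mechanism itself is described in (Boy, 1991b); the fragment below merely illustrates what empirical learning of context-sensitive indices can look like, reinforcing an (index term, context, referent) association each time its use succeeds. All names are illustrative assumptions.

    from collections import defaultdict

    # (term, context, referent) -> reinforcement weight, updated through use.
    weights = defaultdict(int)

    def reinforce(term, context, referent, success):
        weights[(term, context, referent)] += 1 if success else -1

    def retrieve(term, context):
        """Return the referent with the highest weight for this term in this context."""
        candidates = {r: w for (t, c, r), w in weights.items()
                      if t == term and c == context}
        return max(candidates, key=candidates.get) if candidates else None

    reinforce("fuel pressure", "launch configuration", "ORS transfer section", True)
    print(retrieve("fuel pressure", "launch configuration"))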

1 Human Orbital-Refueling-System Expert System.
2 French acronym for "Système d'Assistance à l'Opérateur en Télémanipulation Spatiale" (Operator Assistant System in Space Telemanipulation).
3 Centre National d'Etudes Spatiales (French Space Center).
4 French acronym for "Modèle d'Equipage et des Sous-Systèmes Avion pour la Gestion des Equipements" (Crew and Aircraft Sub-Systems Model for Equipments Management).


5 The task-tool-user triangle

User-centered design (Billings, 1991) should take into account three main components, i.e., task(s), tool(s) and user(s), as well as the interactions between these components. The conventional engineering approach is centered on tool construction; tasks and users are generally considered implicitly. The task is usually represented using task analysis methods and/or by modeling the process that will be controlled by the tool to be designed. This modeling work involves conceptual or physical simulations that are typically performed using software programs. The results of such analyses give a set of requirements for the tool. The user is rarely taken into account explicitly at design time. A user model is incrementally built with respect to the current task, either by analogy with existing models or by specification of a syntax and a semantics. Task-tool interaction provides information requirements (from task to tool) and technological limitations (from tool to task). Task-user interaction can be analyzed through task analyses (from task to user) and user activity analyses (from user to task). Such analyses are incremental because the nature of the task can be modified by the tool. User-tool interaction is mediated through an interface that induces training requirements (from tool to user) and ergonomics modifications (from user to tool). This approach is called the task-tool-user triangle (Figure 3). The main problem is that the three components cannot be isolated; they are interdependent. The more dynamic a system is, the more difficult this interdependency is to tackle and understand. In the design of a dynamic system, such as an airplane, it is important to take human operators into account in the design loop (Boy, 1988).

[Figure 3 components: Task, Tool and User at the vertices of a triangle; information requirements and technological limitations between task and tool; task analysis and activity analysis between task and user; training and ergonomics between tool and user.]

Figure 3. The task-tool-user triangle

5.1 Situations and ready-to-use procedures

A way to handle this problem is to design an appropriate interface for the tool and operations procedures for the user who has to perform the task. If the user is able to perform the task with the tool without many procedures, this means that the interface is well designed. Conversely (and this is usually the case), if the user is stuck and needs help, then procedures are usually welcome. But this may mean that the interface has been badly designed. In general, user interfaces are designed with a body of procedures that go along with them. In aeronautics, for instance, procedures are designed to improve safety and reliability (legal issues). The main reason is that in highly dynamic systems, people do not have time (especially when overloaded) to fully reconstruct procedures from scratch; thus they must have already prepared and tested procedures appropriate to the current situation. The acquisition of such procedures is difficult and never completed. Good enough solutions are the best we can do. Procedures are incrementally revised.

5.2 The procedures-interface duality

Each time a procedure is well understood and "fully" tested, it can be integrated into the interface as an agent that will perform it automatically. This integration assumes that the results generated by the application of the procedure are well identified and easily understandable by the user. There is an interesting approach that covers this practice of procedure integration into the interface, known under the name of "programming by demonstration" (Cypher et al., 1993). Once habits have been experienced for a fair amount of time in manipulating dynamic systems, traces of procedures being followed can be stored and reused as interface agents. In dynamic systems, the difficulty comes from the fact that interface agent performance and results are extremely important issues. For instance, it is important to know how much time you need to wait for an answer from an interface agent.

In the IHMI paradigm, operational knowledge essentially includes procedures. However, these procedures have to be used in "the" appropriate situation. For this reason, a situation pattern should ideally be attached to each procedure. In practice, it is very difficult to acquire such situation patterns. Previous contributions showed that situation patterns can be constructed by direct experimentation on dynamic systems such as space fault diagnosis (Boy, 1987) and telerobotics (Boy & Caminel, 1989). Expertise in the control of dynamic systems lies not only in the way procedures are constructed; it lies also, and foremost, in the way procedures are executed at the right time and in the right format. In this sense, context-sensitivity is indexical. The main problem is to index procedures so as to retrieve them in the appropriate situation. We claim that interface agents should provide appropriate clues for users to retrieve the most relevant procedure.
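A minimal sketch of such indexical retrieval follows, assuming each procedure carries a situation pattern expressed as a predicate over the currently perceived situation. The procedures and fields shown are invented for illustration.

    procedures = [
        {"name": "engine fire checklist",
         "pattern": lambda s: s.get("fire warning") and s.get("phase") == "cruise",
         "steps": ["throttle to idle", "fuel shutoff", "pull fire handle"]},
        {"name": "depressurization checklist",
         "pattern": lambda s: s.get("cabin altitude", 0) > 10000,
         "steps": ["don oxygen masks", "initiate emergency descent"]},
    ]

    def retrieve_procedures(situation):
        """Return the procedures whose situation patterns match the current situation."""
        return [p for p in procedures if p["pattern"](situation)]

    matched = retrieve_procedures({"fire warning": True, "phase": "cruise"})
    print([p["name"] for p in matched])  # ['engine fire checklist']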

6 Logicism and Situatedness: How can they go together?

Logicism has dominated the artificial intelligence part of computer science for a long time. In this view, knowledge has to be declarative, the procedural part is implemented separately to run declarative knowledge. The fact that libraries are full of books written in declarative form is not enough to justify this approach. It takes a long time for readers to understand very specialized books unless they are already trained in the corresponding domain. Furthermore, declarative repositories of knowledge of how to use dynamic systems are not useful if users cannot access the right information at the right time and in the right format.

6.1 The Situation Recognition and Analytical Reasoning Model


Using dynamic systems is a situated activity. Other researchers have already provided representations and logical formalisms of time. Figarol (1989) proposes a classification of time management by a human being into two cognitive processes: dynamic diagnosis and planning (or constant re-planning). We proposed a model for dynamic diagnosis after an experiment carried out at NASA in space fault management (i.e., the HORSES experiment reported by Boy, 1986): the situation recognition and analytical reasoning (SRAR) model (Figure 4). This model has been applied to various dynamic situations and problems.

[Figure 4 components: situation patterns (s1, s2, ..., sn for the beginner; S1, S2, ..., SN for the expert) associated with analytical knowledge chunks (A1, A2, ..., An for the beginner; a1, a2, ..., aN for the expert).]

Figure 4. The situation recognition / analytical reasoning model. Beginners have small, static and crisp situation patterns associated with large analytical knowledge chunks. Experts have larger, dynamic and fuzzy situation patterns associated with small analytical knowledge chunks. The number of beginner chunks is much smaller than the number of expert chunks.

When a situation is recognized, it generally suggests how to solve an associated problem. We assume, and have experimentally confirmed in specific tasks such as fault identification (Boy, 1986, 1987), telerobotics (Boy and Mathé, 1989) and information retrieval (Boy, 1991b), that people use chunks of knowledge. It seems reasonable to envisage that situation patterns (i.e. situational knowledge) are compiled because they are the result of training. We have shown (Boy, 1987), in a particular case of fault diagnosis on a physical system, that the situational knowledge of an expert results mainly from the compilation, over time, of the analytical knowledge he/she relied on as a beginner. This situational knowledge is the essence of expertise. "Decompilation", i.e. explanation of the intrinsic basic knowledge in each situation pattern, is a very difficult task, and is sometimes impossible. Such knowledge can be elicited only by an incremental observation process. Analytical knowledge can be decomposed into two types: procedures or know–how, and theoretical knowledge.

The chunks of knowledge are very different between beginners and experts. The situation patterns of beginners are simple, precise and static, e.g. “The pressure P1 is less than 50 psia”. Subsequent analytical reasoning is generally substantial and time-consuming. When a beginner uses an operation manual to make a diagnosis, his behavior is based on the precompiled engineering logic he has previously learned. In contrast, when he tries to solve the problem directly, the approach is very declarative and uses the first principles of the domain. Beginner subjects were observed to develop, with practice, a personal procedural logic (operator logic), either from the precompiled engineering logic or from a direct problem-solving approach. This process is called knowledge compilation. Conversely, the situation patterns of experts are sophisticated, fuzzy and dynamic, e.g. “During fuel transfer, one of the fuel pressures is close to the isothermal limit and this pressure is decreasing”. This situation pattern includes many implicit variables defined in another context, e.g. “during fuel transfer” means “in launch configuration, valves V1 and V2 closed, and V3, V4, V7 open”. Also, “a fuel pressure” is a more general statement than “the pressure P1”. The statement “isothermal limit” includes a dynamic mathematical model, i.e. at each instant, actual values of fuel pressure are compared fuzzily (“close to”) to a time-varying limit P_isoth = f(Quantity, Time). Moreover, experts take this situation pattern into account only if “the pressure is decreasing”, which is another dynamic and fuzzy pattern. It is obvious that experts have transferred part of their analytical reasoning into situation patterns. This part seems to be concerned with dynamic aspects.
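To make this expert pattern concrete, the sketch below evaluates “one of the fuel pressures is close to the isothermal limit and decreasing” with a triangular membership function for “close to” and a simple trend test for “decreasing”. The membership shape, tolerance and limit model are assumptions made for the example, not values from the ORS experiment.

    def close_to(value, limit, tolerance=5.0):
        """Triangular membership: 1.0 at the limit, 0.0 beyond +/- tolerance."""
        return max(0.0, 1.0 - abs(value - limit) / tolerance)

    def decreasing(history):
        """Crisp trend test over the recent pressure history."""
        return all(a > b for a, b in zip(history, history[1:]))

    def isothermal_limit(quantity, time):
        # Stand-in for the time-varying model P_isoth = f(Quantity, Time).
        return 50.0 + 0.01 * quantity - 0.01 * time

    history = [48.0, 47.2, 46.5]   # recent values of one fuel pressure (psia)
    limit = isothermal_limit(quantity=200.0, time=500.0)
    degree = close_to(history[-1], limit) if decreasing(history) else 0.0
    print("pattern match degree:", round(degree, 2))  # 0.9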

Thus, with learning, dynamic models are introduced into situation patterns. It is also clear that experts detect broader sets of situations. First, experts seem to fuzzify and generalize their patterns. Second, they have been observed to build patterns more related to the task than to the functional logic of the system. Third, during the analytical phase, they disturb the system being controlled to get more familiar situation patterns which are usually static: for example, in the ORS experiment, pilots were observed to stop fuel transfer after recognizing a critical situation.

The following generalizations can be drawn from the HORSES experiments. First, by analyzing the human–machine interactions in the simulated system, it was possible to design a display that presented more polysemic information to the expert (e.g. a monitor showing the relevant isothermal bands). Polysemic displays include several types of related information presented simultaneously and are readily understandable to experts because the presentation is derived from their situation patterns. This improved user and system performance. Second, the HORSES assistant achieved a balance in the sharing of autonomy. The original system designer did not anticipate the way that the operators would use the system, but letting them have indirect control over the assistant allowed them to utilize what they had learned to do well.

SRAR can be used to design interface agents that would help human operators anticipate the evolution of dynamic systems. It is intended to help find the best compromise in the design of interfaces and procedures. This model can be compared to other work such as Amalberti's schemas (Amalberti, 1988). Abbott introduces the possibility of multi-hypothesis management in aircraft cockpits (Abbott, 1989).

6.2 The Block Representation

The SRAR model stressed the need for the development of an appropriate representation of operation procedures. Some of them can be implemented as interface agents when sufficient situational knowledge has been derived. Procedures can be represented as knowledge blocks (Boy, 1989; Mathé, 1990). Blocks have been used in the modeling of cognitive reactive control in space telemanipulation (Boy & Caminel, 1989; Mathé, 1990). Mathé developed an extended formalism for the block representation in her doctoral thesis. The inference mechanism associated with the block formalism is independent of the content of the procedure base. A block includes: a name; a hierarchical level; a list of preconditions; a list of actions; a list of goals with their lists of associated blocks; and a list of abnormal conditions with their lists of associated blocks (Mathé & Kedar, 1992). A block is graphically represented as in Figure 5. Figure 6 shows a block as a society of other blocks (context hierarchical level).
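A minimal rendering of this block structure as a Python dataclass is given below; the field names follow the enumeration above, while the encoding itself is our own sketch.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Block:
        name: str
        level: int                                    # hierarchical (context) level
        preconditions: List[Callable[[dict], bool]]   # matched against sensory inputs
        actions: List[str]
        goals: Dict[str, List["Block"]] = field(default_factory=dict)     # goal -> associated blocks
        abnormal: Dict[str, List["Block"]] = field(default_factory=dict)  # abnormal condition -> blocks

        def matches(self, situation: dict) -> bool:
            return all(p(situation) for p in self.preconditions)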

[Figure 5 components: preconditions, procedure (actions), goal, abnormal conditions, and context.]

Figure 5. Graphical representation of a knowledge block.

The behavior of a society of blocks is based on the inference mechanism associated with the block representation. First, sensory inputs from the environment need to match the preconditions of the blocks selected in the focus of attention (these are usually called current expectations). Second, the most critical of the best-matched expectations is selected. Human selection has been observed to be frequency-based (Reason, 1986); Reason talks about frequency gambling. Depending on the type of the selected expectation, the corresponding block(s) is (are) ready for execution. If there is a contextual hierarchy of blocks, the actions of the terminal blocks are executed according to a given strategy (usually a sequence).
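A schematic version of one cycle of this mechanism, reusing the Block sketch above and using a simple frequency count as a stand-in for Reason's frequency gambling (the criticality and frequency weights are our assumptions):

    def select_and_fire(focus, situation, frequency, criticality):
        """One inference cycle over the blocks in the focus of attention."""
        # 1. Match sensory inputs against the preconditions of the current expectations.
        matched = [b for b in focus if b.matches(situation)]
        if not matched:
            return None
        # 2. Select the most critical of the best-matched expectations, breaking ties
        #    by how often each block has been selected before (frequency gambling).
        chosen = max(matched, key=lambda b: (criticality.get(b.name, 0),
                                             frequency.get(b.name, 0)))
        frequency[chosen.name] = frequency.get(chosen.name, 0) + 1
        # 3. Execute the actions of the selected (terminal) block in sequence.
        for action in chosen.actions:
            print("executing:", action)
        return chosen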

The block representation has been successfully used in two very different applications: telerobotics assistance (Boy & Caminel, 1989; Mathé, 1990; Mathé & Kedar, 1992), and computer integrated documentation (Boy, 1989, 1991). In the former, blocks were used to construct procedures to help telemanipulation operators. In the latter, blocks have been implemented as contextual links to acquire users' preferences when they actually use the CID system.


Figure 6. A block as a society of other blocks.

7 Conclusion and Perspectives

This paper reports some of the work in the area of modeling and KA in dynamic systems. Researchers such as McDermott (1982) proposed the notion of linearity for past events, and the notion of branching chronicles for possible future events. Dynamics deals with several notions such as causality, sequentiality, possible futures, expectation, anticipation and intervals. Our contribution is not based on an axiomatic approach to the dynamics concept. It is based on experience in the management of dynamic systems such as aerospace systems. We have tried to elicit a first set of features that can be reused in future KA work in such domains.

Dynamics is perceived differently by different people according to their skills and expertise. This has many implications for the design of knowledge-based assistants. In particular, dynamic knowledge would be difficult to acquire from experts, whose situation patterns would differ from one another. However, when one understands how these patterns were built, this understanding is a major input to training courses.

Designers of dynamic systems and associated assistant systems, and cognitive scientists, have already started to create workable elicitation techniques ranging from interviews to field observation. Our contribution is in the design of an appropriate mediating representation that helps these people better acquire dynamic knowledge. One of the main issues is certainly the representation of context. Context deals with time and hypothetical features. It is often associated with the notion of point of view. Context includes the concept of persistence, i.e., when a parameter stays constant for a period of time it becomes part of the context. A better grasp of the concept of context will improve knowledge acquisition in dynamic systems, because it will allow simplification of the huge amounts of knowledge that it would otherwise be necessary to acquire. In any case, understanding how experts' patterns were built can provide a major input to training.

There is a need for further development of the ontology of dynamics that we initiated in this paper. Some work should be devoted to testing such an ontology in real-world applications. The real world is dynamic by nature. Its complexity is perceived differently according to the observation tools that we have. The block representation has been very successful to date in representing real-world procedures. The use of this representation in a broader range of applications will certainly contribute to eliciting better concepts about dynamic systems. This would be extremely useful in future designs and operations.

Acknowledgements

Many ideas described here benefited from discussions with Philippa Gander, Nathalie Mathé, Erik Hollnagel, Marc Pelegrin, Jeff Bradshaw and Alain Rappaport. I would like to thank Nathalie Nanard and Helen Wilson for their comments on an early draft of this paper.

References

Abbott, K.H. (1989). Human-centered automation and AI: Ideas, insights, and issues from the Intelligent Cockpit Aids research effort. Proceedings of the IJCAI-89 Workshop on Integrated Human-Machine Intelligence in Aerospace Systems, Detroit, Michigan, U.S.A., August.

Allen, J.F. (1985). Maintaining knowledge about temporal intervals. In R.J. Brachman & H.J. Levesque (Eds.), Readings in Knowledge Representation. Morgan Kaufmann Publishers.

Amalberti, R. (1988). Savoir-faire de l'opérateur: théorie et pratique. XXIVème Congrès de la SELF.

Billings, C.E. (1991). Human-centered aircraft automation philosophy. Technical Memorandum 103885, NASA Ames Research Center, Moffett Field, CA.

Boy, G.A. (1987). Operator Assistant Systems. Int. J. Man-Machine Studies, 27, pp. 541-554.

Boy, G.A. (1989). The Block representation in knowledge acquisition for computer integrated documentation. Proceedings of the Fourth AAAI-Sponsored Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, October 1-6.

Boy, G.A. (1991a). Intelligent Assistant Systems. Academic Press, London, U.K.

Boy, G.A. (1991b). Computer Integrated Documentation. NASA Technical Memorandum, NASA Ames Research Center, Moffett Field, CA.


Boy, G.A. & Caminel, T. (1989). Situation pattern acquisition improves the control of complex dynamic systems. Third European Workshop on Knowledge Acquisition for Knowledge-Based Systems, Paris, July.

Boy, G.A. & Delail, M. (1988). Knowledge Acquisition by Specialization-Structuring: A Space Telemanipulation Application. AAAI-88, Workshop on Integration of Knowledge Acquisition and Performance Systems, St Paul, Minnesota, USA.

Boy, G.A., & Gruber, T. (1990). Intelligent Assistant Systems: Support for Integrated Human-Machine Systems. Proceedings of the AAAI Spring Symposium on Knowledge-Based Human Computer Communication, Stanford, March 27-29.

Boy, G.A. & Nuss, N. (1988). Knowledge acquisition by observation: application to intelligent tutoring systems. Proceedings of the Second European Workshop on Knowledge Acquisition for Knowledge-Based Systems, Bonn, Germany.

Cypher, A. (1993). Watch What I Do: Programming by Demonstration. The MIT Press, Cambridge, MA.

DeJong, G. (1981). Generalization based on explanation. Proceedings of IJCAI-81, pp. 67-69.

Drummond, M. (1989). Situated Control Rules. Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, Toronto, May.

Falkenhainer, B.C. (1990). A unified approach to explanation and theory formation. In J. Shrager & P. Langley (Eds.), Computational models of scientific discovery and theory formation. Morgan Kaufmann, San Mateo.

Figarol, S. (1989). Airline pilot's anticipatory knowledge. Master's thesis, Université Toulouse-Le Mirail, France (in French).

Fisher, D. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, pp. 139-172.

Gentner, D. (1989). Mechanisms of analogical learning. In S. Vosniadou & A. Ortony (Eds.) Similarity and analogical reasoning. Cambridge University Press, London.

Georgeff, M.P. & Ingrand, F.F. (1989). Decision making in an embedded reasoning system. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, pp. 972-978.

Hollnagel, E. (1993). Requirements for dynamic modelling of man-machine interaction. CRI paper. Nuclear Engineering and Design. Denmark.

Hutchins, E. (1991). How a cockpit remembers its speed. Technical report, University of California at San Diego, Distributed Cognition Laboratory.

Korf, R.E. (1985). Learning to Solve Problems by Searching for Macro-Operators. Research Notes in Artificial Intelligence, Pitman, Boston.

LaFrance, M. (1989). The quality of expertise: Implications of Expert-Novice differences for knowledge acquisition. SIGART Newsletter, April, pp. 8-14.

Langley, P., Simon, H.A. & Bradshaw, G.L. (1987). Heuristics for empirical discovery. In L. Bolc (Ed.), Computational models of learning. Springer-Verlag, Berlin.

Laurel, B. (1991). Computers as Theatre: A dramatic theory of interactive experience. Addison-Wesley, Reading, Massachusetts.

Lenat, D.B. (1977). The ubiquity of discovery. Artificial Intelligence, 9, pp. 257-285.

Leplat, J. (1985). The elicitation of expert knowledge. NATO Workshop on Intelligent Decision Support in Process Environments, Rome, Italy, September.

Mathé, N. (1990). Intelligent Assistance for Process Control: Application to Space Teleoperation. PhD dissertation, ENSAE, Toulouse, France.

Mathé, N. & Kedar, S. (1992). Increasingly Automated Procedure Acquisition in Dynamic Systems. Proceedings of the Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, October. Also available as NASA Technical Report FIA-92-23, June.

McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive Science, 6, pp. 101-155.

Mitchell, T.M. (1982). Generalization as search. Artificial Intelligence, 18, pp. 203-226.

Mizoguchi, R., Tijerino, Y. & Ikeda, M. (1992). Task Ontology and its Use in a Task Analysis Interview System. Proceedings of the Second Japanese Knowledge Acquisition for Knowledge-Based Systems Workshop, JKAW'92, Kobe, Japan.

Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1(1), pp. 81-106.

Rappaport, A. (1993). Invariants, Context and Expertise in the Knowledge Milieu. Third International Workshop on Human and Machine Cognition, Seaside, Florida, May 13-15.

Rasmussen, J. (1983). Skills, rules, and knowledge: Signals, signs, and symbols and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13, pp. 257-266.

Reason, J. (1986). Decision aids: prostheses or tools? In E. Hollnagel, G. Mancini & D.D. Woods (Eds.), Cognitive Engineering in Complex Worlds, pp. 7-14. Academic Press, London.

Rogoff, B. (1984). Introduction: Thinking and learning in social context. In B. Rogoff & J. Lave (Eds.), Everyday Cognition: Its Development in Social Context. Harvard University Press, Cambridge, MA.

Shalin, V. & Boy, G.A. (1989). Integrated Human-Machine Intelligence. IJCAI'89, Detroit, MI.

Shalin, V., Geddes, N., Bertram, D., Szczepkowski & DuBois, D. (1993). Expertise in Dynamic, Physical Task Domains. Third International Workshop on Human and Machine Cognition, Seaside, Florida, May 13-15.

Sheridan, T.B. (1984). Supervisory control of remote manipulators, vehicles and dynamic processes: Experiments in command and display aiding. Advances in Man-Machine Systems Research, Vol. 1, pp. 49-137. JAI Press.

Suchman, L.A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.

Tognazzini, B. (1993). Principles, Techniques, and Ethics of Stage Magic and Their Potential Application to Human Interface Design. Proceedings of INTERCHI'93, ACM Press, New York. Conference held in Amsterdam, The Netherlands.

Wielinga, B., Van de Velde, W., Schreiber, G. & Akkermans, H. (1992). The CommonKADS Framework for Knowledge Modelling. Proceedings of the Seventh Knowledge Acquisition for Knowledge-Based Systems AAAI Workshop, Banff, Canada, October.

Woods D.D. & Hollnagel E., (1986). Mapping cognitive demands and activities in complex problem solving worlds. Proceedings of the Knowledge Acquisition for Knowledge-Based Systems AAAI Workshop, Banff, Canada, November.