Toward Assessing the Impact of TADMUS Decision Support System and Training on Team Decision Making

Joan H. Johnston, Ph.D., and Carol Paris, Ph.D.
Naval Air Warfare Center Training Systems Division
Code 4961, 12350 Research Parkway
Orlando, FL 32826-3275

C.A.P. Smith, Ph.D.
Space and Naval Warfare Systems Center
RDT&E Division, Code D44210
San Diego, CA 92152-5001

Abstract

This paper reports on the collaborative research plans for the last thrust of the Tactical Decision Making Under Stress program sponsored by the Office of Naval Research. Findings from the research will be used to develop principles and guidelines for combining training with decision support, and applications to advanced technology development research.

1 Introduction

In its first seven years, the Tactical Decision Making Under Stress (TADMUS) research program, sponsored by the Office of Naval Research, was highly successful in demonstrating the separate impact of training and decision support on combat team performance (Cannon-Bowers & Salas, 1998). The last two years of the TADMUS program have focused on team research assessing the combined impact of training and decision support. This paper is an update on the research plans, progress, and lessons learned in the collaborative effort among Navy labs (Naval Air Warfare Center Training Systems Division and Space and Naval Warfare Systems Center), academia (San Diego State University), industry (Sonalysts, Inc.), and fleet end users in developing the plans for the final research thrust.

2 Background

Previous research has shown that the TADMUS training interventions and the DSS improve performance under stress (Cannon-Bowers & Salas, 1998; Kelly et al., 1996). In the case of TADMUS training, some experiments were conducted with individuals, but most were conducted with three- and five-person teams. The DSS research thrust evaluated the impact of the DSS on the performance of the Commanding Officer (CO) and Tactical Action Officer (TAO) dyad. A major task for this last experiment was to develop and conduct research in which the two major TADMUS interventions (DSS and training) are linked together in a final demonstration of effectiveness for tactical decision making teams. The teams represent the ship's air defense combat team, which is composed of the CO, the TAO, the Air Warfare Coordinator (AAWC), the Tactical Information Coordinator (TIC), the Identification Supervisor (IDS), and the Electronic Warfare Supervisor (EWS). The main hypothesis for the experiment is that teams provided the TADMUS DSS and training will perform significantly better than the control teams. It is expected that, compared to the control group, trained teams using the DSS will: a) be more timely and accurate in the detect-to-engage (DTE) sequence (i.e., team decision making), b) report more accurate threat priorities, c) use more effective teamwork processes (information exchange, supporting behavior, initiative/leadership, and communication), d) perceive less workload and mental stress, and e) perceive greater utility of the DSS for making decisions.

3 Approach

The next section outlines the experimental approach, including details on the design, participants, task, research conditions, measures, and procedure.

3.1 Design and Participants

The experimental design is between groups with multiple combat scenarios as post-tests. Eight six-person teams performing on the Decision Making Evaluation Facility for Tactical Teams (DEFTT) simulator (control group) will be compared with eight six-person teams provided DEFTT, the DSS, and training (experimental condition). In addition, in 1998, five such teams participated in a DSS-only condition, without the training component, in order to refine and validate the experimental protocol. Team members are experienced Navy department head students at the Surface Warfare Officer's School (SWOS). The research protocol for the current effort is based on over six years of team research at SWOS and other combat team training facilities (Johnston, Koster, Black, & Seals, 1998; Johnston, Poirier, & Smith-Jentsch, 1998; Kelly et al., 1996).

3.2 The Task and Research Conditions

The six-person Combat Information Center (CIC) Air Warfare (AW) teams (TAO, CO, AAWC, TIC, IDS, and EWS) perform the detect-to-engage (DTE) sequence in four 30-minute Persian Gulf AW scenarios. The scenarios are event-based with time-tags in order to track specific expected team behaviors throughout (Oser et al., 1998). The four teams in the control group condition perform their watchstation task assignments on a PC-based combat training system that simulates shipboard CIC AW displays (DEFTT) (see Johnston, Koster, Black, & Seals, 1998, and Johnston, Poirier, & Smith-Jentsch, 1998, for details). In the DSS condition, the TAO and CO were each assigned to a DSS, while the remaining team members worked on the same DEFTT watchstations as the control group. Headsets are worn to support verbal communications among team members and with role players who read a script. The DEFTT system and the DSS run in tandem with each other, but the software does not communicate. Any minor discrepancies in the scenarios depicted on each system are handled by role players. All research participants have had at least 48 hours of DEFTT experience prior to the experiment.
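The event-based, time-tagged scenario design described above lends itself to a simple machine-readable script. The sketch below is illustrative only; the event names, roles, and times are invented and do not come from the actual TADMUS scenarios.

```python
from dataclasses import dataclass

@dataclass
class ScenarioEvent:
    time_tag: float         # seconds into the 30-minute scenario
    expected_behavior: str  # behavior observers watch for
    role: str               # watchstation responsible for the action

# Hypothetical script fragment (not actual TADMUS scenario content).
SCRIPT = [
    ScenarioEvent(120.0, "report new air contact", "IDS"),
    ScenarioEvent(300.0, "issue level 1 query", "AAWC"),
    ScenarioEvent(480.0, "recommend cover order", "TAO"),
]

def events_due(script, elapsed_seconds, window=30.0):
    """Return scripted events whose time-tag falls within the
    observation window ending at elapsed_seconds."""
    return [e for e in script
            if elapsed_seconds - window <= e.time_tag <= elapsed_seconds]
```

An observer console could call `events_due` as the scenario clock advances to prompt raters for the behaviors expected at that moment.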

3.2.1 Control Condition: DEFTT-Only

In the DEFTT-only condition the six team members use DEFTT to perform their individual task assignments. The TAO and CO interact with the AEGIS Display Screen (ADS) simulation. The TIC, IDS, AAWC, and EWS each have a Command and Decision display. In addition, the EWS has a simulated SLQ-32 display. All information is unclassified. The research protocol for this condition was based on the typical combat training simulation strategy the department head students receive during their six-month curriculum at SWOS.

3.2.2 Experimental Condition: Combined DSS and Training

Decision Support System. In the experimental condition the TAO and CO are assigned the DSS (see Morrison et al., 1998, for details). Prior to conducting scenario runs, the TAO and CO receive a 45-minute computer-based tutorial that describes display functions and allows point-and-click practice. For scenario runs the DSS operates as a standalone system, but is synchronized to run in tandem with DEFTT. The same DEFTT displays are assigned to the other team members as in the control condition. Any minor discrepancies in the scenarios depicted on each system are handled by role players. The San Diego State University eye tracking system is used to track TAO scan patterns on the DSS. Pilot testing of the DSS with DEFTT and eye tracking was conducted with five teams in 1998 in order to refine the research protocol and timing of procedures.

Training Strategy. Conducting all of the TADMUS training strategies together would take approximately 12½ hours (see Cannon-Bowers & Salas, 1998), which was neither feasible nor necessary for demonstrating their combined effectiveness in the current experiment. Therefore, the training strategy for the current effort was based on a need to integrate the most effective training innovations into an efficient program that could be implemented within the limited research protocol established at SWOS. Many lessons learned during previous training experiments informed the decisions made to refine and develop the current training. Training priorities were established based on a literature review of TADMUS training interventions (McCarthy et al., 1998). Three main competencies were documented based on the learning objectives identified in the TADMUS literature: a) decision making (35 learning objectives), b) teamwork (31 learning objectives), and c) stress management (11 learning objectives). By way of prioritizing training content, decision making and teamwork skills were chosen as the primary competencies to train.

Secondly, TADMUS training strategies were reviewed in terms of the types of learning methods that were used (e.g., lecture/videotape, computer-based training (CBT), scenario-based training, behavior modeling and demonstration, performance feedback). In most cases, a combination of these strategies was used. Therefore, a two-step approach for training the decision skills and teamwork skills competencies was taken. First, multimedia-based training (CBT and videotape) was chosen as an efficient and effective way of teaching declarative knowledge at the individual level. Two training modules were developed based on this approach: Decision Making Skills Training and Team Dimensional Training (TDT). The Decision Making Skills Training adapted training developed from critical thinking research (Cohen et al., 1998), as well as other research on naturalistic decision making and training (Zsambok & Klein, 1997). TDT was developed and validated under previous TADMUS research, and later refined under a separate 6.3 research program for shipboard instructor training and support (Smith-Jentsch et al., 1998).

Scenario-based team training (including behavioral modeling, performance feedback, and team self-correction) was developed based on the results of research on critical thinking training, team leader training, team adaptation and coordination training, and team self-correction training. These strategies have demonstrated that individuals can apply and practice previously learned declarative knowledge in a team environment (Cohen et al., 1998; Serfaty & Entin, 1998; Smith-Jentsch et al., 1996; Tannenbaum et al., 1998). Next is a detailed description of each of the modules.

Decision Making Skills Training (duration: 2½ hours). Students participate in a two-and-one-half-hour CBT for decision making skills. The interactive computer-based curriculum comprises seven modules using multiple CIC vignettes that have events instantiated with workload and ambiguity, team requirements, and decision requirements typical of the CIC. Vignettes are presented in which students are asked to apply the knowledge they are developing throughout the CBT. The CBT enables students to understand and develop a common language on decision making that they can transfer to the scenario-based team training environment. Testing is embedded in the training to ensure students meet minimum knowledge requirements. At the end of training, students are provided a booklet that serves as a quick-reference study guide for decision making skills.

Team Dimensional Training (duration: 30 minutes). Students receive a workshop comprising a 10-minute videotape on teamwork skills and teamwork dimensions and 20 minutes of instructor-facilitated training, with the support of CBT, to learn and practice identifying specific CIC teamwork behaviors. Students receive a TDT booklet to use during the facilitated training and as a quick-reference study guide for teamwork skills.

Scenario-Based Team Training Using DSS (duration: 2½ hours). Immediately following the TDT workshop, students assemble in the DEFTT laboratory to begin scenario-based team training. The objective of this training is to enable students to conduct structured scenario pre-briefings and debriefings (via team self-correction). The training is designed to enable members to identify and implement effective teamwork process behaviors and decision making skills during scenarios. First, the team conducts a scenario situation update for a TADMUS scenario and then proceeds to their respective watch stations. During the scenario run, observers record good and poor examples of teamwork behaviors on the TDT Debriefing Guide. In addition, observers evaluate and record student performance on the DTE sequence for the three critical events in the scenario using a Scenario Event Summary Sheet. Following the scenario run, the team is provided the Scenario Event Summary Sheet for performance feedback.

Next, an instructor/facilitator leads the team through a scenario recap and discussion of their DTE performance using the DSS as an aid to illustrate scenario ground truth and discuss decision making strategies. For example, Rules of Engagement call for verifying that an inbound aircraft is clear of its territorial airspace before a warning is issued. To make matters even more complicated, in littoral environments like the Persian Gulf, a potentially hostile aircraft will often be within its weapons release range of ownship before leaving its territorial airspace. Situations such as these create substantial time pressure and are often mishandled by relatively inexperienced students. The DSS permits the participants to replay and pause the scenarios at these crucial junctures. This feature facilitates discussions that help the team plan how they will handle similar situations in the future. In addition, the DSS debrief provides cross training by familiarizing all team members with the information the TAO and CO have available to them on the DSS during a scenario. Cross training research has shown that team communications improve when members know what information needs to be passed to other team members (Blickensderfer et al., 1998).

Following the DTE debriefing, the last step in the process is for the instructor to lead the team in a structured debrief using the TDT Debriefing Guide. The instructor guides the students in a team self-correction process for identifying teamwork performance improvements.

The second training scenario run allows the team to take responsibility for practicing and demonstrating the structured pre-briefing and debriefing strategy. First, the team member designated as the CO conducts the structured pre-briefing using teamwork performance objectives identified in the previous debrief. Next, the team performs a second TADMUS scenario run. During the scenario run, observers record good and poor examples of teamwork behaviors on the TDT Debriefing Guide. In addition, observers evaluate and record DTE performance on the Scenario Event Summary Sheet. Following the scenario run, the team is provided the Scenario Event Summary Sheet as feedback. With minimal facilitation from the instructor, the designated CO leads the team through a scenario recap and discussion of their DTE performance. The CO directs the instructor to manipulate the scenario recap on the DSS. Then, the CO leads the team in a structured debrief using a blank TDT Debriefing Guide. The team members have to recall specific instances where their performance was good and poor, and engage in team self-correction to identify teamwork performance improvements. Following these activities, teams receive feedback from observers and instructors on their debriefing performance.

3.3 Measures

3.3.1 Air Defense Warfare Team Performance Index (ATPI)


The ATPI provides a set of scores on team behaviors related to the DTE sequence (Paris et al., 1998). It provides diagnostic information with respect to team-level decision making based on Marshall's (1995) theory and research on decision schema (identification, elaboration, planning, and execution). A paper-based tool is used by SMEs to record whether or not, and when, team members performed event-based actions required by a scenario. Video and audio tapes enable transcription of communications for further analyses.

A major challenge in this research effort has been achieving adequate rater agreement on the ATPI (Johnston et al., 1997). Therefore, significant effort has been directed at tool redesign and rater training. To improve the tool's sensitivity, it was modified to incorporate a measurement strategy similar to the one Marshall et al. (1998) used in their research on CIC teams. Marshall's research showed that certain team DTE behaviors (e.g., level 1 queries, warnings, covering, and engagement) should be evaluated with respect to both planning and execution. Marshall determined that team members communicate their planning, but communication of their execution of actions may be delayed, which could have an impact on overall team effectiveness. This is an important issue for performance diagnosis and subsequent remedial training. Team members should gain knowledge during debriefs on how their behaviors had an impact on the final outcome (to engage or not engage).

Secondly, rater training was improved. First, raters were brought to consensus on how they would evaluate each stage of the DTE process. Then, they developed concise and clear details for the rating criteria. Last, a rater training notebook was developed as a quick-reference guide so that raters could refresh their memory prior to evaluating transcripts of team communications.
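The paper does not state which agreement statistic is used for the ATPI; one standard index for two raters scoring the same set of events is Cohen's kappa, which corrects percent agreement for chance. The sketch below uses hypothetical "hit"/"miss" labels.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of categorical ratings."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring six DTE events as performed ("hit") or missed.
a = ["hit", "hit", "miss", "hit", "miss", "hit"]
b = ["hit", "hit", "miss", "miss", "miss", "hit"]
kappa = cohens_kappa(a, b)  # 2/3: substantial but imperfect agreement
```

Tracking kappa across training iterations would show whether the revised rating criteria and notebook actually improved agreement.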

3.3.2 TAO/CO Threat Prioritization

Periodic situation updates are requested by an observer during each scenario run to obtain TAO/CO threat priorities and threat levels. Comparison of updates between the control and experimental conditions will be made in order to identify the impact of the DSS and training on threat priorities.
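The paper does not specify how reported threat priorities will be scored; one plausible index is a rank correlation between the team's ordering and an SME ground-truth ordering. The track numbers below are invented for illustration.

```python
def spearman_rho(ranking_a, ranking_b):
    """Spearman rank correlation between two priority orderings of the
    same set of track IDs (assumes no ties)."""
    n = len(ranking_a)
    pos_b = {track: i for i, track in enumerate(ranking_b)}
    d_squared = sum((i - pos_b[t]) ** 2 for i, t in enumerate(ranking_a))
    return 1 - 6 * d_squared / (n * (n * n - 1))

sme_priorities = ["7001", "7014", "7003", "7020"]   # ground-truth ordering
team_priorities = ["7001", "7003", "7014", "7020"]  # reported by TAO/CO
rho = spearman_rho(sme_priorities, team_priorities)  # 0.8
```

A rho near 1.0 would indicate the team's reported priorities closely match the SME ordering; condition means could then be compared directly.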

3.3.3 Air Defense Warfare Team Observation Measure (ATOM)

ATOM provides a set of scores on four dimensions of teamwork behavior: supporting behavior, leadership/initiative, information exchange, and communications (Johnston et al., 1997). A paper-based tool is used by trained raters to record teamwork performance. Video and audio tapes enable transcription of communications for further analyses. Comparisons will be made between the control and experimental conditions to determine differences in teamwork. In addition, team behaviors in the DSS-only condition will be compared to the other two conditions to determine the impact of the DSS on teamwork behaviors.

3.3.4 NASA TLX

Perceived workload and fatigue will be measured with the NASA TLX (Likert-scale version) to determine perceived stress differences among conditions and across scenarios.
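A Likert-scale TLX is commonly scored without the pairwise weighting step, i.e., as the unweighted mean of the six subscales ("raw TLX"); whether TADMUS applied weighting is not stated, so this sketch assumes the raw form, and the response values are invented.

```python
TLX_SCALES = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Overall workload score: unweighted mean of the six subscale ratings."""
    return sum(ratings[scale] for scale in TLX_SCALES) / len(TLX_SCALES)

# Invented response on a 1-7 Likert version of the instrument.
response = {"mental": 6, "physical": 2, "temporal": 5,
            "performance": 3, "effort": 5, "frustration": 3}
overall = raw_tlx(response)  # 4.0
```

Per-scenario overall scores per team member would then feed the condition-by-scenario comparison described above.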


3.3.5 Briefing Protocol Analyses

As a manipulation check, team communications during scenario briefings for the control and experimental groups will be analyzed to determine the impact of training and the DSS on briefing strategies.

3.3.6 Eyetracking Metrics

Using the SMI EyeLink system, the eye movements of the student assigned to the role of TAO are tracked throughout DSS training and DSS scenario runs. Pupil dilation and point-of-gaze data are collected for both left and right eyes, sampling at a rate of 250 Hz. The point-of-gaze data consist of the horizontal and vertical coordinates giving the pixel location for each observation. Using the procedures developed by Sandra Marshall and her research group, the pupil dilation data are analyzed using wavelet theory to produce estimates of cognitive workload. These estimates show how workload changes during the course of the scenario. The point-of-gaze data show the usage of the different parts of the DSS. The overall percent of time spent in each region for each scenario is tabulated, as well as the use of the regions during several minutes that have been identified as critical points in the scenario. For the critical events, the cognitive workload will be correlated with the eye movements to determine when and where the highest levels of mental effort occur.
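The percent-of-time-per-region tabulation can be sketched directly from the raw point-of-gaze samples. The region names and pixel rectangles below are invented for illustration and are not the actual DSS layout.

```python
# Display regions as (x0, y0, x1, y1) pixel rectangles (illustrative only).
REGIONS = {
    "track_summary": (0, 0, 640, 400),
    "response_manager": (0, 400, 640, 768),
    "geo_display": (640, 0, 1024, 768),
}

def region_of(x, y):
    """Map one gaze sample to a named region, or 'off_display'."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "off_display"

def dwell_percentages(samples):
    """samples: iterable of (x, y) gaze coordinates (e.g., at 250 Hz).
    Returns the percent of samples falling in each region."""
    counts = {}
    total = 0
    for x, y in samples:
        name = region_of(x, y)
        counts[name] = counts.get(name, 0) + 1
        total += 1
    return {name: 100.0 * c / total for name, c in counts.items()}

gaze = [(100, 100), (100, 500), (700, 300), (700, 310)]
pcts = dwell_percentages(gaze)  # geo_display 50%, the other two 25% each
```

Restricting `samples` to the time-tagged critical minutes gives the per-event region usage that is correlated with the workload estimates.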

3.4 Procedure

3.4.1 Control Condition

The DEFTT-only condition involves DH student participation for a single day. At the beginning of the day, informed consent forms are completed and team members respond to a demographics questionnaire in order to determine level of expertise. Team members with the most CIC expertise are assigned as TAO and CO. Following selection, team members are trained on their respective watch stations. A training administrator provides an introduction to CIC watch station responsibilities and DEFTT functions. Next, training scenarios are used to familiarize DEFTT operators with system functions, operations, and team interactions. Information packets are provided to members that develop a context and rationale for the research scenarios (e.g., environmentals, geopolitical, situation update, ROE, threat matrix, etc.). Following DEFTT familiarization training on two training scenarios, team members participate in four posttest scenarios that are counterbalanced across teams. At the end of each scenario session, team members fill out the NASA TLX and are provided with the Scenario Event Summary Sheet to use for review in their debrief. At the end of the day, students are provided feedback on performance after the last scenario as a way to ensure they received training value for their efforts.

3.4.2 Experimental Condition

The experimental condition involves DH student participation for two days. The first day requires students to participate in the CBT for decision skills. The second day is similar to the control condition protocol with the addition of the DSS, TDT, and scenario-based team training. While four students learn DEFTT functions, the TAO and CO students participate in a 45-minute DSS CBT tutorial. Eye tracking calibration takes place during DSS practice. Next, students receive the TDT training, followed by the scenario-based team training using the same two training scenarios as in the control condition. Then, team members participate in the four posttests. At the end of each scenario session, team members fill out the NASA TLX and are provided with the Scenario Event Summary Sheet and the TDT Debriefing Guide to use for review in their debrief along with the DSS. At the end of the day, students are provided feedback on performance after the last scenario as a way to ensure they received training value for their efforts.

3.5 Progress and Summary


In this paper we have described the research protocol and plans for conducting the final experimental thrust of the TADMUS program: a demonstration of the combined effect of the TADMUS DSS and training on tactical team performance. To date, four teams have completed the experimental condition and four teams have completed the control condition. The remaining four teams in the experimental and control conditions will participate in June 1999 and September 1999, respectively. Including the DSS-only condition, the total number of participants to date is 78. Data analyses are under way. Products from this effort will include:

♦ Principles and guidelines for integrating training and decision support systems,

♦ Design of diagnostic tools for evaluating team tactical decision making performance,

♦ Development and conduct of rater training, with concise documentation of rater guidelines,

♦ Computer-based training for decision skills and team performance,

♦ Development and conduct of scenario-based training,

♦ Application of gaze patterns to identify effective decision support strategies,

♦ Use of the DSS display capabilities for training and debriefing team decision making, and

♦ Assessment of the efficacy of eyetracking technology for measuring human cognition and performance.

3.6 Transition Plans

A number of significant transitions of TADMUS research methods are currently in progress. Specifically, the Science and Technology Reduced Manning Initiative, sponsored by the Office of Naval Research, is conducting a demonstration comparing fully manned shipboard AW CIC teams to teams performing on a newly developed CIC console that will require fewer AW team members, the Multi-Modal Watchstation (Campbell et al., 1997). It is a joint program of academia, industry, and four DoD labs: Naval Air Warfare Center Training Systems Division, Space and Naval Warfare Systems Center, Naval Surface Warfare Center, and Naval Undersea Warfare Center. The research program includes basic, applied, and advanced technology development initiatives. The following transitions are direct applications from the TADMUS program: 1) a modified ATPI to be used as an online/real-time measurement tool, 2) the ATOM, which will be used in its current form, 3) rater training for the ATPI, 4) event-based scenario development with two levels of stress, 5) Decision Support System display components, and 6) other TADMUS research protocol components (i.e., research team preparation, prebriefing materials, role player scripts, etc.).

In addition, the ATPI is being developed as an electronic performance support tool to aid the reduced manning researchers with real-time team performance evaluations. The paper-based device will be modified and ported to a handheld PC to improve the reliability and ease of ratings. Data input, data reduction, and data presentation interfaces will be developed to speed the turnaround of data analyses.

References

Blickensderfer, E., Cannon-Bowers, J.A., & Salas, E. (1998). Cross training and team performance. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 299-311). Washington, DC: APA.

Cannon-Bowers, J.A., & Salas, E. (Eds.) (1998). Making decisions under stress: Implications for individuals and teams. Washington, DC: APA.

Campbell, G.E., Cannon-Bowers, J.A., & Villalonga, J. (1997). Achieving training effectiveness and system affordability through the application of human performance modeling [CD-ROM]. Proceedings of the 19th Interservice/Industry Training, Simulation, and Education Conference, Orlando, FL, 763-770.

Cohen, M.S., Freeman, J.T., & Thompson, B. (1998). Critical thinking skills in tactical decision making: A model and training strategy. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 155-189). Washington, DC: APA.

Johnston, J.H., Koster, A., Black, E., & Seals, M. (1998). A learning methods approach to designing team training simulations: Lessons learned from the Tactical Advanced Simulated Warfare Integrated Trainer (TASWIT, Device S14A13). Proceedings of the Command and Control Research and Technology Symposium, Monterey, CA, 721-730.

Johnston, J.H., Poirier, J., & Smith-Jentsch, K.A. (1998). Decision making under stress: Creating a research methodology. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 39-59). Washington, DC: APA.

Johnston, J.H., Smith-Jentsch, K.A., & Cannon-Bowers, J.A. (1997). Performance measurement tools for enhancing team decision making. In M.T. Brannick, E. Salas, & C. Prince (Eds.), Assessment and management of team performance: Theory, research, and applications (pp. 311-330). Hillsdale, NJ: LEA.

Kelly, R.T., Hutchins, S.G., & Morrison, J.G. (1996). Decision process and team communications with a decision support system. Proceedings of the Command and Control Research and Technology Symposium, Monterey, CA, 186-203.

Marshall, S.P. (1995). Schemas in problem-solving.London: Cambridge University Press.

Marshall, S.P., Wilson, D.M., Knust, S.R., Smith, M.E., & Garcia, R.J. (1998). Shared decision making knowledge in tactical situations. Proceedings of the Command and Control Research and Technology Symposium, Monterey, CA, 693-699.

McCarthy, J.E., Johnston, J.H., & Paris, C. (1998). Toward development of a tactical decision making under stress integrated trainer [CD-ROM]. Proceedings of the 1998 Interservice/Industry Training, Simulation, and Education Conference, Orlando, FL, 818-904.

Morrison, J.G., Kelly, R.T., Moore, R.A., & Hutchins, S.G. (1998). Implications of decision-making research for decision support and displays. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 375-406). Washington, DC: APA.

Oser, R.L., Gualtieri, J.W., & Cannon-Bowers, J.A. (1998). Enhancing training systems design: Implementing an event-based approach. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, Santa Monica, CA, 1309-1313.

Paris, C., Johnston, J.H., & Reeves, D. (1998). A theoretical framework and measurement strategy for training team tactical decision making. Proceedings of the Command and Control Research and Technology Symposium, Monterey, CA, 700-708.

Serfaty, D., Entin, E.E., & Johnston, J.H. (1998). Team adaptation and coordination training. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 221-245). Washington, DC: APA.

Smith-Jentsch, K.A., Payne, S.C., & Johnston, J.H. (1996). Guided team self-correction: A methodology for enhancing experiential team training. In K.A. Smith-Jentsch (Chair), When, how, and why does practice make perfect? Paper presented at the 11th annual conference of the Society for Industrial and Organizational Psychology, San Diego, CA.

Smith-Jentsch, K.A., Zeisig, R.L., Acton, B., & McPherson, J.A. (1998). Team dimensional training. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 271-297). Washington, DC: APA.

Tannenbaum, S.I., Smith-Jentsch, K.A., & Behson, S.J. (1998). Training team leaders to facilitate team learning and performance. In J.A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individuals and teams (pp. 247-270). Washington, DC: APA.

Zsambok, C.E., & Klein, G. (Eds.) (1997). Naturalistic decision making. Mahwah, NJ: LEA.