
THE INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY, 21(3), 254–268
ISSN: 1050-8414 print / 1532-7108 online
DOI: 10.1080/10508414.2011.582448

Distributed Mission Operations Within-Simulator Training Effectiveness

Brian T. Schreiber (1), Mark Schroeder (1), and Winston Bennett, Jr. (2)

(1) Lumir Research Institute, Grayslake, Illinois
(2) Air Force Research Laboratory, Mesa, Arizona

This study examined the effectiveness of distributed mission operations (DMO) training using objective and subjective measures. DMO consists of multiplayer networked environments that facilitate the training of higher order individual and team-oriented combat skills. Objective measures included performance assessments, and subjective measures included performance ratings by subject matter experts and pilot perceptions of DMO utility. Results indicated that DMO training improved pilot performance, most notably in the reduction of the number of enemy strikers reaching their target and the number of F-16 mortalities. Considerations of real-world reductions in loss of life and expenditures are discussed.

In recent years a shift has begun within the U.S. Air Force (USAF). The Air Force has been reducing its number of flying hours available for training. It is augmenting its frequency-based training system, known as the Ready Aircrew Program (RAP; U.S. Department of the Air Force: Flying Operations, 2002), with distributed mission operations (DMO). The goal is to maintain proficiency on mission essential competencies (MECs; Colegrove & Alliger, 2002). DMO has been employed as a means of providing training for many of these essential experiences and skills in a cost-effective manner, but the effectiveness of DMO has not been rigorously assessed. The focus of the work presented here is to provide a robust and rigorous representation of the effectiveness of using the DMO environment for training.

This article not subject to U.S. copyright law.
Correspondence should be sent to Brian T. Schreiber, Lumir Research Institute, 195 Bluff Ave., Grayslake, IL 60030. E-mail: [email protected]


RISE OF DMO ENVIRONMENTS

The Air Force has attempted to better identify the experiences and skills necessary for mission success. The MECs defined in Colegrove and Alliger's (2002) manuscript provide a basis for developing specific skills and experiences to build overall proficiency, which is philosophically and functionally quite different from a frequency-based program.

The competency-based simulation training program discussed herein employs a building block procedure in which specific skills are developed through exercises designed to target those skills first in isolation and then in union with other skills (Symons, France, Bell, & Bennett, 2006). For example, initial scenarios target a limited set of MEC experiences (e.g., 1:1 force ratio) and MEC skills (e.g., interprets sensor output). Subsequent scenarios promote further training by introducing additional experiences and skills, to further develop expertise.

For many pilots, these training exercises represent their first exposure to some of the MEC experiences, and repeated exposures serve to increase recognition and discrimination of trigger events, thereby improving pilot performance. This type of deliberate knowledge building distinguishes structured, principled competency-based training from mere mission rehearsal.

Whereas several MECs can be reasonably developed in a single live aircraft (e.g., supersonic employment), others require a substantial number of aircraft (e.g., 1:3+ force ratio). The cost associated with the number and variety of live jets over multiple engagements can escalate quickly, and the use of a networked simulator environment—specifically the DMO environment—has been considered to be a more cost-effective alternative. A few current examples of DMO facilities include the F-16 mission training center at Shaw Air Force Base; the Air Force Research Laboratory in Mesa, Arizona; the F-22 mission training center at Langley, Virginia; and the F-15 mission training centers at Langley and Lakenheath Air Force bases. The primary focus of this study is to validate the effectiveness of such networked training facilities.

DMO TRAINING

DMO training is generally defined as events that can bring multiple warfighters together to train complex individual or team tasks during the course of larger scale, realistic combat missions (Chapman & Colegrove, in press). Although simulators have been around for decades, DMO simulation training environments are relatively new. Until the late 1990s, warfighters received training on complex tactical missions almost exclusively during infrequent larger scale "live" range exercises. Technological advances have made virtual environments more and more realistic and flexible. In contrast to stand-alone simulators of the past that served primarily to train individuals for emergency procedures or routine tasks, DMO's networked environments enable frequent training of higher order individual and team-oriented skills. This allows warfighters to come together as a team to train against manned or simulated adversaries (Callander, 1999; Chapin, 2004), providing opportunities to gain battle-like experiences not frequently gained outside of war.

A survey of 94 F-15 air combat pilots revealed that higher order experiential areas (e.g., multibogey, reaction to surface-to-air missiles, etc.) received less than adequate training in their current unit (Houck, Thomas, & Bell, 1991). For each identified training area, F-15 pilots reported that higher order training was better conducted using a simulator, which translates into a potential advantage for DMO training environments.

Simulator training efficacy studies to date have utilized opinion, rater, and objective data to assess progress on simpler tasks representative of a small portion of a mission (e.g., manual bomb delivery, one-vs.-one air combat), and all studies have found simulator training beneficial (e.g., Gray, Chun, Warner, & Eubanks, 1981; Gray & Fuller, 1977; Jenkins, 1982; Kellogg, Prather, & Castore, 1980; Lintern, Sheppard, Parker, Yates, & Nolan, 1989; McGuinness, Bouwman, & Puig, 1982; Wiekhorst & Killion, 1986). For reviews, we refer the reader to Bell and Waag (1998) and Waag (1991). Some multiplayer simulation research suggests that networked environments enhance individual and team skills for a variety of different aircraft and supporting forces (Berger & Crane, 1993; Crane, Schiflett, & Oser, 2000; Houck et al., 1991; Huddlestone, Harris, & Tinsworth, 1999). Finally, DMO-specific research (Gehr, Schreiber, & Bennett, 2004) has given indications of DMO within-simulator training effectiveness, where F-16 pilots demonstrated substantial improvements in mission outcomes.

ASSESSING TRAINING EFFECTIVENESS

Because testing and training typically occur in a virtual environment, there might be limitations on how DMO results extend to the real world; other forms of assessment are required to make findings more robust. Useful frameworks for training effectiveness have been provided by Kirkpatrick (1975) and Bell and Waag (1998). Kirkpatrick's model identified four levels of evaluation—trainee perceptions, measured evaluations of learning, observed performance, and impact. Bell and Waag's framework varied slightly, consisting of warfighter opinions, instructor or expert rater observations, objective data from the simulator, and comparable "real-world" transfer tasks.

Common to both frameworks is a thorough evaluation necessitating multiple sources of data converging on the same outcomes. This study adopted a methodology closer to that of Bell and Waag (1998) by employing three major types of measurements: objective data to quantify effectiveness by measuring improvements in outcomes and skill proficiency; expert observation data to provide expert assessment of competency; and user opinion data to capture opinions on the usefulness of the training system, its pros and cons, and which tasks are best suited for the system.1

This study extends the existing literature in three ways. First, many previous studies rely heavily on subjective ratings, which have measurement issues including the potential vested interest and bias on the part of raters, a lack of sensitivity, and demonstrated inconsistency in tracking simple statistics without error (Krusmark, Schreiber, & Bennett, 2004). Second, the preponderance of research that employs objective measures does so for relatively simple tasks. Finally, the studies that use objective measures of complex performance rely on small samples. This study extends this body of research by using a multifaceted approach of measuring complex performance objectively and subjectively using the largest data set of DMO performances known to exist.

GENERAL METHOD

Participants

From January 1, 2002, to October 22, 2004, 76 fighter pilot teams consisting of 384 pilots and Airborne Warning and Control System (AWACS) operators participated. To be included in the study, operational F-16 squadrons volunteered for vacant DMO training research weeks. Because analyses were conducted for nonorthogonal groups of participants, descriptive statistics are provided with each set of measures.

Measures

Objective outcome and process/skill measures. Participants were a sample of 53 teams (272 pilots). All but 3 were male, with a mean age of 33.1 years, a mean military service of 10.8 years, and a mean number of hours in an F-16 of 1,016. Outcome measures were recorded during benchmark missions on Monday and Friday and included number of enemy strikers reaching base, closest distance achieved by strikers, number of F-16 mortalities, and number of enemy striker and fighter mortalities. Process and skill measures included weapons employment metrics, weapons engagement zone management metrics, wingman formation metrics, and communication use. Data were collected electronically at a rate of five data points per second for more than 3,000 missions, and were aggregated to a meaningful scale when necessary.
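To illustrate the kind of aggregation described above, the sketch below rolls 5 Hz telemetry up to per-mission outcome metrics. It is a minimal illustration only; the file name and column names are assumptions for this sketch, not the study's actual data schema.

    import pandas as pd

    # Hypothetical telemetry log: one row per 200-ms sample (5 Hz) per entity.
    # Columns "mission_id", "entity_id", "entity_type", "alive", and
    # "dist_to_base_m" are invented for illustration.
    log = pd.read_csv("benchmark_telemetry.csv")

    def mission_outcomes(mission: pd.DataFrame) -> pd.Series:
        f16s = mission[mission["entity_type"] == "f16"]
        strikers = mission[mission["entity_type"] == "enemy_striker"]
        return pd.Series({
            # F-16s flagged dead in any sample during the mission
            "f16_mortalities": f16s.loc[~f16s["alive"], "entity_id"].nunique(),
            # closest approach of any enemy striker to the defended base
            "closest_striker_dist_m": strikers["dist_to_base_m"].min(),
            # strikers whose closest approach reached the base (distance of zero)
            "strikers_reaching_base": int(
                (strikers.groupby("entity_id")["dist_to_base_m"].min() <= 0).sum()
            ),
        })

    # One row of outcome metrics per benchmark mission
    per_mission = log.groupby("mission_id").apply(mission_outcomes)
    print(per_mission.head())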

1 Additional analyses were conducted but not reported due to space restrictions. Please contact authors for results.


Expert ratings. Participants included 146 men and 2 women with a mean age of 32.8 years, a mean number of years of military service of 10.4, and a mean number of hours in an F-16 of 905.7. Ratings consisted of subject matter expert (SME) performance ratings of these 148 pilot participants from 37 teams. Two types of ratings were collected: SME ratings provided in real time while pilots were flying their missions, and "blind" ratings completed using recorded benchmarks without SME knowledge of team or pre- and posttraining benchmark condition.

Pilot perceptions and opinions. Participants included 327 pilots and 49 AWACS participants. All but 2 of the 327 pilots were male, with a mean age of 33.0 years and a mean number of hours in an F-16 of 1,039. AWACS demographic information was available for 45 of the 49 participants. All but 3 of those 45 controllers were male, with a mean age of 30.4 years. Participant opinion data were collected via surveys during the familiarization session, at the end of the training week, or both. Surveys included DMO feedback forms consisting of six open-ended questions, DMO reaction ratings in which pilots rated on a scale of 1 (strongly disagree) to 4 (strongly agree) the extent to which they agreed with a statement, and ratings of the extent to which MEC experiences could be gained in eight different training environments. Only pilots who completed all 5 days of training were included in analyses.

DMO training facility. The DMO facility employed an Automated Threat Engagement System, an instructor operator station, a briefing and debriefing facility, four high-fidelity F-16 simulators, and one high-fidelity AWACS simulator. The F-16s, AWACS, and threat entities interoperated according to Distributed Interactive Simulation (DIS) standards version 4.02 or version 6.0. See Schreiber and Bennett (2006) for a more complete description of the facility.

The Mesa DMO site used in this study underwent upgrades to its simulation systems during data collection, so the DMO environment was not constant for all participants. Although changing the apparatus during a scientific study typically threatens the validity of its conclusions, we viewed these changes as highly desirable: every fielded DMO environment, as a system of integrated technologies, is routinely upgraded, so the changes serve to strengthen the external validity of the study.

Procedure

On arrival, participants were briefed on objectives and procedures of DMO and the simulators. Table 1 provides a general timeline for each team.

TABLE 1
Participant General Timeline

AM (Session 1)
  Monday: Inbrief, admin, pilot brief, fly familiarization, pilot debrief
  Tuesday: Pilot brief, fly 4–8 engagements, pilot debrief
  Wednesday: Pilot brief, fly 4–8 engagements, pilot debrief
  Thursday: Pilot brief, fly 4–8 engagements, pilot debrief
  Friday: Pilot brief, fly 3 benchmarks, pilot debrief, feedback survey, reaction survey

PM (Session 2)
  Monday: Pilot brief, fly 3 benchmarks, pilot debrief
  Tuesday: Pilot brief, fly 4–8 engagements, pilot debrief
  Wednesday: Pilot brief, fly 4–8 engagements, pilot debrief
  Thursday: Pilot brief, fly 4–8 engagements, pilot debrief

Pilots participated in one of four similar syllabi, each consisting of nine 3.5-hr sessions. Each session entailed a 1-hr briefing, 1 hr of flying multiple engagements of the same mission genre, and 1.5 hr of debriefing. All missions were four F-16s versus X number of threats, and complexity of missions increased each day. Trigger events elicited the targeted skills for training during that training session. Briefings focused the pilots' attention on aspects of the mission that would be most relevant, and debriefings utilized multiple screens to dissect missions and highlight strengths and weaknesses, thereby facilitating cognitive growth and skill development.

Pilot performance on benchmark scenarios was recorded on Monday prior to training sessions and on Friday after all training had been completed. Benchmark sessions consisted of flying three point defense engagements. All benchmark point defense scenarios pitted four F-16s and their AWACS controller against eight threats (six hostiles and two strikers; see Figure 1 for an illustrative example). All benchmarks were designed to be equally complex according to a complexity scoring scheme outlined by Denning, Bennett, and Crane (2002). Strict data collection protocols governed all benchmarks to maintain a realistic combat environment. Benchmarks terminated under one of the following conditions: all F-16s dead, all air adversaries dead, enemy strikers reached their target, or 13 min elapsed time. The primary goal for the point defense benchmark was to prevent enemy strikers or bombers from reaching the base, with success defined as striker denial or kill.
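For concreteness, the termination rule described above can be expressed as a simple predicate. This is an illustrative sketch only; the names and structure are invented and do not reflect the facility's actual scenario-control software.

    from dataclasses import dataclass

    # Illustrative encoding of the benchmark stop rule described in the text.
    @dataclass
    class BenchmarkState:
        f16s_alive: int
        adversaries_alive: int
        strikers_reached_target: bool
        elapsed_s: float

    def benchmark_terminated(state: BenchmarkState) -> bool:
        return (
            state.f16s_alive == 0              # all F-16s dead
            or state.adversaries_alive == 0    # all air adversaries dead
            or state.strikers_reached_target   # enemy strikers reached their target
            or state.elapsed_s >= 13 * 60      # 13 minutes elapsed
        )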

Participants completed DMO reaction rating forms and self-report feedback forms at the conclusion of all sessions. The feedback forms contained open-ended questions asking if participants believed that their objectives had been met and what factors facilitated or hindered their performance.


FIGURE 1 Example mirror-image point defense benchmark scenarios used during pre- and posttraining.

RESULTS

Objective Outcome and Process/Skill Measures

The means of each team's pretraining and posttraining performance were computed to produce session averages. Results of t test analyses showed significant improvement in mission outcome measures and on many of the skill and process measures. Results for outcomes and skill metrics of interest are summarized in Table 2. All significant effects were in the expected direction (i.e., improved performance from Monday to Friday).

TABLE 2
Summary Results for Objective Metrics

Variable Name                                            Change Monday–Friday (%)    p
"Top Gun" scoring scheme (a)                             +314.21%                    <.01
No. of enemy strikers reaching target                    −58.33%                     <.01
Closest distance achieved in #1                          +38.10%                     <.04
No. of Viper mortalities                                 −54.77%                     <.01
No. of enemy strikers killed (before reaching base)      +75.26%                     <.01
No. of enemy aircraft killed                             +9.20%                      <.01
Proportion of Viper AMRAAMs resulting in a kill          +6.82%                      <.03
Proportion of Threat Alamos resulting in a kill          −51.60%                     <.01
Avg. time allowing hostiles into MAR (sec)               −55.20%                     <.01
Avg. time allowing hostiles into N-pole (sec)            −60.33%                     <.01
Slant range at AMRAAM pickle                             +10.31%                     <.01
Mach at AMRAAM pickle                                    +5.28%                      <.01
Altitude at AMRAAM pickle                                +7.97%                      <.01
Loft angle at AMRAAM pickle                              +14.80%                     <.01
G-loading at AMRAAM pickle                               ns                          ns
Detonation range (hits and misses)                       +8.12%                      <.01

Note. AMRAAM = Advanced Medium Range Air-to-Air Missile; MAR = Minimum Abort Range.
(a) Composite score of fratricides, strikers killed before or after target, and hostile fighter and F-16 mortalities.
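As a sketch of the team-level comparison behind Table 2, the snippet below computes a paired t test and a Monday-to-Friday percent change for one metric. The values are made up for illustration; they are not the study's data.

    from scipy import stats

    # Hypothetical per-team session averages for one outcome metric
    # (e.g., enemy strikers reaching target), pretraining vs. posttraining.
    monday = [2.1, 1.4, 0.9, 1.8, 1.2]
    friday = [0.7, 0.5, 0.4, 1.1, 0.3]

    t_stat, p_value = stats.ttest_rel(monday, friday)

    monday_mean = sum(monday) / len(monday)
    friday_mean = sum(friday) / len(friday)
    pct_change = 100 * (friday_mean - monday_mean) / monday_mean

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, change = {pct_change:+.1f}%")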

Expert Observer Ratings

Analyses of average pretraining and posttraining performance were performed separately for real-time and blind ratings. For real-time ratings, a total of 57 performance constructs were rated on a scale from 0 to 4. Average construct ratings ranged from 0.81 to 1.97 (M = 1.36) on Monday to 2.10 to 2.93 (M = 2.51) on Friday. Because the number of engagements was different from the number of briefs and debriefs, separate analyses were conducted for these training areas. A within-subject analysis of variance (ANOVA) showed that SMEs rated participants significantly higher on Friday's brief and debrief (M = 2.76) than on Monday's brief and debrief (M = 1.75), F(1, 28) = 97.22, p < .001. Across all engagement "flying" constructs, grade sheet scores were significantly higher on Friday's benchmarks (M = 2.40) compared to Monday's benchmarks (M = 1.20), F(1, 47) = 150.86, p < .001. Follow-up t tests revealed that for all 57 real-time rated constructs, Friday's score was significantly higher than Monday's (p < .001 for all). Summary results are provided in Table 3.

For blind ratings, there were 36 matched Monday–Friday benchmarks. Average ratings for a given construct ranged from 1.33 to 2.86 (M = 1.85) on Monday to 1.92 to 2.91 (M = 2.33) on Friday. Within-subject differences in benchmark ratings were statistically significant, F(1, 35) = 14.588, p = .001. These ratings appeared robust, given that raters did not know which day's benchmark they were watching. Follow-up t tests revealed that of the 32 constructs rated under the "blind" condition, 27 showed significant improvement (p < .05), 2 approached statistical significance (p = .094 and p = .076, respectively), and 3 did not show significant improvement (p > .1).

TABLE 3
Summary Results for Subject Matter Expert Real-Time and Blind Observer Rating Data

                                                      Monday               Friday
Construct Rated                  No. Constructs    N     M     SE       N     M     SE
Average brief (real time)              8           29   1.74   0.14     29   2.77   0.11
Average engagement (real time)        40           48   1.20   0.10     48   2.40   0.12
Average engagement (blind)            32           36   1.85   0.13     36   2.33   0.11
Average debrief (real time)            9           29   1.72   0.16     29   2.80   0.12

Note. For the blind ratings, briefs and debriefs could not be observed.

Pilot Self-Report Ratings

DMO rating results showed favorable DMO ratings from both pilots and AWACS operators across 58 statements (all rated on a 1–4 scale). For pilots, average statement ratings ranged from a low of 2.56 ("As a result of this training, I have improved my VID tactics") to a high of 3.94 for two statements ("I would recommend this training experience to other pilots/controllers" and "DMO will positively impact my combat mission readiness"). For AWACS operators, average statement ratings ranged from a low of 2.45 ("This training provided excellent experience in radar mechanics") to a very high, almost unanimous score of 3.98 ("I would recommend this training experience to other pilots/controllers"). For pilots and AWACS participants, only 4 out of 58 and 9 out of 58 average individual statement ratings, respectively, were below a 3.0. The 58 statements were grouped into seven categories, and weighted mean ratings for each category are provided in Table 4.

Both pilot (91.6%) and AWACS (80.7%) data reached an 80% agreement reliability criterion for coding of comments regarding DMO training.

TABLE 4
Summary Categories for the 58 Statements Participants Were Asked to Rate on a 1–4 Scale

                                    Pilots                       AWACS
Seven Summary Categories            Weighted Mean (n; SE)        Weighted Mean (n; SE)
Overall DMO training value          3.69 (6,521; .03)            3.58 (915; .09)
DMO expectations                    3.63 (1,949; .03)            3.59 (279; .10)
DMO opinions                        3.72 (3,591; .03)            3.69 (527; .07)
Home unit conditions                3.40 (1,959; .04)            3.01 (275; .13)
DMO general statements              3.20 (1,304; .04)            3.01 (181; .12)
DMO scenario characteristics        3.23 (953; .04)              3.23 (137; .10)
DMO syllabus mission flow           3.57 (2,612; .03)            3.53 (378; .09)

Note. AWACS = Airborne Warning and Control System; DMO = distributed mission operations. Due to space restrictions detailed accounts of the surveys are not provided here. Please contact the authors for more information.


TABLE 5
Comments by Category

                                                    Percent of Comments      Frequency (N)
Category                                            Pilots      AWACS        Pilots   AWACS
Realistic qualities                                 28.14       1.07         316      2
Skill improvement/acquisition                       24.49       2.14         275      4
Briefs/debriefs (nonspecific)                       16.00       .53          3        1
Briefs/debriefs (training-specific facilities)      10.95       8.02         123      15
Communication                                       7.30        8.56         82       16
Tactics                                             6.77        7.49         76       14
Scenarios (quantity/variety/quality)                3.83        22.99        43       43
Controller/AWACS integration                        3.21        8.02         36       15
SIM characteristics                                 3.21        7.49         36       14
Situation awareness                                 3.03        6.99         34       13
Cold ops                                            2.32        2.67         26       5
Threats                                             2.23        2.14         25       4
Incidentals (non-DMO references)                    1.96        1.60         22       3
Other training-related benefits                     0.98        1.60         11       3
Weapons/weapon employment                           0.80        0            9        0
Briefs/debriefs (skill improvement acquisition)     0.53        18.72        6        35

Note. AWACS = Airborne Warning and Control System; SIM = simulator; DMO = distributed mission operations.

Statements were then grouped into categories, and the percentage of statements endorsed per category is provided in Table 5. Whereas pilots most appreciated the realistic qualities of the stimuli or the skill improvement and acquisition, AWACS operators most appreciated the scenarios or the skill improvement gained from the briefings and debriefings.
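A simple percent-agreement check of the kind referenced above (the 80% reliability criterion) can be sketched as follows; the category labels and values are invented for illustration.

    # Hypothetical category codes assigned to the same comments by two coders.
    coder_a = ["realism", "skills", "comms", "tactics", "realism"]
    coder_b = ["realism", "skills", "comms", "scenarios", "realism"]

    agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
    print(f"percent agreement = {agreement:.1%}")  # the study's criterion was 80%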

Pilots rated each of the 45 MEC experiences (e.g., task saturation, lost mutual support, etc.) on a 5-point scale as to the extent that different environments provided training for that experience. Results are provided in Table 6. A within-subjects ANOVA revealed statistically significant differences in average MEC ratings among environments, F(7, 217) = 11.96, p < .01, with the Mesa DMO environment rated highest overall and the Weapons Tactical Trainer/Deployable Tactical Trainer (WTT/DTT) environment rated lowest overall. Contrast tests comparing average ratings of the DMO environment to average ratings for each of the other environments revealed that DMO rated significantly higher (at α = .01) than all but one other environment (the RAP Flag/Composite Force Training [CFTR] environment). Only three of the eight environments (DMO and the two RAP environment categories) were judged to provide at least half of the experiences with improvement "to a moderate extent" or better.
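A within-subjects ANOVA of this form can be sketched with statsmodels; the data frame, column names, and values below are assumptions for illustration, not the study's data.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one mean MEC rating per pilot per environment.
    df = pd.DataFrame({
        "pilot":       [1, 1, 2, 2, 3, 3],
        "environment": ["DMO", "WTT_DTT"] * 3,
        "mec_rating":  [2.7, 0.9, 2.5, 1.1, 2.8, 0.8],
    })

    # Repeated-measures ANOVA with environment as the within-subjects factor
    result = AnovaRM(df, depvar="mec_rating", subject="pilot",
                     within=["environment"]).fit()
    print(result)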


TABLE 6
Ratings Averaged Over All 45 Mission Essential Competencies Experiences

                                                               Percentage of Experiences Rated
Environment              Mean Rating Over All Experiences    3 or Higher    2 or Higher    1 or Higher
Mesa DMO                 2.65                                 40%            84.4%          95.6%
RAP Flag/CFTR            2.37                                 4.4%           75.6%          100%
RAP except Flag/CFTR     2.09                                 4.4%           60%            97.8%
UTD                      1.85                                 0              44.4%          93.3%
Sustained combat ops     1.74                                 2.2%           33.3%          91.1%
MTC/FMT                  1.54                                 0              11.1%          86.7%
ONW/OSW                  1.08                                 0              0              60%
WTT/DTT                  0.93                                 0              0              40%

Note. DMO = distributed mission operations; RAP = Ready Aircrew Program; CFTR = Composite Force Training; UTD = unit training device; MTC = mission training center; FMT = full mission trainer; ONW = Operation Northern Watch; OSW = Operation Southern Watch; WTT = weapons tactical trainer; DTT = deployable tactical trainer.

Individual cell rating results showed drastically different averages for each of the eight environments across 45 experiences, ranging from the lowest rating of .25 for two environments (Mission Training Center/Full Mission Trainer [MTC/FMT] and WTT/DTT for G-induced physical limitations) to a high of 3.66 for the DMO environment (1:3+ force ratio experience). The DMO environment was rated best (or tied for best) for 64.4% of experiences, and the WTT/DTT environment category was rated worst (or tied for worst) for 73.3% of experiences.

DISCUSSION

DMO training provides opportunities not available elsewhere, and new training techniques and technologies are much more easily assessed and addressed in DMO than with an actual airframe. Training in DMO environments is likely to have a number of indirect financial benefits to the military as well, such as reductions in travel expenses and repair and maintenance costs. Unlike stand-alone simulators of the past, pilots using DMO can exercise higher order skills and teamwork. Although live-fly exercises also provide this opportunity, pilots reported training on these higher order skills to be infrequent, identifying a current training gap that DMO could fill. Furthermore, DMO provides repetition levels not possible with live-fly situations in a significantly shorter amount of total training time.

In this study, an F-16 team flew over 40 total engagements, employing several hundred missile shots against hundreds of threats over eight nonfamiliarization mission sessions. A simulator session in this work lasted 1 hr, creating an average of five "many versus many" scenarios per hour. Pilots one generation ago might need an entire career to achieve 40 such experiences that, in the current DMO protocol, required just the equivalent of one working day of simulator time. The question at hand is whether such repetition produces measurable and significant learning such that a more competent warfighter emerges from DMO training.

Results of this study provided substantial evidence that pilots become more competent in the simulator as a function of DMO training. Despite several factors that could have prevented the detection of statistically significant within-simulator DMO learning effects (e.g., an extremely complex and ecologically valid task, changes to the experimental environment over the course of the study, tasks that could be only partially controlled, and use of a highly experienced participant pool), the detection of statistically significant performance differences between pre- and posttraining strongly suggests that DMO training yields considerable within-simulator warfighter competency improvement.

The number of enemy strikers reaching base—the most important combat-relevant metric in this study—declined by 58.33% from Monday to Friday. Furthermore, F-16 mortalities decreased by 54.77%. If this learning effect transfers to true combat, consider (a) the capability of force difference, (b) the number of friendly force lives saved, and (c) the financial implications for the government in both real-world combat and training expenses. Further buttressing the findings of DMO benefits were tremendous improvements in metrics that were not attributable to pilots negatively altering their risk tolerance. That is, even though teams routinely denied enemy strikers and more easily disposed of threats posttraining compared to their pretraining benchmarks, the teams did so while reducing their vulnerability exposure. The F-16s launched their Advanced Medium Range Air-to-Air Missiles (AMRAAMs) at greater ranges and greatly reduced their exposure to hostiles (Minimum Abort Range [MAR], N-pole). The significant differences observed were not attributable to a risk–reward trade-off; rather, they illustrated skill proficiency gains across all measures.

Other sources of data reinforced objective combat ratings. Real-time as well as experimentally blind SME observer ratings corroborated objective data. SMEs rated pilots' performance at the end of the week as significantly higher. Additionally, using self-report data, pilots gave favorable ratings on all 58 statements regarding the utility of DMO training. Pilots highly recommended the training to their peers. Furthermore, an analysis of pilots' responses to what they liked best about training in the DMO environment revealed skill acquisition to be most frequently mentioned.

General opinion in the DMO community has been that DMO training provides great benefit. Converging results from this study indicate strong support and justification for raising our expectations of DMO's training potential. This study, however, does not address some potentially negative training issues associated with DMO. Examples include lack of consequences for running out of fuel, the lack of g-force effects, the lack of emergency procedures, or having to deal with inclement weather during missions. Additionally, this study was performed on a nonrandom sample of F-16 pilots, limiting the generalizability of results. Subsequent studies should utilize random samples in different missions and different domains.

Furthermore, the amount of improvement on scores cannot entirely be broken down into percentage of improvement accounted for by individual constructs (i.e., what percentage of an individual's or team's improvement was due to learning within each pilot rather than to pilots learning to coordinate with one another). Learning effects in this study reflect the week's experience in total, rather than the individual learning processes. As such, the total learning effect could not be teased apart (e.g., proportion of learning from flying vs. debriefing).

Application-oriented research is also needed on the degree of transfer to live-fly training events and on how quickly skills decay. Although the current results support DMO's training potential, future studies will help us better understand its benefits and how best to implement DMO training.

ACKNOWLEDGMENTS

The opinions expressed are those of the authors and do not necessarily reflect the views of the sponsoring and employing organizations. This work was funded by the U.S. Air Force under contract #F41624-97-D-5000, provided jointly by Air Force Research Laboratory/Human Effectiveness Directorate (AFRL/HEA) and Air Combat Command (ACC).

REFERENCES

Bell, H. H., & Waag, W. (1998). Evaluating the effectiveness of flight simulators for training combat skills: A review. The International Journal of Aviation Psychology, 8, 223–242.

Berger, S., & Crane, P. M. (1993). Multiplayer simulator based training for air combat. In Proceedings of 15th Industry/Interservice Training Systems Conference (pp. 439–449). Orlando, FL: National Security Industrial Association.

Callander, B. D. (1999). Training in networks. Air Force Magazine, 82(8). Retrieved from http://www.afa.org/magazine/aug1999/

Chapin, M. C. (2004). Message from the Air Force executive. I/ITSEC Newsletter, 3(4). Retrieved from http://www.iitsec.org/documents/Nov04_IITSEC.pdf

Chapman, R., & Colegrove, C. M. (in press). Transforming operational training in the combat air forces. In W. Bennett, Jr., B. T. Schreiber, & H. H. Bell (Eds.), Challenges in transforming military training: Research and application of advanced simulation and training technologies and methods [Special issue]. Journal of Military Psychology.

Colegrove, C. M., & Alliger, G. M. (2002, April). Mission essential competencies: Defining combat mission readiness in a novel way. Paper presented at the NATO RTO Studies, Analysis and Simulation (SAS) Panel Symposium, Brussels, Belgium.


Crane, P. M., Schiflett, S. G., & Oser, R. L. (2000). Roadrunner 98: Training effectiveness in a distributed mission training exercise (Tech. Rep. No. AFRL-HE-AZ-TR-2000-0026). Mesa, AZ: Air Force Research Laboratory.

Denning, T., Bennett, W., Jr., & Crane, P. M. (2002). Mission complexity scoring in distributed mission training. In 2002 Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) proceedings (pp. 720–730). Orlando, FL: National Security Industrial Association.

Gehr, S., Schreiber, B. T., & Bennett, W. (2004). Within-simulator training effectiveness evaluation. In 2004 Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) proceedings (pp. 1652–1661). Orlando, FL: National Security Industrial Association.

Gray, T. H., Chun, E. K., Warner, H. D., & Eubanks, J. L. (1981). Advanced flight simulator: Utilization in A-10 conversion and air-to-surface attack training (Tech. Rep. No. AFHRL-TR-80-20, AD A094 608). Williams Air Force Base, AZ: Air Force Human Resources Laboratory, Operations Training Division.

Gray, T. H., & Fuller, R. R. (1977). Effects of simulator training and platform motion on air-to-surface weapons delivery training (Tech. Rep. No. AFHRL-TR-77-29, AD A043 648). Williams Air Force Base, AZ: Air Force Human Resources Laboratory, Operations Training Research Division.

Houck, M. R., Thomas, G. S., & Bell, H. H. (1991). Training evaluation of the F-15 advanced air combat simulation (Tech. Rep. No. AL-TP-1991-0047, AD A241 675). Williams Air Force Base, AZ: Armstrong Laboratory, Aircrew Training Research Division.

Huddlestone, J., Harris, D., & Tinsworth, A. (1999). Air combat training—The effectiveness of multiplayer simulation. In Proceedings of the Interservice/Industry Training Systems and Education Conference (pp. 386–395). Arlington, VA: National Defense Industrial Association.

Jenkins, D. H. (1982). Simulation training effectiveness evaluation (TAC Project No. 79Y-001F). Nellis Air Force Base, NV: Tactical Fighter Weapons Center.

Kellogg, R., Prather, E., & Castore, C. (1980). Simulated A-10 combat environment. Human Factors Society Annual Meeting Proceedings, 24(7), 573–577.

Kirkpatrick, D. (1975). Techniques for evaluating training programs. Alexandria, VA: American Society for Training and Development.

Krusmark, M., Schreiber, B. T., & Bennett, W., Jr. (2004). The effectiveness of a traditional gradesheet for measuring air combat team performance in simulated distributed mission operations (Tech. Rep. No. AFRL-HE-AZ-TR-2004-0090). Mesa, AZ: Air Force Research Laboratory, Warfighter Readiness Research Division.

Lintern, G., Sheppard, D., Parker, D., Yates, K., & Nolan, M. (1989). Simulator design and instructional features for air-to-ground attack: A transfer study. Human Factors, 31, 87–100.

McGuinness, J., Bouwman, J. H., & Puig, J. A. (1982). Effectiveness evaluation for air combat training. In Proceedings of the 4th Interservice/Industry Training Equipment Conference (pp. 391–396). Washington, DC: National Security Industrial Association.

Schreiber, B. T., & Bennett, W. (2006). Distributed mission operations within-simulator training effectiveness baseline study: Metric development and objectively quantifying the degree of learning (Tech. Rep. No. AFRL-HE-AZ-TR-2006-0015, Vol. II). Mesa, AZ: Air Force Research Laboratory, Warfighter Readiness Research Division.

Symons, S., France, M., Bell, J., & Bennett, W., Jr. (2006). Linking knowledge and skills to mission essential competency-based syllabus development for distributed mission operations (Tech. Rep. No. AFRL-HE-AZ-TR-2006-0041; ADA453737). Mesa, AZ: Air Force Research Laboratory, Warfighter Readiness Research Division.

U.S. Department of the Air Force: Flying Operations. (2002, August). Air Force Instruction (AFI) 11–2F-16, Volume 1. Retrieved from http://www.e-publishing.af.mil/pubfiles/af/11/afi11–2f-16v1/afi11–2f-16v


Waag, W. L. (1991). The value of air combat simulation: Strong opinions but little evidence. In Training transfer—Can we trust flight simulation? Proceedings of the Royal Aeronautical Society Conference on Flight Simulation and Training (pp. 4.1–4.12). London, UK: Royal Aeronautical Society.

Wiekhorst, L. A., & Killion, T. H. (1986). Transfer of electronic combat skills from a flight simulator to the aircraft (Tech. Rep. No. AFHRL-TR-86-45, AS C040549). Williams Air Force Base, AZ: Air Force Human Resources Laboratory, Operations Training Division.

Manuscript first received: March 2009
