
TEST AND EVALUATION MANAGEMENT GUIDE

JANUARY 2005

FIFTH EDITION

PUBLISHED BY THE DEFENSE ACQUISITION UNIVERSITY PRESS

FORT BELVOIR, VA 22060-5565


ACKNOWLEDGMENTS

This January 2005 update to the Defense Acquisition University’s Test and Evaluation Management Guide includes updates from the Military Services, Defense Agencies, and other organizations, as well as changes in the May 2003 DoD 5000 series. Inputs were collated and finalized by DAU Program Director for Test and Evaluation Dr. John Claxton. Freelance writer Christina Cavoli and Defense AT&L editor-in-chief Collie Johnson edited the final document. Freelance designer and cartoonist Jim Elmore designed the cover, and final desktop publishing and layout was accomplished by Bartlett Communications and DAU Visual Information Specialist Deborah Gonzalez.


FOREWORD

This book is one of many technical management educational guides written from a Department of Defense perspective; i.e., non-Service specific. They are intended primarily for use in courses at Defense Acquisition University and secondarily as a desk reference for program and project management personnel. These guidebooks are written for current and potential acquisition management personnel who are familiar with basic terms and definitions employed in program offices. The guidebooks are designed to assist government and industry personnel in executing their management responsibilities relative to the acquisition and support of defense systems.

The objective of a well-managed test and evaluation program is to provide timely and accurate information. This guide has been developed to assist the acquisition community in obtaining a better understanding of who the decision-makers are and in determining how and when to plan test and evaluation events.

Dr. John D. Claxton
Program Director, T&E
Defense Acquisition University


CONTENTS

Page

Acknowledgments .............................................................................................................................. iii

Foreword .............................................................................................................................................. v

MODULE I — Management of Test and Evaluation

Chapter 1 — Importance of Test and Evaluation

1.1 Introduction ........................................................................................................... 1-1

1.2 Testing as a Risk Management Tool .................................................................... 1-1

1.3 The T&E Contribution at Major Milestones ....................................................... 1-4

1.4 Summary................................................................................................................ 1-8

Chapter 2 — The Test and Evaluation Process

2.1 Introduction ........................................................................................................... 2-1

2.2 Defense System Acquisition Process ................................................................... 2-1

2.3 T&E and the Systems Engineering Process (SEP) ............................................. 2-5

2.4 Definitions ............................................................................................................. 2-9

2.5 The DoD T&E Process ....................................................................................... 2-11

2.6 Summary.............................................................................................................. 2-12

Chapter 3 — T&E Policy Structure and Oversight Mechanism

3.1 Introduction ........................................................................................................... 3-1

3.2 The Congress ........................................................................................................ 3-1

3.3 OSD Oversight Structure ...................................................................................... 3-2

3.4 Service T&E Management Structures .................................................................. 3-5

3.5 The T&E Executive Agent Structure ................................................................. 3-11

3.6 Summary.............................................................................................................. 3-12

Chapter 4 — Program Office Responsibilities for Test and Evaluation

4.1 Introduction ........................................................................................................... 4-1

4.2 Relationship to the Program Manager ................................................................. 4-1

4.3 Early Program Stages ........................................................................................... 4-1

4.4 PMO/Contractor Test Management ...................................................................... 4-4


4.5 Integrated Product Teams for Test and Evaluation .............................................. 4-4

4.6 Test Program Funding/Budgeting ......................................................................... 4-4

4.7 Technical Reviews, Design Reviews, and Audits ................................................ 4-5

4.8 Contractor Testing ................................................................................................. 4-5

4.9 Specifications ........................................................................................................ 4-5

4.10 Independent Test and Evaluation Agencies .......................................................... 4-6

4.11 PMO Responsibilities for Operational Test and Evaluation ............................... 4-6

4.12 Summary.............................................................................................................. 4-10

Chapter 5 — Test-Related Documentation

5.1 Introduction ........................................................................................................... 5-1

5.2 Requirements Documentation ............................................................................... 5-1

5.3 Program Decision Documentation........................................................................ 5-4

5.4 Program Management Documentation ................................................................. 5-4

5.5 Test Program Documentation ............................................................................... 5-6

5.6 Other Test-Related Status Reports ....................................................................... 5-9

5.7 Summary.............................................................................................................. 5-10

Chapter 6 — Types of Test and Evaluation

6.1 Introduction ........................................................................................................... 6-1

6.2 Development Test and Evaluation ........................................................................ 6-1

6.3 Operational Test and Evaluation........................................................................... 6-3

6.4 Multi-Service Test and Evaluation ....................................................................... 6-6

6.5 Joint Test and Evaluation ...................................................................................... 6-6

6.6 Live Fire Testing ................................................................................................... 6-7

6.7 Nuclear, Biological, and Chemical (NBC) Weapons Testing ............................. 6-8

6.8 Nuclear Hardness and Survivability (NHS) Testing............................................ 6-9

6.9 Summary................................................................................................................ 6-9

MODULE II — Development Test and Evaluation

Chapter 7 — Introduction to Development Test and Evaluation

7.1 Introduction ........................................................................................................... 7-1

7.2 DT&E and the System Acquisition Cycle ........................................................... 7-1

7.3 DT&E Responsibilities ......................................................................................... 7-4

7.4 Test Program Integration ...................................................................................... 7-5

7.5 Areas of DT&E Focus .......................................................................................... 7-6

7.6 System Design for Testing ................................................................................... 7-7


7.7 Impact of Warranties on T&E .............................................................................. 7-8

7.8 DT&E of Limited Procurement Quantity Programs ........................................... 7-9

7.9 Summary................................................................................................................ 7-9

Chapter 8 — DT&E Support of Technical Reviews and Milestone Decisions

8.1 Introduction ........................................................................................................... 8-1

8.2 DT&E and the Review Process ........................................................................... 8-1

8.3 Configuration Change Control ............................................................................. 8-6

8.4 Summary................................................................................................................ 8-6

Chapter 9 — Combined and Concurrent Testing

9.1 Introduction ........................................................................................................... 9-1

9.2 Combining Development Test and Operational Test ........................................... 9-1

9.3 Concurrent Testing ................................................................................................ 9-3

9.4 Advantages and Limitations ................................................................................. 9-4

9.5 Summary................................................................................................................ 9-4

Chapter 10 — Production-Related Testing Activities

10.1 Introduction ......................................................................................................... 10-1

10.2 Production Management ..................................................................................... 10-1

10.3 Production Readiness Review (PRR) ................................................................. 10-2

10.4 Qualification Testing ........................................................................................... 10-2

10.5 Transition to Production ..................................................................................... 10-2

10.6 Low Rate Initial Production (LRIP) .................................................................. 10-4

10.7 Production Acceptance Test and Evaluation (PAT&E) ...................................... 10-4

10.8 Summary.............................................................................................................. 10-5

MODULE III — Operational Test and Evaluation

Chapter 11 — Introduction to Operational Test and Evaluation

11.1 Introduction ......................................................................................................... 11-1

11.2 Purpose of OT&E ............................................................................................... 11-1

11.3 Test Participants .................................................................................................. 11-1

11.4 Types of OT&E ................................................................................................... 11-2

11.5 Test Planning ....................................................................................................... 11-4

11.6 Test Execution ..................................................................................................... 11-5

11.7 Test Reporting ..................................................................................................... 11-5

11.8 Summary.............................................................................................................. 11-7


Chapter 12 — OT&E to Support Decision Reviews

12.1 Introduction ......................................................................................................... 12-1

12.2 Concept Refinement (CR) and Technology Development (TD) ...................... 12-1

12.3 System Development and Demonstration (SDD) .............................................. 12-2

12.4 Production and Deployment (P&D) ................................................................... 12-4

12.5 Operations and Support (O&S) .......................................................................... 12-4

12.6 Summary.............................................................................................................. 12-5

MODULE IV — Test and Evaluation Planning

Chapter 13 — Evaluation

13.1 Introduction ......................................................................................................... 13-1

13.2 Difference Between “Test” and “Evaluation” .................................................... 13-1

13.3 The Evaluation Process ....................................................................................... 13-1

13.4 Issues and Criteria .............................................................................................. 13-3

13.5 Measures of Effectiveness (MOEs) .................................................................... 13-5

13.6 Evaluation Planning ............................................................................................ 13-5

13.7 Evaluating Development and Operational Tests ................................................ 13-6

13.8 Summary.............................................................................................................. 13-8

Chapter 14 — Modeling and Simulation Support to T&E

14.1 Introduction ......................................................................................................... 14-1

14.2 Types of Models and Simulations ...................................................................... 14-1

14.3 Validity of Modeling and Simulation ................................................................ 14-3

14.4 Support to Test Design and Planning ................................................................ 14-4

14.5 Support to Test Execution .................................................................................. 14-7

14.6 Support to Analysis and Test Reporting ............................................................ 14-9

14.7 Simulation Integration......................................................................................... 14-9

14.8 Simulation Planning ............................................................................................ 14-9

14.9 Summary............................................................................................................ 14-10

Chapter 15 — Test Resources

15.1 Introduction ......................................................................................................... 15-1

15.2 Obtaining Test Resources ................................................................................... 15-1

15.3 Test Resource Planning....................................................................................... 15-4

15.4 Test Resource Funding ....................................................................................... 15-8

15.5 Summary............................................................................................................ 15-10


Chapter 16 — Test and Evaluation Master Plan

16.1 Introduction ......................................................................................................... 16-1

16.2 TEMP Development ............................................................................................ 16-1

16.3 TEMP Format ...................................................................................................... 16-5

16.4 Summary.............................................................................................................. 16-7

MODULE V — Specialized Testing

Chapter 17 — Software Systems Testing

17.1 Introduction ......................................................................................................... 17-1

17.2 Definitions ........................................................................................................... 17-2

17.3 Purpose of Software Test and Evaluation .......................................................... 17-2

17.4 Software Development Process .......................................................................... 17-3

17.5 T&E in the Software Life Cycle ....................................................................... 17-4

17.6 Summary.............................................................................................................. 17-8

Chapter 18 — Testing for Vulnerability and Lethality

18.1 Introduction ......................................................................................................... 18-1

18.2 Live Fire Testing ................................................................................................. 18-1

18.3 Testing for Nuclear Hardness and Survivability ............................................... 18-5

18.4 Summary.............................................................................................................. 18-7

Chapter 19 — Logistics Infrastructure T&E

19.1 Introduction ......................................................................................................... 19-1

19.2 Planning for Logistics Support System T&E .................................................... 19-1

19.3 Conducting Logistics Support System T&E...................................................... 19-4

19.4 Limitations to Logistics Support System T&E ................................................. 19-8

19.5 Summary.............................................................................................................. 19-8

Chapter 20 — EC/C4ISR Test and Evaluation

20.1 Introduction ......................................................................................................... 20-1

20.2 Testing EC Systems ............................................................................................ 20-1

20.3 Testing of C4ISR Systems .................................................................................. 20-3

20.4 Trends in Testing C4I Systems ........................................................................... 20-5

20.5 T&E of Surveillance and Reconnaissance Systems .......................................... 20-6

20.6 Summary.............................................................................................................. 20-7


Chapter 21 — Multi-Service Tests

21.1 Introduction ......................................................................................................... 21-1

21.2 Background .......................................................................................................... 21-1

21.3 Test Program Responsibilities ............................................................................ 21-2

21.4 Test Team Structure ............................................................................................ 21-3

21.5 Test Planning ....................................................................................................... 21-3

21.6 Discrepancy Reporting ........................................................................................ 21-3

21.7 Test Reporting ..................................................................................................... 21-4

21.8 Summary.............................................................................................................. 21-4

Chapter 22 — International Test and Evaluation Programs

22.1 Introduction ......................................................................................................... 22-1

22.2 Foreign Comparative Test Program .................................................................... 22-1

22.3 NATO Comparative Test Program ...................................................................... 22-2

22.4 T&E Management in Multinational Programs .................................................. 22-2

22.5 U.S. and NATO Acquisition Programs .............................................................. 22-4

22.6 Summary.............................................................................................................. 22-4

Chapter 23 — Commercial and Non-Development Items

23.1 Introduction ......................................................................................................... 23-1

23.2 Market Investigation and Procurement .............................................................. 23-3

23.3 Commercial Item and NDI Testing .................................................................... 23-3

23.4 Resources and Funding....................................................................................... 23-5

23.5 Summary.............................................................................................................. 23-5

Chapter 24 — Testing the Special Cases

24.1 Introduction ......................................................................................................... 24-1

24.2 Testing with Limitations ..................................................................................... 24-1

24.3 Space System Testing ......................................................................................... 24-3

24.4 Operations Security and T&E ............................................................................ 24-5

24.5 Advanced Concept Technology Demonstrations................................................ 24-5

24.6 Summary.............................................................................................................. 24-5

Chapter 25 — Best Practices in T&E of Weapon Systems

25.1 Introduction ......................................................................................................... 25-1

25.2 Specific Weapon Systems Testing Checklist ..................................................... 25-2


APPENDICES

Appendix A – Acronyms and Their Meanings ................................................................. A-1

Appendix B – DoD Glossary of Test Terminology .......................................................... B-1

Appendix C – Test-Related Data Item Descriptions ......................................................... C-1

Appendix D – Bibliography ............................................................................................... D-1

Appendix E – Index ............................................................................................................ E-1

Appendix F – Points of Contact for Service Test and Evaluation Courses ..................... F-1

LIST OF FIGURES

Figure 1-1. The 5000 Model ............................................................................................... 1-2

Figure 1-2. Life Cycle Cost Decision Impact and Expenditures ...................................... 1-3

Figure 2-1. Testing and the Acquisition Process ................................................................ 2-2

Figure 2-2. Requirements to Design ................................................................................... 2-4

Figure 2-3. The Systems Engineering Process ................................................................... 2-6

Figure 2-4. Systems Engineering Process and Test and Evaluation.................................. 2-8

Figure 2-5. Design Reviews .............................................................................................. 2-10

Figure 2-6. DoD Test and Evaluation Process ................................................................. 2-11

Figure 3-1. DoD Test and Evaluation Organization ........................................................... 3-3

Figure 3-2. Army Test and Evaluation Organization ......................................................... 3-5

Figure 3-3. Navy Test and Evaluation Organization .......................................................... 3-8

Figure 3-4. Air Force Test and Evaluation Organization ................................................... 3-9

Figure 3-5. Test and Evaluation Executive Agent Structure ............................................ 3-11

Figure 4-1. Lessons Learned from OT&E for the PM ...................................................... 4-6

Figure 5-1. Test-Related Documentation ............................................................................ 5-2

Figure 5-2. Requirements Definition Process ..................................................................... 5-3

Figure 5-3. Test Program Documentation ........................................................................... 5-6

Figure 6-1. Testing During Acquisition .............................................................................. 6-2

Figure 6-2. Sample Hierarchy of Questions Addressed in OT&E .................................... 6-5

Figure 7-1. Contractor’s Integrated Test Requirements ...................................................... 7-2

Figure 7-2. Design for Testing Procedures ......................................................................... 7-8

Figure 8-1. Relationship of DT&E to the Acquisition Process ......................................... 8-1

Figure 8-2. Technical Review Timeline .............................................................................. 8-3

Figure 8-3. Specifications Summary ................................................................................... 8-4

Figure 11-1. Organizational Breakdown of the I-81mm Mortar Operational Test Directorate ...................................................... 11-6

Figure 12-1. OT&E Related to the Milestone Process ...................................................... 12-2

Figure 12-2. Sources of Data .............................................................................................. 12-3


Figure 13-1. Functional Block Diagram of the Evaluation Process ................................. 13-3

Figure 13-2. Dendritic Approach to Test and Evaluation .................................................. 13-7

Figure 14-1. The Simulation Spectrum .............................................................................. 14-3

Figure 14-2. Values of Selected Criteria Conducive to Modeling and Simulation .......... 14-4

Figure 14-3. Modeling and Simulation Application in Test and Evaluation .................... 14-5

Figure 14-4. STEP Process .................................................................................................. 14-6

Figure 15-1. DoD Major Range and Test Facility Base .................................................... 15-3

Figure 17-1. The Error Avalanche ....................................................................................... 17-3

Figure 17-2. System Development Process ........................................................................ 17-4

Figure 17-3. Spiral Model of AIS Development Process .................................................. 17-5

Figure 18-1. Live Fire Test and Evaluation Planning Guide ............................................. 18-3

Figure 19-1. Logistics Supportability Objectives in the T&E Program............................ 19-2

Figure 20-1. Integrated EC Testing Approach .................................................................... 20-2

Figure 20-2. EC Test Process Concept ............................................................................... 20-3

Figure 20-3. EC Test Resource Categories ......................................................................... 20-4

Figure 20-4. The Evolutionary Acquisition Process .......................................................... 20-6

Figure 21-1. Simple Multi-Service OT&E Test Team Composition ................................. 21-2

Figure 23-1. The Spectrum of Technology Maturity ......................................................... 23-1

LIST OF TABLES

Table 5-1. Sample Test Plan Contents............................................................................... 5-8

Table 6-1. Differences Between DT&E and IOT&E ........................................................ 6-4

Table 8-1. Technical Reviews and Audits ......................................................................... 8-2

Table 9-1. Combined vs. Concurrent Testing: Advantages and Limitations ................... 9-2

Table 10-1. PRR Guidelines Checklist .............................................................................. 10-3

Table 13-1. Sample Evaluation Plan ................................................................................. 13-2

Table 15-1. TEMP Test Resource Summary Section ....................................................... 15-5

Table 16-1. Test and Evaluation Master Plan Format ...................................................... 16-6

Table 18-1. Relationships Between Key Concepts ........................................................... 18-1

Table 18-2. Types of Live Fire Testing ............................................................................. 18-2

Table 18-3. Nuclear Hardness and Survivability Assessment Activities ......................... 18-6


MODULE I

MANAGEMENT OF TEST AND EVALUATION

Test and Evaluation is a management tool and an integral part of the development process. This module will address the policy structure and oversight mechanisms in place for test and evaluation.


1

IMPORTANCE OF TEST AND EVALUATION

1.1 INTRODUCTION

The Test and Evaluation (T&E) process is an integral part of the Systems Engineering Process (SEP), which identifies levels of performance and assists the developer in correcting deficiencies. It is a significant element in the decision-making process, providing data that support trade-off analysis, risk reduction, and requirements refinement. Program decisions on system performance maturity and readiness to advance to the next phase of development take into consideration demonstrated performance. The issue of paramount importance to the servicemember user is system performance; i.e., will it fulfill the mission. The T&E process provides data that tell the user how well the system is performing during development and if it is ready for fielding. The Program Manager (PM) must balance the risks of cost, schedule, and performance to keep the program on track to production and fielding. The responsibility of decision-making authorities centers on assessing risk tradeoffs. As stated in Department of Defense Directive (DoDD) 5000.1, The Defense Acquisition System, “Test and evaluation shall be integrated throughout the defense acquisition process. Test and evaluation shall be structured to provide essential information to decision makers, assess attainment of technical performance parameters, and determine whether systems are operationally effective, suitable, survivable, and safe for intended use. The conduct of test and evaluation, integrated with modeling and simulation, shall facilitate learning, assess technology maturity and interoperability, facilitate integration into fielded forces, and confirm performance against documented capability needs and adversary capabilities as described in the system threat assessment.”1

1.2 TESTING AS A RISK MANAGEMENT TOOL

Correcting defects in weapons has been estimated to add from 10 percent to 30 percent to the cost of each item.2 Such costly redesign and modification efforts can be reduced if carefully planned and executed T&E programs are used to detect and fix system deficiencies early in the acquisition process (Figure 1-1). Fixes instituted during early work efforts (Systems Integration (SI)) in the System Development and Demonstration (SDD) Phase cost significantly less than those implemented after the Critical Design Review (CDR), when most design decisions have already been made.

T&E results figure prominently in the decisions reached at design and milestone reviews. However, the fact that T&E results are required at major decision points does not presuppose that T&E results must always be favorable. The final decision responsibility lies with the decision maker who must examine the critical issues and weigh the facts. Only the decision maker can determine the weight and importance that is to be attributed to a system’s capabilities and shortcomings and the degree of risk that can be accepted. The decision-making authority will be unable to make this judgment without a solid base of information provided by T&E. Figure 1-2 illustrates the Life Cycle Cost (LCC) of a system and how decisions impact program expenditures.


Figure 1-1. The 5000 Model

[Figure 1-1 depicts the 5000 model: pre-systems acquisition (Concept Refinement and Technology Development, entered at the Concept Decision), systems acquisition (System Development and Demonstration, comprising System Integration and System Demonstration, and Production and Deployment, comprising LRIP and Rate Production and Deployment), and Operations and Support, separated by Milestones A, B, and C, the Design Readiness Review, and the Full Rate Production Decision Review. The model shows requirements documents (ICD, CDD, CPD) validated by the JROC, early and continuous testing, continuous communication with users, and single step or evolution to full capability.]


Figure 1-2. Life Cycle Cost Decision Impact and Expenditures

A Defense Science Board (DSB) 1999 Task Force focused on a broad overview of the state of T&E within the Department of Defense (DoD).3 This group made the following observations about the T&E process:

• The focus of T&E should be on how to best support the acquisition process;

• T&E planning with Operational Test (OT) personnel should start early in the acquisition cycle;

• Distrust remains between the development and test communities;

• Contractor testing, developmental testing, and operational testing have some overlapping functions;

• Ensuring the test data are independently evaluated is the essential element, not the taking of the data itself;

• Responses to perceived test “failures” are often inappropriate and counterproductive.

The DSB Task Force (1983) developed a set of templates for use in establishing and maintaining low-risk programs. Each template described an area of risk and then specified technical methods for reducing that risk. PMs and test managers may wish to consult these templates for guidance in reducing the risks frequently associated with test programs. The DoD manual Transition from Development to Production contains sample risk management templates.4

[Figure 1-2 plots the impact of decisions on life cycle costs (percent commitment) against total expenditures for the life cycle (RDT&E, Production, and Operations and Support as percentages of the total) across the acquisition program milestones (A, B, C, FRPDR) and phases (CR, TD, SDD, P&D, Operations and Support). Source: Defense Acquisition University.]


1.3 THE T&E CONTRIBUTION AT MAJOR MILESTONES

T&E progress is monitored by the Office of the Secretary of Defense (OSD) throughout the acquisition process. Their oversight extends to Major Defense Acquisition Programs (MDAPs) or designated acquisitions. T&E officials within OSD render independent assessments to the Defense Acquisition Board (DAB), the Defense Acquisition Executive (DAE), and the Secretary of Defense (SECDEF) at each system milestone review. These assessments are based on the following T&E information:

• The Test and Evaluation Master Plan (TEMP) and more detailed supporting documents developed by responsible Service activities;

• Service test agency reports and briefings;

• T&E, Modeling and Simulation (M&S), and data from other sources such as Service PMs, laboratories, industry developers, studies, and analyses.

At Milestone B, the OSD T&E assessments reflect an evaluation of system concepts and technology alternatives using early performance parameter objectives and thresholds found in an approved preliminary TEMP. At Milestone C, assessments include an evaluation of previously executed test plans and test results. At the Full Rate Production Decision Review (FRPDR), assessments include consideration of the operational effectiveness and suitability evaluations of weapon systems.

A primary contribution made by T&E is the detection and reporting of deficiencies that may adversely impact the performance capability or availability/supportability of a system. A deficiency reporting process is used throughout the acquisition process to report, evaluate, and track system deficiencies and to provide the impetus for corrective actions that improve performance to desired levels.

1.3.1 T&E Contributions Prior to Milestone B

During Concept Refinement (CR) and Technology Development (TD) activities prior to Milestone B, laboratory testing and M&S are conducted by the contractors and the development agency to demonstrate and assess the capabilities of key subsystems and components. The test and simulation designs are based on the operational needs documented in the Initial Capabilities Document (ICD). Studies, analyses, simulation, and test data are used by the development agency to explore and evaluate alternative concepts proposed to satisfy the user’s needs. Also during this period, the Operational Test Agency (OTA) monitors CR and TD activities to gather information for future T&E planning and to provide effectiveness and suitability input desired by the PM. The OTA also conducts Early Operational Assessments (EOAs), as feasible, to assess the operational impact of candidate technical approaches and to assist in selecting preferred alternative system concepts.

The development agency prepares the Technology Development Strategy (TDS) with its early T&E strategy for Development Test and Evaluation (DT&E) and M&S. The TDS presents T&E plans for system design(s) engineering and performance evaluations. The OTA may provide an EOA. This information is incorporated into the PM’s TEMP that documents the program’s T&E strategy that supports the Milestone B decision to proceed to the next phase.

1.3.2 T&E Contributions Prior to Milestone C

During the SDD Phase, concepts approved for prototyping form the baseline that is used for detailed test planning. The design is matured into an Engineering Development Model (EDM), which is tested in its intended environment prior to Milestone C.

In SI the development agency conducts DT&E to assist with engineering design, system development, and risk identification, and to evaluate the contractor’s ability to attain desired technical performance in system specifications and achieve program objectives. The DT&E includes T&E of components, subsystems, and prototype development models. T&E of functional compatibility, interoperability, and integration with fielded and developing equipment and systems is also included. During this phase of testing, adequate DT&E is accomplished to ensure engineering is reasonably complete (including survivability/vulnerability, compatibility, transportability, interoperability, reliability, maintainability, safety, human factors, and logistics supportability). Also, this phase confirms that all significant design problems have been identified and solutions to these problems are in hand.

The Service Operational Test and Evaluation (OT&E) agency should conduct an EOA for the Design Readiness Review (DRR) to estimate the system’s potential to be operationally effective and suitable; identify needed modifications; and provide information on tactics, doctrine, organization, and personnel requirements. The early OT&E program is accomplished in an environment containing limited operational realism. Typical operational and support personnel are used to obtain early estimates of the user’s capability to operate and maintain the system. Some of the most important products of user assessments of system maintainability and supportability are human factors and safety issues.

In Systems Demonstration, the objective is to design, fabricate, and test a preproduction system that closely approximates the final product. T&E activities of the EDM during this period yield much useful information. For example, data obtained during EDM T&E can be used to assist in evaluating the system’s maintenance training requirements and the proposed training program. Test results generated during EDM T&E also support the user in refining and updating employment doctrine and tactics.

During Systems Demonstration, T&E is conducted to satisfy the following objectives:

(1) As specified in program documents, assess the critical technical issues:

(a) Determine how well the development contract specifications have been met;

(b) Identify system technical deficiencies and focus on areas for corrective actions;

(c) Determine whether the system is compatible, interoperable, and can be integrated with existing and planned equipment or systems;

(d) Estimate the Reliability, Availability, and Maintainability (RAM) of the system after it is deployed;

(e) Determine whether the system is safe and ready for Low Rate Initial Production (LRIP);

(f) Evaluate effects on performance of any configuration changes caused by correcting deficiencies, modifications, or Product Improvements (PIs);

(g) Assess human factors and identify limiting factors.

(2) Assess the technical risk and evaluate the tradeoffs among specifications, operational requirements, LCCs, and schedules;

(3) Assess the survivability, vulnerability, and logistics supportability of the system;


(4) Verify the accuracy and completeness of the technical documentation developed to maintain and operate the weapons system;

(5) Gather information for training programs and technical training materials needed to support the weapons system;

(6) Provide information on environmental issues for use in preparing environmental impact assessments;

(7) Determine system performance limitations and safe operating parameters.

Thus, T&E activities intensify during this phase and make significant contributions to the overall acquisition decision process.

The development agency evaluates the results of T&E for review by the Service headquarters and the Service acquisition review council prior to system acquisition review by the Milestone Decision Authority (MDA). The evaluation includes the results of testing and supporting information, conclusions, and recommendations for further engineering development. At the same time, the OT&E agency prepares an Independent Operational Assessment (IOA), which contains estimates of the system’s potential operational effectiveness and suitability. The Operational Assessment (OA) provides a permanent record of OT&E events, an audit trail of OT&E data, test results, conclusions, and recommendations. This information is used to prepare for Milestone C and supports a recommendation of whether the design and performance of the system in development justifies proceeding into LRIP.

1.3.3 T&E Contributions Prior to Full Rate Production Decision Review

The development agency transitions the final design to LRIP while fixing and verifying any technical problems discovered during the final testing of the EDM in its intended environment. The maturity of the hardware and software configurations and logistics support system available from LRIP are assessed when the development agency considers certifying the system’s readiness for Initial Operational Test and Evaluation (IOT&E).

IOT&E is conducted prior to the production decision at FRPDR to:

(1) Estimate the operational effectiveness and suitability of the system;

(2) Identify operational deficiencies;

(3) Evaluate changes in production configuration;

(4) Provide information for developing and refining logistics support requirements for the system and training, tactics, techniques, and doctrine;

(5) Provide information to refine Operations and Support (O&S) cost estimates and identify system characteristics or deficiencies that can significantly impact O&S costs;

(6) Determine whether the technical publications and support equipment are adequate in the operational environment.

Before the certification of readiness for IOT&E, the developer should have obtained the Joint Interoperability Test Command’s (JITC) certification of interoperability for the system components. In parallel with IOT&E, Live Fire Test and Evaluation (LFT&E) may be used to evaluate vulnerability or lethality of a weapon system as appropriate and as required by public law. The PM’s briefing and the Beyond Low Rate Initial Production (BLRIP) report address the risks of proceeding into Full Rate Production (FRP).


1.3.4 T&E Contributions After the Full Rate Production Decision Review

After FRPDR, when the FRP decision is normally made, T&E activities continue to provide important insights. Tests described in the TEMP but not conducted during earlier phases are completed. The residual DT&E may include extreme weather testing and testing of corrected deficiencies. System elements are integrated into the final operational configuration, and development testing is completed when all system performance requirements are met. During FRP, government representatives normally monitor or conduct the Production Acceptance Test and Evaluation (PAT&E). Each system is verified by PAT&E for compliance with the requirements and specifications of the contract.

Post-production testing requirements may result from an acquisition strategy calling for increment changes to accommodate accumulated engineering changes or the application of Preplanned Product Improvements (P3Is). This will allow parallel development of high-risk technology and modular insertion of system upgrades into production equipment. Technology breakthroughs and significant threat changes may require system modifications. The development of the modifications will require development testing; if system performance is significantly changed, some level of operational testing may be appropriate.

OT&E activities continue after the FRP decision in the form of Follow-on Operational Test and Evaluation (FOT&E). The initial phase of FOT&E may be conducted by either the OT&E agency or user commands, depending on Service directives. This verifies the operational effectiveness and suitability of the production system, determines if deficiencies identified during the IOT&E have been corrected, and evaluates areas not tested during IOT&E due to system limitations. Additional FOT&E may be conducted over the life of the system to refine doctrine, tactics, techniques, and training programs, and to evaluate future increments, modifications, and upgrades.

The OT&E agency prepares an OA report at the conclusion of each FOT&E. This report records test results, describes the evaluation accomplished to satisfy critical issues and objectives established for FOT&E, and documents its assessment of deficiencies resolved after SDD. Deficiencies that are not corrected are recorded.

A final report on FOT&E may also be prepared by the using command test team, emphasizing the operational utility of the system when operated, maintained, and supported by operational personnel using the concepts specified for the system. Specific attention is devoted to the following:

(1) The degree to which the system accomplishes its missions when employed by operational personnel in a realistic scenario with the appropriate organization, doctrine, threat (including countermeasures and nuclear threats), environment, and using tactics and techniques developed during earlier FOT&E;

(2) The degree to which the system can be placed in operational field use, with specific evaluations of availability, compatibility, transportability, interoperability, reliability, wartime usage rates, maintainability, safety, human factors, manpower supportability, logistics supportability, and training requirements;

(3) The conditions under which the system was tested including the natural weather and climatic conditions, terrain effects, battlefield disturbances, and enemy threat conditions;

(4) The ability of the system to perform its required functions for the duration of a specified mission profile;


(5) System weaknesses such as the vulnerability of the system to exploitation by countermeasures techniques and the practicality and probability of an adversary exploiting the susceptibility of a system in combat.

A specific evaluation of the personnel and logistics changes needed for the effective integration of the system into the user’s inventory is also made. These assessments provide essential input for the later acquisition phases of the system development cycle.

1.4 SUMMARY

“Risk management,” according to Transition from Development to Production, “is the means by which the program areas of vulnerability and concern are identified and managed.”5 T&E is the discipline that helps to illuminate those areas of vulnerability. The importance of T&E in the acquisition process is summarized well in a July 2000 General Accounting Office (GAO) Report NSIAD-00-199, Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcomes.6 The summary serves to underscore the importance of the T&E process as a whole:

• Problems found late in development signal weaknesses in testing and evaluation;

• Early testing to validate product knowledge is a best practice;

• Different incentives make testing a more constructive factor in commercial programs than in weapon system programs.

“To lessen the dependence on testing late in development and to foster a more constructive relationship between program managers and testers, GAO recommends that the Secretary of Defense instruct acquisition managers to structure test plans around the attainment of increasing levels of product maturity, orchestrate the right mix of tools to validate these maturity levels, and build and resource acquisition strategies around this approach. GAO also recommends that validation of lower levels of product maturity not be deferred to the third level. Finally, GAO recommends that the Secretary require that weapon systems demonstrate a specified level of product maturity before major programmatic approvals.”7


ENDNOTES

1. DoDD 5000.1, The Defense Acquisition System, May 12, 2003, p. 8.

2. BDM Corporation, Functional Description of the Acquisition Test and Evaluation Process, July 8, 1983.

3. Defense Science Board Task Force Report, Solving the Risk Equation in Transitioning from Development to Production, May 25, 1983 (later published as DoD Manual 4245.7).

4. DoD 4245.7-M, Transition from Development to Production, September 1985.

5. Ibid.

6. GAO/NSIAD-00-199, Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcomes, July 2000, p. 11.

7. Ibid.


2

THE TEST AND EVALUATION PROCESS

2.1 INTRODUCTION

The fundamental purpose of Test and Evaluation (T&E) in a defense system’s development and acquisition program is to identify the areas of risk to be reduced or eliminated. During the early phases of development, T&E is conducted to demonstrate the feasibility of conceptual approaches, evaluate design risk, identify design alternatives, compare and analyze tradeoffs, and estimate satisfaction of operational requirements. As a system undergoes design and development, the iterative process of testing moves gradually from a concentration on Development Test and Evaluation (DT&E), which is concerned chiefly with attainment of engineering design goals, to increasingly comprehensive Operational Test and Evaluation (OT&E), which focuses on questions of operational effectiveness, suitability, and survivability. Although there are usually separate Development Test (DT) and Operational Test (OT) events, DT&E and OT&E are not necessarily serial phases in the evolution of a weapon system development. Combined or concurrent DT and OT are encouraged when appropriate, i.e., conferring possible cost or time savings.1

T&E has its origins in the testing of hardware. This tradition is heavily embedded in its vocabulary and procedures. The advent of software-intensive systems has brought new challenges to testing, and new approaches are discussed in Chapter 17 of this guide. Remaining constant throughout the T&E process, whether testing hardware or software, is the need for thorough, logical, systematic, and early test planning, including feedback of well-documented and unbiased T&E results to system developers, users, and decision makers.

T&E has many useful functions and provides information to many customers. T&E gives information to developers for identifying and resolving technical difficulties; to decision makers responsible for procuring a new system and for the best use of limited resources; and to operational users for refining requirements and supporting development of effective tactics, doctrine, and procedures.

2.2 DEFENSE SYSTEM ACQUISITION PROCESS

The defense system acquisition process was revised significantly in 2003 to change its primary objective to acquisition of quality products that satisfy user needs with measurable improvements to mission capability and operational support, in a timely manner, and at a fair and reasonable price. As it is now structured, the defense system life cycle is to replicate the preferred acquisition strategy of an Evolutionary Acquisition (EA) process that uses either incremental or spiral development processes. The three major elements—pre-system acquisition, system acquisition, and sustainment—may include the following five phases:

(1) Concept Refinement (CR)

(2) Technology Development (TD)

(3) System Development and Demonstration (SDD)


(4) Production and Deployment (P&D)

(5) Operations and Support (O&S).

As Figure 2-1 shows, these phases are separated by key decision points when a Milestone Decision Authority (MDA) reviews a program and authorizes advancement to the next phase in the cycle. Thus T&E planning and test results play an important part in the MDA review process.

The following description of the defense system acquisition process, summarized from Department of Defense Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System,2 shows how T&E fits within the context of the larger process.

2.2.1 Concept Refinement and Technology Development

A defense pre-system acquisition process begins with the requirements process: identification of a materiel need and development of the Initial Capabilities Document (ICD). A decision is made to start CR, and the Technology Development Strategy (TDS) evolves with the initial test planning concepts. CR ends when the MDA approves the preferred solution resulting from the Analysis of Alternatives (AoA), identifying Measures of Effectiveness (MOEs), and approves the associated TDS. The TD Phase follows a Milestone A decision, during which the risks of the relevant technologies of the alternative selected for satisfying the user’s needs are investigated.

Figure 2-1. Testing and the Acquisition Process (acquisition phases from Concept Refinement through Operations and Support, separated by Milestone A, Milestone B, Milestone C, and the Full Rate Production Decision Review (FRPDR); the preliminary TEMP and TEMP updates, OSD assessments, DT&E and OT&E activity, Early Operational Assessment (EOA), Operational Assessment, LRIP, IOT&E, LFT&E and the LFT report, and FOT&E are shown across the timeline)


Shortly after the milestone decision, an integrated team begins transitioning the test planning in the TDS into the evaluation strategy for formulation of a Test and Evaluation Master Plan (TEMP). The TD effort concludes with a decision review at Milestone B when an affordable increment of militarily useful capability has been identified, the technology for that increment has been demonstrated in a relevant environment, and a system can be developed for production within a short time frame (normally less than 5 years), or the MDA decides to terminate the effort. Typical T&E-related documents at the Milestone B review are the Acquisition Decision Memorandum (ADM) (exit criteria), ICD, Capability Development Document (CDD) (performance parameters), Acquisition Strategy, System Threat Assessment (STA), an Early Operational Assessment (EOA), and the TEMP. Additional program management documents prepared before Milestone B include: the AoA, Independent Cost Estimate (ICE), and the concept baseline version of the Acquisition Program Baseline (APB), which summarizes the weapon’s functional specifications, performance parameters, and cost and schedule objectives.

The program office for major programs (Office of the Secretary of Defense (OSD) oversight) must give consideration to requesting a waiver for full-up system-level Live Fire Testing (LFT) and to identification of Low Rate Initial Production (LRIP) quantities for Initial Operational Test and Evaluation (IOT&E).

2.2.2 System Development and Demonstration

The Milestone B decision is program initiation for systems acquisition and establishes broad objectives for program cost, schedule, and technical performance. After the Milestone B decision for a program start, the Systems Integration (SI) work effort begins, during which a selected concept, typically brassboard or early prototype, is refined through systems engineering, analysis, and design. Systems engineering must manage all requirements as an integrated set of design constraints that are allocated down through the various levels of design (Figure 2-2). This work effort ends when the integration of the system components has been demonstrated through adequate developmental testing of prototypes. The Design Readiness Review (DRR) decision evaluates design maturity and readiness to either enter into System Demonstration or make a change to the acquisition strategy. The System Demonstration work effort is intended to demonstrate the ability of the system to operate in a useful way consistent with the approved Key Performance Parameters (KPPs). Work advances the design to an Engineering Development Model (EDM) that is evaluated for readiness to enter LRIP. “Successful development test and evaluation to assess technical progress against critical technical parameters, early operational assessments, and, where proven capabilities exist, the use of modeling and simulation to demonstrate system integration are critical during this effort. Deficiencies encountered in testing prior to Milestone C shall be resolved prior to proceeding beyond LRIP.”3 The EDM should have demonstrated acceptable performance in DT&E and the Operational Assessment (OA), with acceptable interoperability and operational supportability.

Preparation for the Milestone C decision establishes more refined cost, schedule, and performance objectives and thresholds, and the Capability Production Document (CPD) establishes the criteria for testing of LRIP items. Documents of interest to the T&E manager at the time of the Milestone C review include the ADM (exit criteria), updated TEMP, updated STA, AoA, Development Baseline, development testing results, and the OA.

2.2.3 Production and Deployment

During the LRIP work effort, the purpose is to achieve an operational capability that satisfies mission needs. The selected system design and its principal items of support are fabricated as production configuration models.


Figure 2-2. Requirements to Design (operational capabilities, functions, and priorities, together with reliability, maintainability, supportability, producibility, operations, maintenance, logistics, and life cycle/total ownership cost considerations, flow down into affordable operational effectiveness, system performance, system availability, and the detailed systems engineering design considerations of “the fishbone”). SOURCE: Skalamera, Robert J., “Implementing OSD Systems Engineering Policy,” briefing to the USD(AT&L), November 2004.


Test articles normally are subjected to qualification testing, full-up LFT, and IOT&E. This work effort ends with the Full Rate Production Decision Review (FRPDR), marking entry into Full Rate Production (FRP) and deployment of the system for Initial Operational Capability (IOC). Key documents for the T&E manager at the time of the FRPDR are the updated TEMP, development testing results, the Service IOT&E report, and the Live Fire Test Report. For Acquisition Category (ACAT) I and designated oversight programs, the Director of Operational Test and Evaluation (DOT&E) is required by law to document the assessment of the adequacy of IOT&E and the reported operational effectiveness and suitability of the system. This is done in the Beyond LRIP (BLRIP) Report. Also mandated by law is the requirement for the DOT&E to submit the Live Fire Test Report prior to the program proceeding beyond LRIP. These DOT&E Reports may be submitted as a single document.

2.2.4 Operations and Support

The production continues at full rate, allowing continued deployment of the system to operating locations and achievement of Full Operational Capability (FOC). This phase may include major modifications to the production configuration, increment upgrades, and related Follow-on Operational Test and Evaluation (FOT&E) (OA). Approval for major modifications should identify the actions and resources needed to achieve and maintain operational readiness and support objectives. The high cost of changes may require initiation of the modification as a new program. To determine whether major upgrades/modifications are necessary or deficiencies warrant consideration of replacement, the MDA may review the impact of proposed changes on system operational effectiveness, suitability, and readiness.

2.3 T&E AND THE SYSTEMS ENGINEERING PROCESS (SEP)

In the early 1970s, Department of Defense (DoD) test policy became more formalized and placed greater emphasis on T&E as a continuing function throughout the acquisition cycle. These policies stressed the use of T&E to reduce acquisition risk and provide early and continuing estimates of system operational effectiveness and operational suitability. To meet these objectives, appropriate test activities had to be fully integrated into the overall development process. From a systems engineering perspective, test planning, testing, and analysis of test results are integral parts of the basic product definition process.

Systems engineering, as defined in the DoD context, is an interdisciplinary approach to evolve and verify an integrated and optimally balanced set of product and process designs that satisfy user needs and provide information for management decisionmaking (Figure 2-3).

A system’s life cycle begins with the user’s needs, which are expressed as constraints, and the required capabilities needed to satisfy mission objectives. Systems engineering is essential in the earliest planning period, in conceiving the system concept and defining performance requirements for system elements. As the detailed design is prepared, systems engineers ensure balanced influence of all required design specialties, including testability. They resolve interface problems, perform design reviews, perform trade-off analyses, and assist in verifying performance.

The days when one or two individuals could design a complex system, especially a huge, modern-age weapon system, are in the past. Modern systems are too complex for a small number of generalists to manage; systems require in-depth knowledge of a broad range of areas and technical disciplines. System engineers coordinate the many specialized engineers involved in the concurrent engineering process through Integrated Product and Process Development (IPPD).


Figure 2-3. The Systems Engineering Process (requirements decomposition from user needs through system performance specifications, system functional specifications, configuration item “design to” specifications, and product “build to” documentation, with the corresponding verification path from individual CI verification and DT&E up through integrated and system-level DT&E, LFT&E, OAs, and combined DT&E/OT&E/LFT&E against specified user needs and environmental constraints; technical reviews (SRR, SFR, PDR, CDR, TRR, SVR, PRR) and trades occur at each level, and systems engineering must manage all requirements choices in all phases). SOURCE: Skalamera, Robert J., “Implementing OSD Systems Engineering Policy,” briefing to the USD(AT&L), November 2004.


Integrated Product Teams (IPTs) are responsible for the integration of the components into a system.

Through interdisciplinary integration, a systems engineer manages the progress of product definition from system level to configuration-item level, detailed level, deficiency correction, and modifications/Product Improvements (PIs). Test results provide feedback to analyze the design progress toward performance goals. Tools of systems engineering include design reviews, configuration management, simulation, Technical Performance Measurement (TPM), trade-off analysis, and specifications.

What happens during systems engineering? The process determines what specialists are required, what segments and Non-Developmental Items (NDIs) are used, design performance limits, trade-off criteria, how to test, when to test, how to document (specifications), and what management controls to apply (TPM and design reviews).

Development and operational testing support the technical reviews by providing feedback to the SEP. More information on the reviews is contained in Chapter 8.

2.3.1 The Systems Engineering Process

The SEP is the iterative logical sequence of analysis, design, test, and decision activities that transforms an operational need into the descriptions required for production and fielding of all operational and support system elements. The process consists of four activities: requirements analysis, functional analysis and allocation, synthesis, and verification of performance (T&E). These activities support decisions on tradeoffs and formalize the description of system elements. The system engineering plan is described in Chapter 16 of the Defense Acquisition University guide to Systems Engineering Fundamentals, January 2001.

The requirements analysis activity is a process used by the program office, in concert with the user, to establish and refine operational and design requirements that result in the proper balance between performance and cost within affordability constraints. Requirements analysis shall be conducted iteratively with Functional Analysis/Allocation (FA/A) to develop and refine system-level functional and performance requirements and external interfaces, and to provide traceability between user requirements and design requirements.

The FA/A activity identifies what the system, component, or part must do. It normally works from the top downward, ensuring requirements traceability and examining alternative concepts. This is done without assuming how functions will be accomplished. The product is a series of alternative Functional Flow Block Diagrams (FFBDs). A functional analysis can be applied at every level of development. At the system level, it may be a contractor or Service effort. During the CR and TD phases, developmental testers assist the functional analysis activity to help determine what each component’s role will be as part of the system being developed. Performance requirements are allocated to system components.

The synthesis activity involves invention (conceiving ways to do each FFBD task) to answer the “how” question. Next, the physical interfaces implied by the “how” answers are carefully identified (topological or temporal). The answers must reflect all technology selection factors. Synthesis tools include Requirements Allocation Sheets (RASs), which translate functional statements into design requirements and permit a long and complex interactive invention process with control, visibility, and requirements traceability. Developmental testers conduct prototype testing to determine how the components will perform assigned functions to assist this synthesis activity.
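As a purely illustrative aid (not drawn from the guide itself), the following sketch models a hypothetical Requirements Allocation Sheet entry as a simple data structure: each functional statement from an FFBD is traced to one or more allocated design requirements, preserving the traceability that the synthesis activity depends on. All identifiers, names, and values are invented for the example.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignRequirement:
    """A design ('how') requirement allocated from a functional statement."""
    req_id: str
    text: str
    allocated_to: str          # configuration item or component name (hypothetical)
    verification_method: str   # e.g., "test", "analysis", "inspection"

@dataclass
class RASEntry:
    """One Requirements Allocation Sheet row: a function and its allocations."""
    function_id: str           # FFBD block identifier
    functional_statement: str  # the 'what'
    allocations: List[DesignRequirement] = field(default_factory=list)

    def trace(self) -> List[str]:
        """Return traceability strings: function -> requirement -> component."""
        return [f"{self.function_id} -> {r.req_id} -> {r.allocated_to}"
                for r in self.allocations]

# Hypothetical example: one FFBD function allocated to two components.
entry = RASEntry(
    function_id="F-2.1",
    functional_statement="Detect target at required range",
    allocations=[
        DesignRequirement("DR-2.1.1", "Provide detection range >= 20 km",
                          "Sensor Subsystem", "test"),
        DesignRequirement("DR-2.1.2", "Process track data within 0.5 s",
                          "Mission Processor", "test"),
    ],
)

for line in entry.trace():
    print(line)

The point of the sketch is only that each “how” answer remains linked back to the “what” it satisfies, which is what gives testers and reviewers visibility into the invention process.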

The verification loop and decision activity allows tradeoff of alternative approaches to “how.”


This activity is conducted in accordance with decision criteria set by higher-level technical requirements for such things as: Life Cycle Costs (LCCs); effectiveness; Reliability, Availability, and Maintainability (RAM); risk limits; and schedule. It is repeated at each level of development. The verification and decision activity is assisted by developmental testers during the later SDD phase, when competitive testing between alternative approaches is performed.

The final activity is a description of system elements. Developing from the results of the previous activities as the final system design is determined, this activity takes form when specifications are verified through testing and reviewed in the Physical Configuration Audit (PCA) and Functional Configuration Audit (FCA). Operational testers may assist in this activity; they conduct operational testing of the test items/systems to help determine the personnel, equipment, facilities, software, and technical data requirements of the new system when used by typical military personnel.

Figure 2-4, Systems Engineering Process and Test and Evaluation, depicts the activities and their interactions.

2.3.2 Technical Management Planning

Technical management planning incorporates top-level management planning for the integration of all system design activities. Its purpose is to develop the organizational mechanisms for direction and control, and to identify personnel for the attainment of cost, performance, and schedule objectives. Planning defines and describes the type and degree of systems engineering management, the SEP, and the integration of related engineering programs. The design evolution process forms the basis for comprehensive T&E planning.

The TEMP must be consistent with technical management planning. The testing program outlined in the TEMP must provide the TPM data required for all design decision points, audits, and reviews that are a part of the systems engineering process.

Figure 2-4. Systems Engineering Process and Test and Evaluation (for each category of testing (qualification/acceptance and system DT & OT), the figure identifies where the testing is defined (specifications, SOW, contract specs, WBS, TEMP, DT/OT plans), what it is performed against (specification, mission profile/environment, STA/CDD/CPD, operations and maintenance concepts), and the participants (project and functional engineers, PM, contractor/government team, DT&E/DOT&E, DT/OT personnel, user/support personnel, test IPT))


The configuration management process controls the baseline for the test programs and incorporates design modifications to the baseline determined to be necessary by T&E.

The TEMP and technical management planning must be traceable to each other. The system description in the TEMP must be traceable to systems engineering documentation such as the FFBDs, the RASs, and the Test Requirements Sheets (TRSs). Key functions and interfaces of the system with other systems must be described and correlated with the systems engineering documentation and the system specification. Technical thresholds and objectives include specific performance requirements that become test planning limits. They must be traceable through the planned systems engineering documentation and can be correlated to the content of the TPM Program. For example, failure criteria for reliability thresholds during OT&E testing must be delineated and agreed upon by the Program Manager (PM) and the Operational Test Director (OTD), and reflected in the TEMP.

2.3.3 Technical Performance Measurement

TPM identifies critical technical parameters that are at a higher level of risk during design. It tracks T&E data, makes predictions about whether the parameter can achieve final technical success within the allocated resources, and assists in managing the technical program.

The TPM Program is an integral part of the T&E program. The TPM is defined as product design assessment and forms the backbone of the development testing program. It estimates, through engineering analyses and tests, the values of essential performance parameters of the current program design. It serves as a major input in the continuous overall evaluation of operational effectiveness and suitability. Design reviews are conducted to measure the systems engineering progress. For more information, see Chapter 8. Figure 2-5 depicts the technical reviews that usually take place during the SEP and the related specification documents.
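To make the idea concrete, here is a minimal, purely illustrative sketch (not taken from the guide, with invented parameter names and numbers) of how a TPM activity could track demonstrated values of a critical technical parameter against its threshold and objective, flagging when test results fall short.

from dataclasses import dataclass
from typing import List

@dataclass
class TPMObservation:
    """One demonstrated value of a technical parameter from a test or analysis."""
    event: str        # e.g., "prototype field test" (hypothetical)
    value: float      # measured value, in the parameter's units

@dataclass
class TechnicalParameter:
    """A critical technical parameter tracked by a TPM program (illustrative only)."""
    name: str
    units: str
    threshold: float            # minimum acceptable value
    objective: float            # desired value
    observations: List[TPMObservation]

    def status(self) -> str:
        """Compare the latest demonstrated value against threshold and objective."""
        latest = self.observations[-1].value
        if latest >= self.objective:
            return f"{self.name}: {latest} {self.units} meets the objective"
        if latest >= self.threshold:
            return f"{self.name}: {latest} {self.units} meets threshold, short of objective"
        return f"{self.name}: {latest} {self.units} BELOW threshold - needs design/test attention"

# Hypothetical example: tracking detection range across successive test events.
param = TechnicalParameter(
    name="Detection range",
    units="km",
    threshold=20.0,
    objective=25.0,
    observations=[
        TPMObservation("bench test", 17.5),
        TPMObservation("prototype field test", 21.2),
    ],
)
print(param.status())

Successive observations of this kind are what allow the predictions mentioned above: whether the parameter is trending toward final technical success within the allocated resources.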

2.3.4 System Baselining and T&E

The SEP establishes phase baselines throughout the acquisition cycle. These baselines (functional, allocated, product) can be modified with the results of engineering and testing. The testing used to prove the technical baselines is rarely the same as the operational testing of requirements.

Related to the baseline is the process of configuration management. Configuration management benefits the T&E community in two ways. Through configuration management, the baseline to be used for testing is determined. Also, changes that occur to the baseline as a result of testing and design reviews are incorporated into the test article before the new phase of testing (to prevent retest of a bad design).

2.4 DEFINITIONS

T&E is the deliberate and rational generation of performance data, which describes the nature of the emerging system, and the transformation of data into information useful for the technical and managerial personnel controlling its development. In the broad sense, T&E may be defined as all physical testing, modeling, simulation, experimentation, and related analyses performed during research, development, introduction, and employment of a weapon system or subsystem. The Glossary of Defense Acquisition Acronyms and Terms, produced by the Defense Acquisition University, defines “Test” and “Test and Evaluation” as follows:4

A “test” is any program or procedure that is designed to obtain, verify, or provide data for the evaluation of any of the following: 1) progress in accomplishing developmental objectives; 2) the performance, operational capability, and suitability of systems, subsystems, components, and equipment items; and 3) the vulnerability and lethality of systems, subsystems, components, and equipment items.


Figure 2-5. Design Reviews (program activities, tests, technical reviews and audits, and the specifications and other products associated with Concept Refinement, Technology Development, System Development and Demonstration, and Production and Deployment: system requirements and functional reviews, software specification reviews, preliminary and critical design reviews, test readiness reviews, formal qualification reviews, and functional and physical configuration audits for hardware and software configuration items, tied to the system functional, allocated, developmental, and product baselines and to the evolving test documentation from the T&E strategy through the TEMP, software and HWCI test plans and procedures, and system test reports; actual time phasing of activities must be tailored to each program)



“Test and Evaluation” is the process by which a system or components are exercised and results analyzed to provide performance-related information. The information has many uses, including risk identification and risk mitigation, and empirical data to validate models and simulations. T&E enables an assessment of the attainment of technical performance, specifications, and system maturity to determine whether systems are operationally effective, suitable, and survivable for intended use, and/or lethality.

2.5 THE DOD T&E PROCESS

The DoD Test and Evaluation Process (Figure 2-6) is an iterative five-step process that provides answers to critical T&E questions for decision makers at various times during a system acquisition. The T&E process begins during the formative stages of the program with the T&E Coordination Function, in which the information needs of the various decision makers are formulated in conjunction with the development of the program requirements, acquisition strategy, and AoAs.

Given certain foundation documentation, Step 1 is the identification of T&E information required by the decision maker. The required information usually centers on the current system under test, which may be in the form of concepts, prototypes, EDMs, or production representative/production systems, depending on the acquisition phase.

Figure 2-6. DoD Test and Evaluation Process (Step 1: identify critical issues and data requirements for the issue to be decided; Step 2: pre-test analysis, the evaluation plan, and expected outcomes; Step 3: test planning, running simulations, conducting tests, gathering data, analyzing data, and test reporting; Step 4: post-test synthesis, comparing results with predictions in an evaluation report; Step 5: the decision, balancing T&E results with other program information; modeling and simulation and feedback of measured outcomes support the cycle, which draws on military needs (CDD/CPD), the AoA, prototypes, the system design and specification, production hardware/software, threats/targets, software updates, and major modifications across the system acquisition process)


The required information consists of performance evaluations of effectiveness and suitability, providing insights into how well the system meets the user’s needs at a point in time.

Step 2 is the pre-test analysis of the evaluation objectives from Step 1 to determine the types and quantities of data needed, the results expected or anticipated from the tests, and the analytical tools needed to conduct the tests and evaluations. The use of validated models and simulation systems during pre-test analysis can aid in determining: how to design test scenarios; how to set up the test environment; how to properly instrument the test; how to staff and control test resources; how best to sequence the test trials; and how to estimate outcomes.

Step 3, test activity and data management, covers the actual test planning, test conduct, and management of the data requirements identified in Step 2. T&E managers determine what valid data exist in historical files that can be applied and what new data must be developed through testing. The necessary tests are planned and executed to accumulate sufficient data to support analysis. Data are screened for completeness, accuracy, and validity before being used in Step 4.

Step 4, post-test synthesis and evaluation, is the comparison of the measured outcomes (test data) from Step 3 with the expected outcomes from Step 2, tempered with technical and operational judgment. This is where data are synthesized into information. When the measured outcomes differ from the expected outcomes, the test conditions and procedures must be reexamined to determine if the performance deviations are real or were the result of test conditions, such as lack of fidelity in computer simulation, insufficient or incorrect test support assets, instrumentation error, or faulty test processes. The assumptions of tactics, operational environment, systems performance parameters, and logistics support must have been carefully chosen, fully described, and documented prior to test. Modeling and Simulation (M&S) may normally be used during the data analysis to extend the evaluation of performance effectiveness and suitability.

Step 5 is when the decision maker weighs the T&E information against other programmatic information to decide a proper course of action. This process may identify additional requirements for test data and iterate the DoD T&E process again.
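Because the five steps form a feedback loop rather than a one-way sequence, a compact sketch may help. The following is purely illustrative (hypothetical function names, issues, and numbers; not an official DoD artifact): it walks two iterations of the five-step cycle, comparing measured outcomes with expected outcomes and feeding unresolved issues back into the next iteration.

# Illustrative walk-through of the iterative five-step DoD T&E process.
# All names, issues, and values are invented for the example.

def identify_information_needs(issues):                     # Step 1
    return [i for i in issues if i["unresolved"]]

def pretest_analysis(needs):                                # Step 2
    # Expected outcomes for each information need.
    return {n["name"]: n["expected"] for n in needs}

def conduct_tests(expected):                                # Step 3
    # Stand-in for test planning, simulation, test conduct, and data gathering.
    measured = {"detection range (km)": 21.2, "probability of hit": 0.72}
    return {k: measured.get(k) for k in expected}

def posttest_synthesis(expected, measured):                 # Step 4
    # Compare measured vs. expected outcomes (judgment would temper this in practice).
    return {k: {"expected": expected[k], "measured": measured[k],
                "met": measured[k] is not None and measured[k] >= expected[k]}
            for k in expected}

def decision(evaluation):                                   # Step 5
    # Unmet outcomes become unresolved issues for another pass through the process.
    return [{"name": k, "expected": v["expected"], "unresolved": not v["met"]}
            for k, v in evaluation.items()]

issues = [{"name": "detection range (km)", "expected": 20.0, "unresolved": True},
          {"name": "probability of hit", "expected": 0.80, "unresolved": True}]

for iteration in (1, 2):
    needs = identify_information_needs(issues)
    if not needs:
        break
    expected = pretest_analysis(needs)
    measured = conduct_tests(expected)
    evaluation = posttest_synthesis(expected, measured)
    issues = decision(evaluation)
    print("Iteration", iteration, "->", evaluation)

In the sketch, the parameter that misses its expected outcome remains an open issue and drives a second iteration, mirroring how additional test data requirements cause the DoD T&E process to be repeated.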

2.6 SUMMARY

T&E is an engineering tool used to identify technical risk throughout the defense system acquisition cycle and a process for verifying performance. This iterative cycle consists of acquisition phases separated by discrete milestones. The DoD T&E process consists of developmental and operational testing that is used to support engineering design and programmatic reviews. This T&E process forms an important part of the SEP used by system developers and aids in the decision process used by senior decision authorities in DoD.


ENDNOTES

1. DoDI 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

2. Ibid.

3. Ibid.

4. Defense Acquisition University Press, The Glossary of Defense Acquisition Acronyms and Terms, Vol. 11, September 2003.


3
T&E POLICY STRUCTURE AND OVERSIGHT MECHANISM

3.1 INTRODUCTION

This chapter provides an overview of the policy and organizations that govern the conduct of Test and Evaluation (T&E) activities within the Department of Defense (DoD) and discusses congressional legislation and activities for compliance by DoD. It outlines the responsibilities of DoD test organizations at the Office of the Secretary of Defense (OSD) and Service levels, and describes related T&E policy.

3.2 THE CONGRESS

Congress has shown a longstanding interest in influencing the DoD acquisition process. During the early 1970s, in response to urging by the Congress and recommendations by a Presidential Blue Ribbon Panel on Defense Management, Deputy Secretary of Defense David Packard promulgated a package of policy initiatives that established the Defense Systems Acquisition Review Council (DSARC). The DSARC was organized to resolve acquisition issues, whenever possible, and to provide recommendations to the Secretary of Defense (SECDEF) on the acquisition of major weapon systems. Also as a result of the Congressional directives, the Army and Air Force established independent Operational Test Agencies (OTAs). The Navy Operational Test and Evaluation Force (OPTEVFOR) was established in the late 1960s. In 1983, similar concerns led Congress to direct the establishment of the independent Office of the Director, Operational Test and Evaluation (DOT&E) within OSD. In 1985, a report released by another President’s Blue Ribbon Commission on Defense Management, this time chaired by David Packard, made significant recommendations on the management and oversight of DoD’s acquisition process, specifically T&E. Not all of the Commission’s recommendations have been implemented, and their full impact is not yet realized. In Fiscal Year (FY) 87, the Defense Authorization Act required Live Fire Testing (LFT) of weapon systems before the Production Phase begins. The earmarking of authorizations and appropriations for DoD funding, specific testing requirements, and acquisition reform legislation continues to indicate the will of Congress for DoD implementation.

Congress requires DoD to provide the following reports that include information on T&E:

• Selected Acquisition Report (SAR). Within the cost, schedule, and performance data in the report, the SAR describes required Acquisition Category (ACAT) I system characteristics and outlines significant progress and problems encountered. It lists tests completed and issues identified during testing. The Program Manager (PM) uses the Consolidated Acquisition Reporting System (CARS) software to prepare the SAR.

• Annual System Operational Test Report. This report is provided by the DOT&E to the SECDEF and the committees on Armed Services, National Security, and Appropriations. The report provides a narrative and resource summary of all Operational Test and Evaluation (OT&E) and related issues, activities, and assessments. When oversight of LFT was moved to DOT&E, this issue was added to the report.


• Beyond Low Rate Initial Production (BLRIP) Report. Before proceeding to BLRIP for each Major Defense Acquisition Program (MDAP), DOT&E must report to the SECDEF and the Congress. This report addresses the adequacy of OT&E and whether the T&E results confirm that the tested item or component is effective and suitable for combat. When oversight of LFT was moved to the DOT&E, the Live Fire Test Report was added to the BLRIP Report content.

• Foreign Comparative Test (FCT) Report. The Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) should notify Congress a minimum of 30 days prior to the commitment of funds for initiation of new FCT evaluations of equipment produced by selected allied and friendly foreign countries.

3.3 OSD OVERSIGHT STRUCTURE

The DoD organization for the oversight of T&E is illustrated in Figure 3-1. In USD(AT&L), Development Test and Evaluation (DT&E) oversight is performed by the Deputy Director, Developmental Test and Evaluation, Defense Systems (DT&E/DS), and the DOT&E provides OT&E oversight for the SECDEF. The management of MDAPs in OSD is performed by the Defense Acquisition Executive (DAE), who uses the Defense Acquisition Board (DAB) and Overarching Integrated Product Teams (OIPTs) to process information for decisions.

3.3.1 Defense Acquisition Executive (DAE)

The DAE position, established in September 1986, is held by the USD(AT&L). As the DAE, the USD(AT&L) uses the DAB and its OIPTs to provide the senior-level decision process for the acquisition of weapon systems. The DAE responsibilities include establishing policies for acquisition (including procurement, Research and Development (R&D), logistics, development testing, and contracts administration) for all elements of DoD. The DAE’s charter includes the authority over the Services and defense agencies on policy, procedure, and execution of the weapon systems acquisition process.

3.3.2 Defense Acquisition Board (DAB)

The DAB is the primary forum used by OSD to provide advice, assistance, and recommendations, and to resolve issues regarding all operating and policy aspects of the DoD acquisition system. The DAB is the senior management acquisition board, chaired by the DAE and co-chaired by the Vice Chairman of the Joint Chiefs of Staff (VCJCS). The DAB is composed of the department’s senior acquisition officials, including the DOT&E. The DAB conducts business through OIPTs and provides decisions on ACAT ID programs.1

3.3.3 Deputy Director, Developmental Test and Evaluation, Defense Systems (DT&E/DS)

The DT&E/DS serves as the principal staff assistant and advisor to the USD(AT&L) for T&E matters. The Director, Defense Systems, works for the Deputy Under Secretary of Defense for Acquisition and Technology (DUSD(A&T)) and has authority and responsibility for all DT&E conducted on designated MDAPs and for FCT. The DT&E/DS is illustrated in Figure 3-1.

3.3.3.1 Duties of the DT&E/DS

Within the acquisition community, the DT&E/DS:

• Serves as the focal point for coordination of all MDAP Test and Evaluation Master Plans (TEMPs). Recommends approval by the OIPT lead of the DT&E portion of TEMPs;

• Reviews MDAP documentation for DT&E implications and resource requirements to provide comments to the OIPT, USD(AT&L)/DAE, or DAB;


Figure 3-1. DoD Test and Evaluation Organization (Secretary of Defense; Director, Operational Test and Evaluation; Deputy Director, Test & Evaluation; USD(AT&L) with Defense Systems (USD(A&T)) and ASD(NII); the Joint Interoperability Test Command (JITC) and the Director, Test Resource Management Center; the Service secretaries and chiefs of staff with their T&E staffs (DUSA(OR)/TEMA, N-91, AF/TE) and acquisition executives (ASA(ALT), ASN(RD&A), SAF/AQ); the materiel/systems commands (Army commodity commands, NAVSEA, NAVAIR, SPAWAR, MARCORSYSCOM, AFMC); and the operational test organizations (ATEC, OPTEVFOR, MCOTEA, AFOTEC))


• Monitors DT&E to ensure the adequacy of testing and to assess test results;

• Provides assessments of the Defense Acquisition Executive Summary (DAES) and technical assessments of DT&E and Systems Engineering Processes (SEPs) conducted on a weapon system;

• Provides advice and makes recommendations to OSD senior leadership, and issues guidance to the Component Acquisition Executives (CAEs) with respect to DT&E;

• Leads the defense systems efforts for joint testing and is a member of the Joint T&E Program’s Technical Advisory Board for Joint Development Test and Evaluation programs.

3.3.3.2 DT&E/DS and Service Reports

During the testing of ACAT I and designated weapon systems, the interaction between the DT&E/DS and the Services includes the following reporting requirements:

• A TEMP (either preliminary or updated, as appropriate) must be provided for consideration and approval before each milestone review, starting with Milestone B.

• A technical assessment of developmental T&E is provided to the DS and DOT&E listing the T&E results, conclusions, and recommendations prior to a milestone decision or the final decision to proceed to BLRIP.

3.3.4 Director, Operational Test and Evaluation (DOT&E)

As illustrated in Figure 3-1, the Director reports directly to the SECDEF and has special reporting requirements to Congress. The DOT&E’s responsibility to Congress is to provide an unbiased window of insight into the operational effectiveness, suitability, and survivability of new weapon systems.

3.3.4.1 Duties and Functions of the DOT&E

The specific duties of the DOT&E, outlined in DoD Directive (DoDD) 5141.2, Director of Operational Test and Evaluation, include:2

• Obtaining reports, information, advice, and assistance as necessary to carry out assigned functions (DOT&E has access to all records and data in DoD on acquisition programs);

• Signing the ACAT I and oversight TEMPs for approval of OT&E and approving the OT&E funding for major systems acquisition;

• Approving operational test plans on all major defense acquisition systems and designated oversight programs prior to the system starting initial operational testing (approval in writing is required before operational testing may begin); oversight extends into Follow-on Operational Test and Evaluation (FOT&E);

• Providing observers during preparation and conduct of OT&E;

• Analyzing results of OT&E conducted for each major or designated defense acquisition program and submitting a report to the SECDEF and the Congress on the adequacy of the OT&E performed;

• Presenting the BLRIP report to the SECDEF and to the congressional Committees on Armed Services and Appropriations on the adequacy of Live Fire Test and Evaluation (LFT&E) and OT&E, and analyzing whether results confirm the system’s operational effectiveness and suitability;

• Providing oversight and approval of major program LFT.


3.3.4.2 DOT&E and Service Interactions

For DoD and DOT&E-designated oversight acquisition programs, the Service provides the DOT&E the following:

• A draft copy of the Operational Test Plan concept for review;

• Significant test plan changes;

• The final Service IOT&E report, which must be submitted to DOT&E before the Full Rate Production Decision Review (FRPDR);

• The LFT&E plan for approval, and the Service Live Fire Test Report for review.

3.4 SERVICE T&E MANAGEMENT STRUCTURES

3.4.1 Army T&E Organizational Relationship

The Army management structure for T&E is illustrated in Figure 3-2.

3.4.1.1 Army Acquisition Executive

The Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) has the principal responsibility for all Department of the Army (DA) matters and policy related to acquisition, logistics, technology, procurement, the industrial base, and security cooperation. Additionally, the ASA(ALT) serves as the Army Acquisition Executive (AAE). The AAE administers acquisition programs by developing/promulgating acquisition policies and procedures as well as appointing, supervising, and evaluating assigned Program Executive Officers (PEOs) and direct reporting PMs.

Figure 3-2. Army Test and Evaluation Organization (Secretary of the Army; ASA(ALT) with PEOs, PMs, and the commodity commands; the Chief of Staff with DUSA(OR) and the T&E Management Agency (TEMA); and the Army Test and Evaluation Command (ATEC) with its Developmental Test Command (DTC), Operational Test Command (OTC), and Army Evaluation Center (AEC))



3.4.1.2 Army T&E Executive

The Deputy Under Secretary of the Army for Operations Research (DUSA(OR)) establishes, reviews, enforces, and supervises Army T&E policy and procedures, including overseeing all Army T&E associated with the system research, development, and acquisition of all materiel and Command, Control, Communications, Computers and Intelligence (C4I)/Information Technology (IT) systems. The Test and Evaluation Management Agency (TEMA) within the Office of the Chief of Staff, Army, provides the DUSA(OR) oversight. As delegated by the AAE, the DUSA(OR) is the sole Headquarters, Department of the Army (HQDA) approval authority for TEMPs.

3.4.1.3 Army Test and Evaluation Command

The U.S. Army Test and Evaluation Command (ATEC) supports the systems acquisition, force development, and experimentation processes through overall management of the Army’s T&E programs. In this role, ATEC manages the Army’s developmental and operational testing, all system evaluation, and management of joint T&E. ATEC is the Army’s independent Operational Test Activity (OTA), reporting directly to the Vice Chief of Staff, Army (VCSA) through the Director of the Army Staff (DAS).

3.4.1.3.1 Army Developmental Testers

The U.S. Army Developmental Test Command (DTC) within ATEC has the primary responsibility for conducting Developmental Tests (DTs) for the Army. DTC responsibilities include:

• Perform the duties of governmental developmental tester for all Army systems except for those systems assigned to the Communications-Electronics Research, Development, and Engineering Center (CERDEC) of the Army Materiel Command (AMC) Research, Development and Engineering Command (RDECOM) by HQDA (Deputy Chief of Staff, G-6), Medical Command (MEDCOM), Intelligence and Security Command (INSCOM), Space and Missile Defense Command (SMDC), and Army Corps of Engineers (ACE);

• Provide test facilities and testing expertise in support of the acquisition of Army and other defense systems;

• Operate and maintain the Army’s portion of the Major Range and Test Facility Base (MRTFB) (except for the United States Army Kwajalein Atoll (USAKA) and the High Energy Laser System Test Facility (HELSTF)) in accordance with DoDD 3200.11, Major Range and Test Facility Base (MRTFB), May 1, 2002;

• Provide testers with a safety release for systems before the start of pretest training for tests that use soldiers as test participants;

• Provide safety confirmations for milestone acquisition decision reviews and the materiel release decision;

• Manage the Army Test Incident Reporting System (ATIRS).

3.4.1.3.2 Army Operational Testers

The U.S. Army Operational Test Command (OTC) within ATEC has the primary responsibility for conducting Operational Tests (OTs) for the Army and supporting Army participation in Joint T&E. OTC responsibilities include:

• Perform the duties of operational tester for all Army systems except for those systems assigned to MEDCOM, INSCOM, SMDC, and ACE;


• Perform the duties of operational tester for assigned multi-Service OTs and (on a customer service basis) for Training and Doctrine Command (TRADOC) Force Development Tests and Experimentation (FDTE);

• Provide test facilities and testing expertise in support of the acquisition of Army and other defense systems, and for other customers on a cost-reimbursable and as-available basis;

• Program and budget the funds to support OT except out-of-cycle tests (which are usually paid for by the proponent);

• Develop and submit OT and FDTE Outline Test Plans (OTPs) to the Army’s Test Schedule and Review Committee (TSARC).

3.4.1.3.3 Army Evaluation Center

The U.S. Army Evaluation Center (AEC) is an independent subordinate element within ATEC that has the primary responsibility for conducting Army system evaluations and system assessments in support of the systems acquisition process. Decision makers use AEC’s independent report addressing an Army system’s operational effectiveness, suitability, and survivability. AEC responsibilities include:

• Perform the duties of system evaluator for all Army systems except for those systems assigned to MEDCOM, INSCOM, and the commercial items assigned to ACE; conduct Continuous Evaluation (CE) on all assigned systems;

• Develop and promulgate evaluation capabilities and methodologies;

• Coordinate system evaluation resources through the TSARC;

• Preview programmed system evaluation requirements for possible use of Modeling and Simulation (M&S) to enhance evaluation and reduce costs; perform manpower and personnel integration (MANPRINT) assessments in coordination with the Deputy Chief of Staff (DCS), G-1, Army Research Laboratory-Human Research and Engineering Division/Directorate (ARL-HRED);

• Perform the Integrated Logistics Support (ILS) program surveillance for Army systems. Perform independent logistics supportability assessments and report them to the Army Logistician and other interested members of the acquisition community.

3.4.2 Navy T&E Organizational Relationship

The organizational structure for T&E in the Navy is illustrated in Figure 3-3. Within the Navy Secretariat, the Secretary of the Navy (SECNAV) has assigned general and specific Research, Development, Test, and Evaluation (RDT&E) responsibilities to the Assistant Secretary of the Navy (Research, Development, and Acquisition) (ASN(RDA)) and to the Chief of Naval Operations (CNO). The CNO has responsibility for ensuring the adequacy of the Navy’s overall T&E program. The T&E policy and guidance are exercised through the Director of Navy T&E and Technology Requirements (N-91). Staff support is provided by the Test and Evaluation Division (N-912), which has cognizance over planning, conducting, and reporting all T&E associated with development of systems.

3.4.2.1 Navy DT&E Organizations

The Navy’s senior systems development authority is divided among the commanders of the system commands, with Naval Air Systems Command (NAVAIR) developing and performing DT&E on aircraft and their essential weapon systems; Naval Sea Systems Command (NAVSEA) developing and performing DT&E on ships, submarines, and their associated weapon systems; and Space and Naval Warfare Systems Command (SPAWAR) developing and performing DT&E on all other systems. System acquisition is controlled by a chartered PM or by the commander of a systems command.


In both cases, the designated developing agency is responsible for DT&E and for the coordination of all T&E planning in the TEMP. Developing agencies are responsible for:

• Developing test issues based on the thresholds established by the user in the requirements documents;

• Identifying the testing facilities and resources required to conduct the DT&E;

• Developing the DT&E test reports and quick-look reports.

3.4.2.2 Navy Operational Test and Evaluation Force

The Commander, Operational Test and Evaluation Force (COMOPTEVFOR) commands the Navy’s independent OT&E activity and reports directly to the CNO. The functions of the COMOPTEVFOR include:

• Establishing early liaison with the developing agency to ensure an understanding of the requirements and plans;

• Reviewing acquisition program documentation to ensure that documents are adequate to support a meaningful T&E program;

• Planning and conducting realistic OT&E;

Figure 3-3. Navy Test and Evaluation Organization (Secretary of the Navy; ASN(RD&A) with PEOs and PMs; the Chief of Naval Operations with the Directorate of Navy T&E and Technology Requirements (N-91); the naval systems commands (NAVAIR, NAVSEA, SPAWAR) and their warfare centers; the Operational Test and Evaluation Force; and, under the Commandant of the Marine Corps, Marine Corps Systems Command and the Marine Corps Operational Test and Evaluation Activity)

• Developing tactics and procedures for the employment of systems that undergo OT&E (as directed by the CNO);

• Providing recommendations to the CNO for the development of new capabilities or the upgrade of ranges;

• Reporting directly to the CNO, the President of the Board of Inspection and Survey (PRESINSURV) is responsible for conducting acceptance trials of new ships and aircraft acquisitions and is the primary Navy authority for Production Acceptance Test and Evaluation (PAT&E) of these systems;

• Conducting OT&E on aviation systems in conjunction with the Marine Corps Operational Test and Evaluation Activity (MCOTEA).

3.4.3 Air Force Organizational Relationships

3.4.3.1 Air Force Acquisition Executive

The Assistant Secretary of the Air Force for Acquisition (SAF/AQ) is the senior-level authority for Research, Development, and Acquisition (RD&A) within the Air Force. The Director of Space Acquisition (SAF/USA) obtains space assets for all Services. As illustrated in Figure 3-4, the SAF/AQ is an advisor to the Secretary of the Air Force and interfaces directly with the DT&E and DOT&E. The SAF/AQ receives DT&E and OT&E results as a part of the acquisition decision process. Within the SAF/AQ structure, there is a military deputy (acquisition) who is the Air Force primary staff officer with responsibility for RD&A. This staff officer is the chief advocate of Air Force acquisition programs and develops the RDT&E budget. Air Force policy and oversight for T&E is provided by a staff element under the Chief of Staff, Test and Evaluation (AF/TE).

Figure 3-4. Air Force Test and Evaluation Organization (Secretary of the Air Force; SAF (Acquisition) and SAF/USA with PEOs and PMs; the Chief of Staff with AF/TE; Air Force Materiel Command (AFMC) with its test centers (Air Force Flight Test Center, Air Force Development Test Center, Arnold Engineering & Development Center), product centers (Aeronautical Systems, Electronic Systems, Space Systems, Human Systems), logistics centers, and AFRL; the major commands (MAJCOMs); and the Air Force Operational Test and Evaluation Center (AFOTEC))


They process test documentation for DT&E and OT&E and manage the review of the TEMP.

3.4.3.2 Air Force DT&E Organization

The Air Force Materiel Command (AFMC) is the primary DT&E and acquisition manager. The AFMC performs all levels of research; develops weapon systems, support systems, and equipment; and conducts all DT&E. The acquisition PMs are under the Commander, AFMC. Within the AFMC, there are major product divisions, test centers, and laboratories as well as missile, aircraft, and munitions test ranges.

Once the weapon system is fielded, AFMC retains management responsibility for developing and testing system improvements, enhancements, or upgrades. The Air Force Space Command (AFSC) is responsible for DT&E of space and missile systems.

3.4.3.3 Air Force OT&E Organization

The AF/TE is responsible for supporting andcoordinating the OT&E activities of the AirForce Operational Test and Evaluation Center(AFOTEC).

The Commander, AFOTEC, is responsible to the Secretary of the Air Force and the Chief of Staff for the independent T&E of all major and non-major systems acquisitions. The Commander is supported by the operational commands and others in planning and conducting OT&E.

The AFOTEC reviews operational requirements, employment concepts, tactics, maintenance concepts, and training requirements before conducting OT&E. The operational commands provide operational concepts, personnel, and resources to assist AFOTEC in performing OT&E.

3.4.4 Marine Corps Organizational Relationship

3.4.4.1 Marine Corps Acquisition Executive

The Deputy Chief of Staff for Research and Development (DCS/R&D), Headquarters Marine Corps, directs the total Marine Corps RDT&E effort to support the acquisition of new systems. The DCS/R&D's position within the General Staff is analogous to that of the Director, Navy T&E and Technology Requirements (N-91), in the Navy structure. The DCS/R&D also reports directly to the Assistant Secretary of the Navy/Research, Engineering, and Science (ASN/RE&S) in the Navy Secretariat. Figure 3-3 illustrates the Marine Corps organization for T&E management.

3.4.4.2 Marine Corps DT&E Organizations

The Commanding General, Marine Corps Systems Command (CG MCSC), is the Marine Corps materiel developing agent and directly interfaces with the Navy Systems Commands. The CG MCSC implements policies, procedures, and requirements for DT&E of all systems acquired by the Marine Corps. The Marine Corps also uses DT&E and OT&E performed by other Services, which may develop systems of interest to the Corps.

3.4.4.3 Marine Corps Operational Test and Evaluation Activity

The Marine Corps Operational Test and Evaluation Activity (MCOTEA) is the independent OT&E activity maintained by the Marine Corps. Its function is analogous to that performed by OPTEVFOR in the Navy. The CG MCSC provides direct assistance to MCOTEA in the planning, conducting, and reporting of OT&E. Marine Corps aviation systems are tested by OPTEVFOR. The Fleet Marine Force performs troop T&E of materiel development in an operational environment.


3.5 THE T&E EXECUTIVE AGENT STRUCTURE

In 1993, the USD(AT&L) approved a T&E Executive Agent structure to provide the Services with more corporate responsibility for the management and policies that influence the availability of test resources for the evaluation of DoD systems in acquisition (Figure 3-5). The DT&E/DS has functional responsibility for the execution of the processes necessary to assure the T&E Executive Agent structure functions effectively. The DT&E/DS also participates in the Operational Test and Evaluation Coordinating Committee, chaired by the DOT&E. This committee manages the OT&E Resources Enhancement Project, and the DT&E/DS draws input to the T&E Executive Agent structure for coordination of all T&E resource requirements.

The Board of Directors (BoD) (Service Vice Chiefs) is assisted by an Executive Secretariat consisting of the Army DUSA(OR), the Navy N-91, and the Air Force AF/TE. The BoD provides guidance and decisions on policy and resource allocation to its subordinate element, the Board of Operating Directors (BoOD) (Army DTC, NAVAIR 5.0, and AFMC Director of Operations). The BoD also provides program review and advocacy support of the T&E infrastructure to OSD and Congress.

The BoOD is supported by a Secretariat and the Defense Test and Training Steering Group (DTTSG). The DTTSG manages the T&E Resources Committee (TERC), the Training Instrumentation Resource Investment Committee (TIRIC), and the CROSSBOW Committee.

[Figure 3-5. Test and Evaluation Executive Agent Structure (organization chart): USD(AT&L); PDUSD(AT&L); Board of Directors (BoD), the Service Vice Chiefs of Staff; BoD Executive Secretariat (USA: DUSA(OR); USN: N-91; USAF: AF/TE); Board of Operating Directors (BoOD): Army DTC, NAVAIR 5.0, AFMC/DO; Defense Test and Training Steering Group; Range Commanders Council.]


The DTTSG is instrumental in achieving efficient acquisition and integration of all training and associated test range instrumentation and the development of acquisition policy for embedded weapon system training and testing capabilities. The TERC supports the DTTSG in overseeing infrastructure requirements development from a T&E community perspective, both development testing and operational testing, and manages OSD funding in executing the Central T&E Investment Program (CTEIP). The TIRIC is chartered to ensure the efficient acquisition of common and interoperable range instrumentation systems. The CROSSBOW Committee provides technical and management oversight of the Services' development and acquisition programs for threat and threat-related hardware simulators, emitters, software simulations, hybrid representations, and surrogates.

3.6 SUMMARY

An increased emphasis on T&E has placed greater demands on the OSD and DoD components to carefully structure organizations and resources to ensure maximum effectiveness. Renewed interest by Congress in testing as a way of assessing system utility and effectiveness, the report by the President's Blue Ribbon Panel on Acquisition Management, and acquisition reform initiatives have resulted in major reorganizations within the Services. These policy changes and reorganizations will continue for several years to improve the management of T&E resources in support of acquisition programs.


ENDNOTES

1. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

2. DoDD 5141.2, Director of Operational Test and Evaluation (DOT&E), May 25, 2000.


4
PROGRAM OFFICE RESPONSIBILITIES
FOR TEST AND EVALUATION

4.1 INTRODUCTION

In government acquisition programs, there should be an element dedicated to the management of Test and Evaluation (T&E). This element has overall test program responsibility for all phases of the acquisition process. T&E expertise may be available through matrix support or may reside in the Program Management Office (PMO) engineering department during the program's early phases. By System Demonstration, the PMO should have a dedicated T&E manager. In the PMO, the Deputy for T&E would be responsible for defining the scope and concept of the test program, establishing the overall program test objectives, and managing test program funds and coordination. The Deputy for T&E should provide test directors (such as a joint test director) as required, and coordinate test resources, facilities, and their support required for each phase of testing. In addition, the Deputy for T&E, or a staff member, will be responsible for managing the Test and Evaluation Master Plan (TEMP) and for planning and managing any special test programs required for the program. The Deputy for T&E will also review, evaluate, approve, and release for distribution contractor-prepared test plans and reports, and review and coordinate all appropriate government test plans. After the system is produced, the Deputy for T&E will be responsible for supporting production acceptance testing and the test portions of later increments, Preplanned Product Improvement (P3I) upgrades, or enhancements to the weapon system/acquisition. If the program is large enough, the Deputy for T&E will be responsible for all T&E direction and guidance for that program.

4.2 RELATIONSHIP TO THE PROGRAM MANAGER

The Program Manager (PM) is ultimately responsible for all aspects of the system development, including testing. The Deputy for T&E is normally authorized by the PM to conduct all duties in the area of T&E. The input of the Deputy for T&E to the contract, engineering specifications, budget, program schedule, etc., is essential for the PM to manage the program efficiently.

4.3 EARLY PROGRAM STAGES

In the early stages of the program, the T&E function is often handled by matrix support from the materiel command. Matrix T&E support or the Deputy for T&E should be responsible for development of the T&E sections of the Request for Proposal (RFP). Although ultimate responsibility for the RFP is shared by the PM and the Primary Contracting Officer (PCO), the Deputy for T&E is responsible for creating several sections: the test schedule, test program funding (projections), test data requirements for the program (test reports, plans, procedures, quick-look reports, etc.), the test section of the Statement of Work (SOW), portions of the Acquisition Plan, Information for Proposal Preparation (IFPP), and, if a joint acquisition program, feedback on the Capability Development Document (CDD) integrating multiple Services' requirements.

4.3.1 Memorandums

Early in the program, another task of the Deputy for T&E is the arrangement of any Memorandums of Agreement or Understanding (MOAs/MOUs) between Services, North Atlantic Treaty Organization (NATO) countries, test organizations, etc., which outline the responsibilities of each organization.


The RFP and SOW outline contractor/government obligations and arrangements on the access to and use of test facilities (contractor or government owned).

4.3.2 Test Data Management

The Deputy for T&E may have approval authority for all contractor-created test plans, procedures, and reports. The Deputy for T&E must have access to all contractor testing and test results, and the Deputy for T&E is responsible for disseminating the results to government agencies that need this data. Additionally, the Deputy for T&E creates report formats and time lines for contractor submittal, government approval, etc.

The data requirements for the entire test program are outlined in the Contract Data Requirements List (CDRL). The Deputy for T&E should review the Acquisition Management Systems and Data Requirements Control List1 for relevant test Data Item Descriptions (DIDs). (Examples can be found in Appendix C.) The Deputy for T&E provides input to this section of the RFP early in the program. The Deputy for T&E ensures that the office and all associated test organizations requiring the information receive the test documentation on time. Usually, the contractor sends the data packages directly to the Deputy for T&E, who, in turn, maintains a distribution list trimmed to the minimum number of copies for agencies needing that information to perform their mission and oversight responsibilities. It is important for the Deputy for T&E to use an integrated test program and to request contractor test plans and procedures well in advance of the actual test performance to ensure that the office of the Deputy for T&E has time to approve the procedures or implement modifications.
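The lead-time discipline described above can be illustrated with a notional Python sketch. The deliverable titles, dates, and the 30-day review window below are illustrative assumptions, not requirements from this guide; the sketch simply flags contractor test documents that arrive too close to their test events for adequate government review.

    from datetime import date, timedelta

    # Notional CDRL deliverables: (title, contractor due date, associated test event date)
    deliverables = [
        ("Contractor DT test plan", date(2024, 3, 1), date(2024, 3, 20)),
        ("Quick-look report format", date(2024, 2, 1), date(2024, 3, 20)),
    ]

    REVIEW_WINDOW = timedelta(days=30)  # assumed government review/approval time

    for title, due, event in deliverables:
        # Flag any deliverable that leaves less than the assumed review window
        if event - due < REVIEW_WINDOW:
            print(f"WARNING: '{title}' due {due} leaves less than "
                  f"{REVIEW_WINDOW.days} days before the {event} test event.")
        else:
            print(f"OK: '{title}' allows adequate review time.")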

The Deputy for T&E must receive the test results and reports on time to enable the office of the Deputy for T&E, the PM, and higher authorities to make program decisions. The data received should be tailored to provide the minimum information needed. The Deputy for T&E must be aware that data requirements in excess of the minimum needed may lead to an unnecessary increase in overall program cost. For data that are needed quickly and informally (at least initially), the Deputy for T&E can request Quick-Look Reports that give test results immediately after test performance. The Deputy for T&E is also responsible for coordinating with the contractor on all report formats (the in-house contractor format is acceptable in most cases).

The contract must specify the data the contractor will supply the Operational Test Agency (OTA). Unlike Development Test and Evaluation (DT&E), the contractor will not be making the Operational Test and Evaluation (OT&E) plans, procedures, or reports. These documents are the responsibility of the OTA. The PMO Deputy for T&E should include the OTA on the distribution list for all test documents that are of concern during the DT&E phase of testing so the OTA will be informed of test item progress and previous testing. In this way, the OTA will be informed when developing its own test plans and procedures for OT&E. In fact, OTA representatives should attend the CDRL Review Board and provide the PMO with a list of the types of documents the OTA will need. The Deputy for T&E should coordinate the test sections of this data list with the OTA and indicate concerns at that meeting. All contractor test reports should be made available to the OTA. In return, the Deputy for T&E must stay informed of all OTA activities, understand the test procedures and plans, and receive the test reports. Unlike DT&E and contractor documentation, the PMO Deputy for T&E will not have report or document approval authority for OT&E items. The Deputy for T&E is responsible for keeping the PM informed of OT&E results.


4.3.3 Test Schedule Formulation

An important task the Deputy for T&E has during the creation of the RFP is the test program schedule. Initially, the PM will need contractor predictions of the hardware (and software in some cases) availability dates for models, prototypes, mock-ups, full-scale models, etc., once the contract is awarded. The Deputy for T&E uses this information and the early test planning in the Technology Development Strategy (TDS) document to create a realistic front-end schedule of the in-house testing the contractor will conduct before government testing (Development Testing (DT) and Operational Testing (OT)). Then, a draft integrated schedule (Part II, TEMP) is developed upon which the government DT and OT schedules can be formulated and contractor support requirements determined. The Deputy for T&E can use past experience in testing similar weapon systems/acquisition items or contract test organizations that have the required experience to complete the entire test schedule. Since the test schedule is a critical contractual item, contractor input is very important. The test schedule will normally become an item for negotiation once the RFP is released and the contractor's proposal is received. Attention must be given to ensure that the test schedule is not so success-oriented that retesting of failures will cause serious program delays for either the government test agencies or the contractor.

Another important early activity to be performed by the Deputy for T&E is to coordinate the OT&E test schedule. Since the contractor may be required to provide support, the OT&E test support may need to be contractually agreed upon before contract award. Sometimes, the Deputy for T&E can formulate a straw-man schedule (based on previous experience) and present this schedule to the operational test representative at the initial T&E Integrated Product Team (IPT) meeting for review, or the Deputy for T&E can contact the OTA and arrange a meeting to discuss the new program. In the meeting, time requirements envisioned by the OTA can be discussed. Input from that meeting then goes into the RFP and to the PM. The test schedule must allow time for DT&E testing and OT&E testing when testing is not combined or test assets are limited. Before setup of Initial Operational Test and Evaluation (IOT&E), certification of readiness for IOT&E may require a time gap for review of DT&E test results and refurbishment or corrections of deficiencies discovered during DT&E, etc. The test schedule for DT&E should not be overrun so that the IOT&E test schedule is adversely impacted, reducing program schedule time with inadequate operational testing or rushing the reporting of IOT&E results. For example, if the DT&E schedule slips 6 months, the OT&E schedule and milestone decision should slip also. The IOT&E should not be shortened just to make a milestone decision date.
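The slip relationship in the example above can be expressed as a short, notional Python sketch. The milestone names and dates are illustrative assumptions; the sketch simply propagates a DT&E slip to the dependent OT&E and milestone dates rather than compressing IOT&E.

    from datetime import date, timedelta

    # Notional integrated schedule (illustrative dates only)
    schedule = {
        "DT&E complete": date(2024, 6, 1),
        "IOT&E start": date(2024, 9, 1),
        "Milestone decision": date(2025, 1, 15),
    }

    def propagate_slip(sched, slipped_event, slip_days):
        """Slip the named event and every later event by the same amount,
        preserving the time reserved for operational testing."""
        slip = timedelta(days=slip_days)
        baseline = sched[slipped_event]
        return {name: (d + slip if d >= baseline else d) for name, d in sched.items()}

    # A 6-month (approximately 180-day) DT&E slip moves IOT&E and the milestone, too.
    revised = propagate_slip(schedule, "DT&E complete", 180)
    for name, d in revised.items():
        print(f"{name}: {d}")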

4.3.4 Programmatic Environmental Analysis

The PMO personnel should be sensitive to the potential environmental consequences of system materials, operations, and disposal requirements. Public laws (Title 40, Code of Federal Regulations (CFR), Parts 1500-1508; National Environmental Policy Act (NEPA) Regulations; Executive Order 12114, Environmental Effects Abroad of Major Federal Actions; DoD 5000 series; etc.) require analysis of hazardous materials and appropriate mitigation measures during each acquisition phase. As stated in DoD 5000.2-R, "Planning shall consider the potential testing impacts on the environment."

Litigation resulting in personal fines and imprisonment for government employees has raised environmental awareness at test ranges and facilities. Environmental Impact Statements (EISs) (supported by long, thorough studies and public testimony) or Environmental Analyses and Assessments are generally required before any system testing can be initiated.


4.4 PMO/CONTRACTOR TEST MANAGEMENT

The PMO will, in most cases, have a contractor test section counterpart. With this counterpart, the Deputy for T&E works out the detailed test planning, creation of schedules, etc., for the entire test program. The PMO uses input from all sources (contracts, development test agencies, OTAs, higher headquarters, etc.) to formulate the test program's length, scope, and necessary details. The Deputy for T&E ensures that the RFP reflects the test program envisioned and the contractor's role in the acquisition process. The Deputy for T&E also ensures the RFP includes provisions for government attendance at contractors' tests and that all contractor test results are provided to the government.

After the RFP has been issued and the contractor has responded, the proposal is reviewed by the PMO. The Deputy for T&E is responsible for performing a technical evaluation on the test portions of the proposal. In this technical evaluation, the Deputy for T&E compares the proposal to the SOW, test schedule, IFPP, etc., and reviews the contractor's cost of each testing item. This is an iterative process of refining, clarifying, and modifying that will ensure the final contract between the PMO and the prime contractor (and subcontractors) contains all test-related tasks and is priced within the scope of the proposed test program. Once agreement on the contractor's technical approach is reached, the Deputy for T&E is responsible for giving inputs to the government contracting officer during contract negotiations. The contract deliverables requested by the contracting officer are assigned Contract Line Item Numbers (CLINs), which are created by the Deputy for T&E. This will ensure the contractor delivers the required performances at specified intervals during the life of the contract. Usually, there will be separate contracts for development and production of the acquisition item. For each type of contract, the Deputy for T&E has the responsibility to provide the PCO and PM with the T&E input.

4.5 INTEGRATED PRODUCT TEAMS FOR TEST AND EVALUATION

Before the final version of the RFP is created, the Deputy for T&E should form an IPT, a test planning/integration working group. This group includes the OTA, the development test agency, organizations that may be jointly acquiring the same system, the test supporting agencies, operational users, and any other organizations that will be involved in the test program by providing test support or by conducting, evaluating, or reporting on testing. The functions of the group are to: facilitate the use of testing expertise, instrumentation, facilities, simulations, and models; integrate test requirements; accelerate the TEMP coordination process; resolve test cost and scheduling problems; and provide a forum to ensure T&E of the system is coordinated. The existence of a test coordinating group does not alter the responsibilities of any command or headquarters; in the event of disagreement within the group, the issue is resolved through normal command/staff channels. In later meetings, the contractor participates in this test planning group; however, the contractor may not be selected by the time the first meetings are held.

The purposes of these meetings are to review and assist in the development of early test documentation, including the TEMP, and to agree on basic test program schedules, scope, support, etc. The TEMP serves as the top-level test management document for the acquisition program and is updated as the changing program dictates.

4.6 TEST PROGRAM FUNDING/BUDGETING

The PMO must identify funds for testing very early so that test resources can be obtained.


The Deputy for T&E uses the acquisition schedule, TEMP, and other program and test documentation to identify test resource requirements. The Deputy for T&E coordinates these requirements with the contractor and government organizations that have the test facilities to ensure their availability for testing. The Deputy for T&E ensures that test costs include contractor and government test costs. The contractor's test costs are normally outlined adequately in the proposal; however, the government test ranges, instrumentation, and test-support resource costs must be determined by other means. Usually, the Deputy for T&E contacts the test organization and outlines the test program requirements, and the test organization sends the program office an estimate of the test program costs. The Deputy for T&E then obtains cost estimates from all test sources that the Deputy for T&E anticipates using and supplies this information to the PM. The Deputy for T&E must also ensure that any program funding reductions are not absorbed entirely by the test program. Some cutbacks may be necessary and allowable, but the test program must supply the PM, other defense decision-making authorities, and Congress with enough information to make program milestone decisions.
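The cost roll-up described above can be sketched in a few lines of Python. The organizations and dollar figures below are invented for illustration; the sketch simply totals contractor and government test cost estimates by source so the Deputy for T&E can give the PM one consolidated figure.

    # Notional test cost estimates, in thousands of dollars (illustrative only)
    estimates = [
        ("Prime contractor", "DT&E flight test support", 1200),
        ("Government range A", "Range time and instrumentation", 850),
        ("Government range A", "Threat simulators", 300),
        ("OTA", "IOT&E support", 600),
    ]

    totals = {}
    for source, item, cost_k in estimates:
        totals[source] = totals.get(source, 0) + cost_k

    grand_total = sum(totals.values())
    for source, subtotal in totals.items():
        print(f"{source}: ${subtotal}K")
    print(f"Total estimated test program cost: ${grand_total}K")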

4.7 TECHNICAL REVIEWS, DESIGN REVIEWS, AND AUDITS

The role of the Deputy for T&E changes slightly during the contractor's technical reviews, design reviews, physical and functional configuration audits, etc. Usually, the Deputy for T&E plans, directs, or monitors government testing; however, in the reviews and audits, the Deputy examines the contractor's approach to the test problem and evaluates the validity of the process and the accuracy of the contractor's results. The Deputy for T&E uses personal experience and background in T&E to assess whether the contractor did enough or too little testing, whether the tests were biased in any way, and whether they followed a logical progression using the minimum of time, effort, and funds. If the Deputy for T&E finds any discrepancies, the Deputy must inform the contractor, the PM, and the PCO to validate the conclusions before effecting corrections. Each type of review or audit will have a different focus and orientation, but the Deputy for T&E will always be concerned with the testing process and how it is carried out. After each review, the Deputy for T&E should document all observations for future reference.

4.8 CONTRACTOR TESTING

The Deputy for T&E is responsible for ensuring that contractor-conducted tests are monitored by the government. The Deputy for T&E must also be given access to all contractor internal data, test results, and test reports related to the acquisition program. Usually, the contract requires that government representatives be informed ahead of time of any testing (significant or otherwise) the contractor conducts so the government can arrange to witness certain testing or receive results of the tests. Further, the contractor's internal data should be available as a contract provision. The Deputy for T&E must ensure that government test personnel (DT&E/OT&E) have access to contractor test results. It would be desirable to have all testers observe some contractor tests to help develop confidence in the results and identify areas of risk.

4.9 SPECIFICATIONS

Within the program office, the engineering section is usually tasked to create the system performance specifications for release of the RFP. The contractor is then tasked with creating the specification documentation called out by the contract, which will be delivered once the item/system design is formalized for production.


Figure 4-1. Lessons Learned from OT&E for the PM

• UNDERSTAND THE POLICIES
• ORGANIZE FOR T&E
• KEEP SYSTEM REQUIREMENTS DOCUMENTS CURRENT
• AGONIZE OVER SYSTEM THRESHOLDS
• WORK CLOSELY WITH THE OPERATIONAL TEST DIRECTOR
• DON'T FORGET ABOUT OPERATIONAL SUITABILITY
• MAKE FINAL DT&E A REHEARSAL FOR IOT&E
• PREPARE INTERFACING SYSTEMS FOR YOUR IOT&E
• MANAGE SOFTWARE TESTING CLOSELY
• TRACK AVAILABILITY OF TEST RESOURCES AND TEST SUPPORT PERSONNEL/FACILITIES

SOURCE: Matt Reynolds, NAVSEA.

The Deputy for T&E performs an important function in specification formulation by reviewing the specifications to determine whether performance parameters are testable; whether current, state-of-the-art technology can determine (during the DT&E test phase) if the performance specifications are being met by the acquisition item; or whether the specified parameters are too "tight." A specification is too "tight" if the requirements (Section 3) are impossible to meet or demonstrate, or if the specification has no impact on the form, fit, or function of the end item, the system it will become a part of, or the system with which it will interact. The Deputy for T&E must determine if test objectives can be adequately formulated from those specifications that will provide thresholds of performance, minimum and maximum standards, and reasonable operating conditions for the end item's final mission and operating environment. Section 4 of the specification shapes the DT&E testing scenario, test ranges, test support, targets, etc., and is therefore very important to the Deputy for T&E.
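A notional Python sketch of the Section 3/Section 4 cross-check described above follows. The parameter names, requirement values, and verification methods are illustrative assumptions; the sketch simply flags any Section 3 performance parameter that has no corresponding Section 4 verification method.

    # Notional Section 3 performance parameters (illustrative)
    section_3 = {
        "Maximum speed": ">= 450 knots",
        "Detection range": ">= 40 nmi",
        "Mean time between failures": ">= 200 hours",
    }

    # Notional Section 4 verification methods keyed to Section 3 parameters
    section_4 = {
        "Maximum speed": "test (instrumented flight test)",
        "Detection range": "test (open-air range with calibrated targets)",
        # "Mean time between failures" intentionally omitted to show the check
    }

    for parameter, requirement in section_3.items():
        method = section_4.get(parameter)
        if method is None:
            print(f"NOT TESTABLE AS WRITTEN: '{parameter}' ({requirement}) "
                  "has no Section 4 verification method.")
        else:
            print(f"OK: '{parameter}' verified by {method}.")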

4.10 INDEPENDENT TEST AND EVALUATION AGENCIES

The PMO Deputy for T&E does not have direct control over government-owned test resources, test facilities, test ranges, test personnel, etc. Therefore, the Deputy for T&E must depend on the DT or OT organizations controlling them and stay involved with the test agency activities.

The amount of involvement depends on the item being tested; its complexity, cost, and characteristics; the length of time for testing; the amount of test funds; etc. Usually, the "nuts and bolts" detailed test plans and procedures are written by the test organizations controlling the test resources, with input and guidance from the Program Office Deputy for T&E. The Deputy for T&E is responsible for ensuring that the tests (DT&E) are performed using test objectives based on the specifications and that the requirements of timeliness, accuracy, and minimal cost are met by the test program design. During the testing, the Deputy for T&E monitors test results. The test agencies (DT/OT) submit a copy of their report to the Program Office at the end of testing, usually to the Office of the Deputy for T&E. The Army is the only Service to have a designated independent evaluation agency, which provides feedback to the program office.

4.11 PMO RESPONSIBILITIES FOR OPERATIONAL TEST AND EVALUATION

In the government PMO, there should be a section responsible for T&E.


Besides being responsible for DT&E support to the PM, this section should be responsible for program coordination with the OT&E agency (Figure 4-1). The offices of the systems engineer or the Deputy for T&E may be designated to provide this support to the PM. In some Services, responsibilities of the Deputy for T&E include coordination of test resources for all phases of OT&E.

4.11.1 Contract Responsibilities

The Deputy for T&E or a T&E representative ensures that certain sections of the RFP contain sufficient allowance for T&E support by contractors. This applies whether the contract is for a development item, a production item (limited production, such as Low Rate Initial Production (LRIP), or Full Rate Production (FRP)), or the enhancement/upgrade of portions of a weapons system. Where allowed within the law, contractor support for OT&E should be considered to help resolve basic issues such as data collection requirements, test resources, contractor test support, and funding.

In the overall portion of the RFP, government personnel, especially those in the OTAs, must be guaranteed access to the contractor's development facilities, particularly during the DT&E phase. Government representatives must be allowed to observe all contractor in-house testing and have access to test data and reports.

4.11.2 Data Requirements

The contract must specify the data the contractor will supply the OTA. Unlike DT&E, the contractor will not be making the OT&E plans, procedures, or reports. These documents are the responsibility of the OTA. The PMO Deputy for T&E should include the OTA on the distribution list for all test documents that are of concern during the DT&E phase of testing so the OTA will be informed of test item progress and previous testing. In this way, the OTA will be informed when developing its own test plans and procedures for OT&E. In fact, OTA representatives should attend the CDRL Review Board and provide the PMO with a list of the types of documents the OTA will need. The Deputy for T&E should coordinate the test sections of this data list with the OTA and indicate concerns at that meeting. All contractor test reports should be made available to the OTA. In return, the Deputy for T&E must stay informed of all OTA activities, understand the test procedures and plans, and receive the test reports. Unlike DT&E, the PMO Deputy for T&E will not have report or document approval authority as the Deputy for T&E does over contractor documentation. The Deputy for T&E is always responsible for keeping the PM informed of OT&E results.

4.11.3 Test Schedule

Another important early activity the Deputy for T&E must accomplish is to coordinate the OT&E test schedule. Since the contractor may be required to provide support, the OT&E test support may need to be contractually agreed upon before contract award. Sometimes, the Deputy for T&E can formulate a draft schedule (based on previous experience) and present this schedule to the operational test representative at the initial Test Planning Working Group (TPWG) for review, or the Deputy for T&E can contact the OTA and arrange a meeting to discuss the new program. In the meeting, time requirements envisioned by the OTA can be discussed. Input from that meeting then goes into the RFP and to the PM. The test schedule must allow time for DT&E testing and OT&E testing if testing is not combined or test assets are limited. Before setup of IOT&E, the Certification of Readiness for IOT&E process may require a time gap for review of DT&E test results and refurbishment or corrections of deficiencies discovered during DT&E, etc. The test schedule for DT&E should not be so "success-oriented" that the IOT&E test schedule is adversely impacted, not allowing enough time for adequate operational testing or the reporting of IOT&E results.


4.11.4 Contractor Support

The Deputy for T&E provides all T&E input to the RFP/SOW. The Deputy for T&E must determine, before the beginning of the program acquisition phase, whether the contractor will be involved in supporting IOT&E and, if so, to what extent. According to Title 10, United States Code (U.S.C.), the system contractor can only be involved in the conduct of IOT&E if, once the item is fielded, tactics and doctrine say the contractor will be providing support or operating that item during combat. If not, no system contractor support is allowed during IOT&E. Before IOT&E, however, the contractor may be tasked with providing training, training aids, and handbooks to the Service training cadre so they can train the IOT&E users and maintenance personnel. In addition, the contractor must be required to provide sufficient spare parts for the operational maintenance personnel to maintain the test item while undergoing operational testing. These support items must be agreed upon by the PMO and OTA and must contractually bind the contractor. If, however, the contractor will be required to provide higher-level maintenance of the item for the duration of the IOT&E, data collection on those functions will be delayed until a subsequent Follow-on Operational Test and Evaluation (FOT&E).

4.11.5 Statement of Work

One of the most important documents receiving input from the Deputy for T&E is the SOW. The Deputy for T&E must outline all required or anticipated contractor support for DT&E and OT&E. This document outlines data requirements, contractor-conducted or supported testing, government involvement (access to contractor data, tests, and results), operational test support, and any other specific test requirements the contractor will be tasked to perform during the duration of the contract.

4.11.6 OT&E Funding

The Deputy for T&E provides the PM estimates of PMO test program costs to conduct OT&E. This funding includes contractor and government test support for which the program office directly or indirectly will be responsible. Since Service OTAs fund differently, program office funding for conducting OT&E varies. The Deputy for T&E must determine these costs and inform the PM.

4.11.7 Test and Evaluation Master Plan

The Deputy for T&E is responsible for managing the TEMP throughout the test program. The OTA usually is tasked to complete the OT section of the TEMP and outline its proposed test program through all phases of OT&E. The resources required for OT&E should also be entered in Part V. It is important that the OTA keep the TEMP updated regularly so that test organizations involved in OT&E understand the scope of their test support. Further, if any upgrades, improvements, or enhancements to the fielded weapon system occur, the TEMP must be updated or a new one created to outline new DT and OT requirements.

4.11.8 Program Management Office Support for OT&E

Even though operational testing is performed by an independent organization, the PM plays an important role in its planning, reporting, and funding. The PM must coordinate program activities with the test community in the T&E Working-level Integrated Product Team (WIPT), especially the OTAs. The PM ensures that testing can address the critical issues and provides feedback from OT&E testing activities to contractors.

At each milestone review, the PM is required to brief the decision authority on the testing planned and completed on the program.


It is important that PMO personnel have a good understanding of the test program and that they work with the operational test community. This will ensure OT&E is well planned and adequate resources are available. The PMO should involve the test community by organizing test coordinating groups at program initiation and by establishing channels of communication between the PMO and the key test organizations. The PMO can often avoid misunderstandings by aggressively monitoring the system testing and providing up-to-date information to key personnel in the Office of the Secretary of Defense and the Services. The PMO staff should keep appropriate members of the test community well informed concerning system problems and the actions taken by the PMO to correct them. The PMO must assure that contractor and government DT&E supports the decision to certify the system's readiness for IOT&E.

4.11.9 Support for IOT&E

The Certification of Readiness for IOT&E process provides a forum for a final review of test results and support required by the IOT&E. The Deputy for T&E must ensure the contract portions adequately cover the scope of testing as outlined by the OTA. The program office may want to provide an observer to represent the Deputy for T&E during the actual testing. The Deputy for T&E's involvement in IOT&E will be to monitor and coordinate; the Deputy for T&E will keep the PM informed of progress and problems that arise during testing and will monitor required PMO support to the test organization. Sufficient LRIP items must be manufactured to run a complete and adequate OT&E program. The DOT&E determines the number of LRIP items for IOT&E on oversight programs, and the OTA makes this determination for all others.2 For problems requiring program office action, the Deputy for T&E will be the point of contact.

The Deputy for T&E will be concerned with IOT&E of the LRIP units after a limited number are produced. The IOT&E must be closely monitored so that a FRP decision can be made. As in the operational assessments, the Deputy for T&E will be monitoring test procedures and results and keeping the PM informed. If the item does not succeed during IOT&E, a new process of DT&E or a modification may result, and the Deputy for T&E will be involved (as at any new program's inception). If the item passes IOT&E testing and is produced at full rate, the Deputy for T&E will be responsible for ensuring that testing of those production items is adequate to ensure that the end items physically and functionally resemble the development items.

4.11.10 FOT&E and Modifications, Upgrades, Enhancements, or Additions

During FOT&E, the Deputy for T&E monitors the testing, and the level of contractor involvement is usually negotiated. The Deputy for T&E should receive any reports generated by the operational testers during this time. Any deficiencies noted during FOT&E should be evaluated by the PMO, which may decide to incorporate upgrades, enhancements, or additions to the current system or in future increments. If the PM and the engineering section of the program office design or develop modifications that are incorporated into the weapon system design, additional FOT&E may be required.

Once a weapon system is fielded, portions of that system may become obsolete, ineffective, or deficient and may need replacing, upgrading, or enhancing to ensure the weapon system meets current and future requirements. The Deputy for T&E plays a vital role in this process. Modifications to existing weapon systems may be managed as an entirely new weapon system acquisition. However, since these are changes to existing systems, the Deputy for T&E is responsible for determining whether these enhancements degrade the existing system, whether they are compatible with its interfaces and functions, and whether Non-Developmental Items (NDIs) require retest or the entire weapon system needs reverification.


The Deputy for T&E must plan the test program's funding, schedule, test program, and contract provisions with these items in mind. A new TEMP may have to be generated, or the original weapon system TEMP modified and re-coordinated with the test organizations. The design of the DT&E and FOT&E program usually requires coordination with the engineering, contracting, and program management IPTs of the program office.

4.11.11 Test Resources

During all phases of OT, the Deputy for T&E must coordinate with the operational testers to ensure they have the test articles needed to accomplish their mission. Test resources will be either contractor-provided or government-provided. The contractor resources must be covered in the contract, whether in the development contract or the production contract. Government test resources needed are determined by the operational testers. They usually coordinate the test ranges, test support, and the user personnel for testing. The PM programs funding for support of OT. Funding for OT&E is identified in the TEMP and funded in the PMO's budget. The Services' policy specifies how the OTAs develop and manage their own budgets for operational testing.

4.12 SUMMARY

Staffing requirements in the PMO vary with the program phase and the T&E workload. T&E expertise is essential in the early planning stages but can be provided through matrix support. The Deputy for T&E may be subordinate to the chief engineer in early phases but should become a separate staff element after prototype testing. Changing critical players can destroy established working relationships and abrogate prior agreements if continuity is not maintained. The PMO management of T&E must provide for an integrated focus and a smooth transition from one staff-support mode to the next.

The PMO should be proactive in its relations with the Service OTA. There are many opportunities to educate the OTA on system characteristics and expected performance. Early OTA input to design considerations and requirements clarification can reduce surprises downstream. Operational testing is an essential component of the system development and decision-making process. It can be used to facilitate system development, or it may become an impediment. In many cases, the PMO attitude toward operational testing and the OTA will influence which role the OTA assumes.


ENDNOTES

1. DoD 5010.12-L, Acquisition Management Systems and Data Requirements Control List (AMSDL), April 2001.

2. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.


5
TEST-RELATED DOCUMENTATION

5.1 INTRODUCTION

During the course of a defense acquisition program, many documents are developed that have significance for those responsible for testing and evaluating the system. This chapter is designed to provide background on some of these documents.

As Figure 5-1 shows, test-related documentation spans a broad range of materials. It includes requirements documentation such as the Capability Development Document (CDD) and Capability Production Document (CPD); program decision documentation such as the Acquisition Decision Memorandum (ADM) with exit criteria; and program management documentation such as the Acquisition Strategy, baseline documentation, the Systems Engineering Plan (SEP), logistics support planning, and the Test and Evaluation Master Plan (TEMP). Of importance to Program Managers (PMs) and to Test and Evaluation (T&E) managers are additional test program documents such as specific test designs, test plans, Outline Test Plans/Test Program Outlines (OTPs/TPOs), evaluation plans, and test reports. This chapter concludes with a description of the End-of-Test Phase and Beyond Low Rate Initial Production (BLRIP) Reports, and two special-purpose T&E status reports that are used to support the milestone decision process.

5.2 REQUIREMENTS DOCUMENTATION

5.2.1 Continuing Mission Capability Analyses

The Joint Chiefs, via the Joint Requirements Oversight Council (JROC), are required to conduct continuing mission analyses of their assigned areas of responsibility.1 These Functional Analyses (Area, Needs, Solutions) may result in recommendations to initiate new acquisition programs to reduce or eliminate operational deficiencies. If a need cannot be met through changes in Doctrine, Organization, Training, Leadership and Education, Personnel, and Facilities (DOTLPF), and a Materiel (M) solution is required, the needed capability is described first in an Initial Capabilities Document (ICD) and then in the CDD. When the cost of a proposed acquisition program is estimated to exceed the limits specified in Operation of the Defense Acquisition System, the proposed program is considered a Major Defense Acquisition Program (MDAP) and requires JROC approval (JROC Interest).2 Lesser programs for Service action are designated Joint Integration or Independent programs. The ICD is completed at the beginning of the analyses for a mission area; the CDD and CPD are program-/increment-specific and are reviewed periodically to evaluate necessary system modifications.

5.2.2 Initial Capabilities Document

The ICD makes the case to establish the need for a materiel approach to resolve a specific capability gap. The ICD supports the Analysis of Alternatives (AoA), development of the Technology Development Strategy (TDS), and various milestone decisions. The ICD defines the capability gap in terms of the functional area(s), relevant range of military operations, time, obstacles to overcome, and key attributes with appropriate Measures of Effectiveness (MOEs). Once approved, the ICD is not normally updated. The format for the ICD is found in CJCSM [Chairman of the Joint Chiefs of Staff Manual] 3170.01A.3


5.2.3 Capability Development Document

The CDD outlines an affordable increment of militarily useful and supportable operational capability that can be effectively developed, produced or acquired, deployed, and sustained. Each increment of capability will have its own set of attributes and associated performance values, with thresholds and objectives established by the sponsor (Service) with input from the user. The CDD performance attributes apply to only one increment and include Key Performance Parameters (KPPs) and other parameters that will guide the development, demonstration, and testing of the current increment. The CDD will outline the overall strategy to develop the full or complete capability by describing all increments (Figure 5-2). The CDD will be approved at Milestone B and is not normally updated. The format is found in CJCSM 3170.01A.4

5.2.4 Capability Production Document

The CPD addresses the production attributes and quantities specific to a single increment and is finalized by the sponsor after the Design Readiness Review (DRR), when projected capabilities of the increment in development have been specified with sufficient accuracy to begin production. Design trades during System Development and Demonstration (SDD) should have led to the final production design needed for Milestone C.

[Figure 5-1. Test-Related Documentation (graphic): requirements and decision documents (Mission Need Statement, CDD/CPD, System Threat Assessment, AoA, Acquisition Decision Memorandum with exit criteria, Acquisition Program Baseline), program management documents (acquisition strategy, technical management (SEP), Integrated Logistics Support (ILS), SOW, WBS, and system specifications: system performance, item performance, item detail, process, material), and test program documents (T&E strategy, TEMP, test design, evaluation plan, test resource plan, test reports, EOA/OA reports, evaluation report, Live Fire test report, BLRIP report, and special-purpose T&E status reports), all supporting the Program Manager.]


Initial Operational Test and Evaluation (IOT&E) will normally test to the values in the CPD. The threshold and objective values from the CDD are, therefore, superseded by the specific production values detailed in the CPD. Reduction in any KPP threshold value will require reassessment of the military utility of the reduced capability. The CPD will always reference the originating ICD. The format is found in CJCSM 3170.01A.5

5.2.5 System Threat Assessment (STA)

An STA is prepared, starting at Milestone B, by the Department of Defense (DoD) Component Intelligence Command or Agency and, for Acquisition Category (ACAT) ID programs, is validated by the Defense Intelligence Agency (DIA).6 The STA, for Defense Acquisition Board (DAB) programs, will contain a concise description of the projected future operational threat environment, the system-specific threat, the reactive threat that could affect program decisions, and, when appropriate, the results of interactive analysis obtained by the Service PM when evaluating the program against the threat. Threat projections start at the Initial Operational Capability (IOC) and extend over the following 10 years. The STA provides the basis for the test design of threat scenarios and the acquisition of appropriate threat targets, equipment, or surrogates. It provides threat data for Development Test and Evaluation (DT&E) and Operational Test and Evaluation (OT&E). Vulnerability and lethality analyses during Live Fire Testing (LFT) of ACAT I and II systems are contingent on valid threat descriptions. A summary of the STA is included in Part 1 of the TEMP.

[Figure 5-2. Requirements Definition Process (graphic): a very broad statement of mission need ("destroy a hardened target in Central Asia that is protected by a 21st century air defense system") is refined across milestones into concepts (ICBM, cruise missile, manned bomber, attack aircraft), then into Capability Development Document performance parameters (speed, range, RCS, altitude, warhead size, CEP, survivability, lethality) with threshold values, and finally, through the SFR, PDR, and item specification, into final system characteristics and final parameter values (e.g., cruise missile variants such as Tomahawk or ALCM).]

Page 70: DAU T&E Management Guide.pdf

5-4

5.3 PROGRAM DECISION DOCUMENTATION

5.3.1 Acquisition Decision Memorandum

Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) decisions at major defense ACAT ID milestones are recorded in a document known as an ADM. The ADM documents a USD(AT&L) decision on program progress and on the Acquisition Program Baseline (APB) at milestones and decision reviews. In conjunction with an ADM, and its included exit criteria for the next phase, the APB is a primary program guidance document providing KPP thresholds/objectives for system cost, schedule, and performance.

5.3.2 Analysis of Alternatives

An AoA is normally prepared by a DoD Component agency (or Principal Staff Assistant (PSA) for ACAT IA programs) other than the Program Management Office (PMO) for each milestone review beginning at Milestone A. The AoA aids decision makers by examining the relative advantages and disadvantages of program alternatives, shows the sensitivity of each alternative to possible changes in key assumptions, and provides the rationale for each option. The guidance in the Defense Acquisition Guidebook, Chapter 4, requires a clear linkage between the AoA, system requirements, and system evaluation MOEs.

The driving factor behind this linkage is the decision maker's reluctance to accept modeling or simulation projections of future system performance without actual test data that validate AoA results.

5.4 PROGRAM MANAGEMENT DOCUMENTATION

5.4.1 Acquisition Strategy

An event-based acquisition strategy must be formulated at the start of a development program.

An event-driven acquisition strategy explicitly links program decisions to demonstrated accomplishments in development, testing, and initial production. The strategy constitutes a broad set of concepts that provide direction and control for the overall development and production effort. The acquisition strategy is updated at each milestone and decision review using a Working-level Integrated Product Team (WIPT) structure throughout the life of a program. The level of detail reflected in the acquisition strategy can be expected to increase as a program matures. The acquisition strategy serves as a tailored conceptual basis for formulating other program functional plans such as the TEMP.

It is important that T&E interests be represented as the acquisition strategy is formulated because the acquisition strategy should:

• Provide an overview of the T&E planned for the program, ensuring that adequate T&E is conducted prior to the production decision;

• Discuss plans for providing adequate quantities of test hardware;

• Describe levels of concurrence and combined Development Test/Operational Test (DT/OT).

5.4.2 Baseline Documentation

The APB will initially be developed before Milestone B or program initiation and revised for each subsequent milestone. Baseline parameters represent the cost, schedule, and performance objectives and thresholds for the system in a production configuration. Each baseline influences the T&E activities in the succeeding phases. MOEs or Measures of Performance (MOPs) shall be used in describing needed capabilities early in a program. Guidance on the formulation of baselines is found in the Defense Acquisition Guidebook.7 Performance demonstrated during T&E of production systems must meet or exceed the thresholds.


The thresholds establish deviation limits (an actual or anticipated breach triggers reports) for KPPs beyond which the PM may not trade off cost, schedule, or performance without authorization by the Milestone Decision Authority (MDA). Baseline and test documentation must reflect the same expectations for system performance. The total number of performance parameters shall be the minimum number needed to characterize the major drivers of operational effectiveness and suitability, schedule, technical progress, and cost for a production system intended for deployment. The performance parameters may not completely define operational effectiveness or suitability.
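The threshold comparison described above can be illustrated with a notional Python sketch. The KPP names and values are invented and are not from any real baseline; the sketch compares demonstrated performance from T&E against APB threshold and objective values and flags any threshold breach.

    # Notional APB performance parameters: (threshold, objective), where larger is better
    apb = {
        "Probability of kill": (0.70, 0.85),
        "Mission reliability": (0.90, 0.95),
    }

    # Notional demonstrated values from T&E
    demonstrated = {
        "Probability of kill": 0.74,
        "Mission reliability": 0.87,
    }

    for kpp, (threshold, objective) in apb.items():
        value = demonstrated[kpp]
        if value < threshold:
            print(f"BREACH: {kpp} demonstrated {value}, below threshold {threshold}; "
                  "requires MDA-level attention.")
        elif value >= objective:
            print(f"{kpp}: objective met ({value} >= {objective}).")
        else:
            print(f"{kpp}: threshold met ({value} >= {threshold}), objective not yet reached.")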

5.4.3 Acquisition Logistics Planning

Supportability analyses are a composite of all support considerations necessary to ensure the effective and economical support of a system at all levels of maintenance for its programmed life cycle. Support concepts describe the overall logistics support program and include logistics requirements, tasks, and milestones for the current and succeeding phases of the program. The analyses serve as the source document for logistics support testing requirements.

Guidelines for logistics support analyses are documented in Logistics Support Analysis.8 This standard identifies how T&E programs should be planned to serve the following three logistics supportability objectives:

(1) Provide measured data for input into system-level estimates of readiness, operational costs, and logistics support resource requirements;

(2) Expose supportability problems so they can be corrected prior to deployment;

(3) Demonstrate contractor compliance with quantitative supportability-related design requirements.

Development of an effective T&E program requires close coordination of efforts among all systems engineering disciplines, especially those involved in logistics support analyses. Areas of particular interest include Reliability, Availability, and Maintainability (RAM), Human Systems Integration (HSI), Environmental Safety and Occupational Health (ESOH), and post-deployment evaluations. The support analyses should be drafted shortly before program start to provide a skeletal framework for Logistics Support Analysis (LSA), to identify initial logistics testing requirements that can be used as input to the TEMP, and to provide test feedback to support Integrated Logistics Support (ILS) development. Test resources will be limited early in the program.
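For the first supportability objective, measured test data feed system-level readiness estimates. As a hedged illustration (the figures are invented, and the simple ratios assume steady-state conditions with operating time treated as uptime), the notional Python sketch below computes mean time between failures and an operational availability estimate from test totals.

    # Notional RAM test totals (illustrative only)
    operating_hours = 1200.0      # total system operating time during test
    failures = 8                  # operational mission failures observed
    total_downtime_hours = 90.0   # maintenance and logistics delay time

    mtbf = operating_hours / failures                                 # mean time between failures
    mdt = total_downtime_hours / failures                             # mean downtime per failure
    ao = operating_hours / (operating_hours + total_downtime_hours)   # operational availability estimate

    print(f"MTBF: {mtbf:.1f} hours")
    print(f"Mean downtime per failure: {mdt:.1f} hours")
    print(f"Operational availability (Ao): {ao:.3f}")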

5.4.4 Specification

The system specification is a document used in development and procurement that describes the technical performance requirements for items, materials, and services, including the procedures by which it will be determined that the requirements have been met. The specification evolves over the developmental phases of the program with increasing levels of detail: system, item performance, item detail, process, and material. Section 4 of the specification identifies what procedures (inspection, demonstration, analysis, and test) will be used to verify the performance parameters listed in Section 3. Further details may be found in Military Standard (MIL-STD)-961D, DoD Standard Practice for Defense Specifications.9

5.4.5 Work Breakdown Structure (WBS)

A program WBS shall be established that provides a framework for program and technical planning, cost estimating, resource allocations, performance measurements, and status reporting. Program offices shall tailor a program WBS for each program using the guidance in the Military Handbook, Work Breakdown Structure.10


Level 2 of the WBS hierarchical structure addresses system-level T&E, with sublevels for DT&E and OT&E. Additionally, each configuration item structure includes details of the integration and test requirements.
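A notional Python sketch of the structure just described follows. The element numbers and names are illustrative assumptions (real programs tailor their WBS from the military handbook); the sketch simply places a system-level T&E element at Level 2 with DT&E and OT&E sublevels and prints the hierarchy.

    # Notional program WBS fragment (illustrative numbering and names)
    wbs = {
        "1.0 Air Vehicle System": {
            "1.1 Air Vehicle": {},
            "1.2 System Test and Evaluation": {        # Level 2 T&E element
                "1.2.1 Development Test and Evaluation": {},
                "1.2.2 Operational Test and Evaluation": {},
                "1.2.3 Test Facilities and Instrumentation": {},
            },
            "1.3 Training": {},
        }
    }

    def print_wbs(node, indent=0):
        """Recursively print WBS elements, indenting by level."""
        for element, children in node.items():
            print("  " * indent + element)
            print_wbs(children, indent + 1)

    print_wbs(wbs)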

5.5 TEST PROGRAM DOCUMENTATION

5.5.1 Test and Evaluation Master Plan

An evaluation strategy is developed by the early concept team that describes how the capabilities in the CDD will be evaluated once the system is developed. The evaluation strategy, part of the TDS, provides the foundation for development of the program TEMP at the milestone supporting program start. The TEMP is the basic planning document for T&E related to a DoD system acquisition (Figure 5-3). It is prepared by the PMO with the OT information provided by the Service Operational Test Agency (OTA). It is used by the Office of the Secretary of Defense (OSD) and the Services for planning, reviewing, and approving T&E programs and provides the basis and authority for all other detailed T&E planning documents. The TEMP identifies Critical Technical Parameters (CTPs), performance characteristics (MOEs), and Critical Operational Issues (COIs); and describes the objectives, responsibilities, resources, and schedules for all completed and planned T&E. The TEMP, in the Defense Acquisition Guidebook specified format, is required by DoD Instruction (DoDI) 5000.2 for ACAT I, IA, and designated oversight programs (see Enclosure 5 to DoDI 5000.2 for more information regarding the TEMP). The format is at Service discretion for ACAT II and III programs.

[Figure 5-3. Test Program Documentation (graphic): the ICD, CDD, STA, system specifications, APB, program schedule, and contract schedule feed the TEMP; the TEMP drives the system test plan, integrated test plan, detailed test plans, test procedures, mission planning documents, and test cards/scripts; testing (hardware, software, and system; contractor and government) produces quick-look, interim, and final test reports.]

Page 73: DAU T&E Management Guide.pdf

5-7

Issues and Criteria (COIC), with associated scopeand rationale, forming the basis for planning thesystem’s evaluations.11

5.5.2 Evaluation Plan

Evaluation planning is usually included within the test plan. Evaluation planning considers the evaluation and analysis techniques that will be required once the test data have been collected and processed. Evaluation is linked closely to the test design, especially the statistical models on which the test design is built.

The Army requires a System Evaluation Plan (SEP) describing the evaluation approach and addressing the technical and operational aspects necessary to determine the system's operational effectiveness, suitability, and survivability.

The objective of the Army’s emphasis onevaluation is to address the issues; describe theevaluation of issues that require data from sourcesother than test; state the technical or operationalissues and criteria; identify data sources (datasource matrix); state the approach to the inde-pendent evaluation; and specify the analytical planand identify program constraints.12

Evaluation plans are prepared for all systems in development by the independent evaluators during concept exploration and in coordination with the system developer. The Army SEP complements the TEMP and is developed in support of the initial TEMP. It identifies each evaluation issue and the methodology to be used to assess it and specifies requirements for exchange of information between the development/operational testers and materiel developers.

5.5.3 Test Design

Test designers need to ensure that the test is constructed to provide useful information in all areas/aspects that will lead to an assessment of the system performance. For example, a complicated, even ingenious, test that does not provide the information required by the decision makers is, in many respects, a failed endeavor. Therefore, part of the process of developing a test concept or test design (the distinction between these varies from organization to organization) should be to consider whether the test will provide the information required by the decision makers. In other words, "Are we testing the right things in the right way? Are our evaluations meaningful?"

The test design is statistical and analytical in nature and should perform the following functions:

(1) Structure and organize the approach to testing in terms of specific test objectives;
(2) Identify key MOEs and MOPs;
(3) Identify the required data and demonstrate how the data will be gathered, stored, analyzed, and used to evaluate MOEs;
(4) Indicate what part Modeling and Simulation (M&S) will play in meeting test objectives;
(5) Identify the number and type of test events and required resources.

The test design may serve as a foundation for the more detailed test plan and specifies the test objectives, events, instrumentation, methodology, data requirements, data management needs, and analysis requirements.
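As a rough illustration of items (1) and (5) above, the sketch below enumerates a full-factorial matrix of test conditions and counts the resulting test events. The factors and levels are hypothetical; in practice they would be derived from the approved CTPs, MOEs, and MOPs.

```python
# Illustrative only: enumerate a full-factorial matrix of test conditions.
# The factors and levels are hypothetical examples, not values from any TEMP
# or approved test design.
from itertools import product

factors = {
    "target_range_km": [5, 10, 20],
    "environment": ["day", "night"],
    "countermeasures": ["off", "on"],
}

# Each combination becomes one candidate test event; the count feeds the
# resource estimate called for in item (5) above.
events = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(f"{len(events)} test events required for full factorial coverage")
for number, event in enumerate(events, start=1):
    print(f"Event {number}: {event}")
```

In practice the full-factorial count is often reduced with fractional or other statistical designs, but the same bookkeeping links each planned event back to the objectives and MOEs it supports.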

5.5.4 Test Plan

The test plan is the vehicle that translates a test concept and statistical/analytical test design into concrete resources, procedures, and responsibilities. The size and complexity of a test program and its associated test plan are determined by the nature of the system being tested and the type of testing that is to be accomplished. Some major weapons systems may require large numbers of separate tests to satisfy test objectives and, thus, require a multi-volume test plan; other testing may be well-defined by a relatively brief test plan. The test plan also provides a description of the equipment configuration and known limitations to the scope of testing. The type of information typically included in a test plan is shown in Table 5-1.

Table 5-1. Sample Test Plan Contents

PRELIMINARY PAGES
  i. TITLE PAGE
  ii. APPROVAL PAGE
  iii. CONDITIONS FOR RELEASE OF TECHNICAL DATA
  iv. PREFACE (ABSTRACT)
  v. EXECUTIVE SUMMARY
  vi. TABLE OF CONTENTS*

MAIN BODY
  1. INTRODUCTION
  2. BACKGROUND
  3. TEST ITEM DESCRIPTION
  4. OVERALL TEST OBJECTIVES
  5. CONSTRAINTS AND LIMITATIONS
  6. TEST RESOURCES
  7. SAFETY REQUIREMENTS
  8. SECURITY REQUIREMENTS
  9. TEST PROJECT MANAGEMENT
  10. TEST PROCEDURES
  11. TEST REPORTING
  12. LOGISTICS
  13. ENVIRONMENTAL PROTECTION

APPENDICES
  A. TEST CONDITION MATRIX
  B. REQUIREMENTS TRACEABILITY
  C. TEST INFORMATION SHEETS
  D. PARAMETERS LIST
  E. DATA ANALYSIS PLAN
  F. INSTRUMENTATION PLAN
  G. LOGISTICS SUPPORT PLAN
  H-Z. AS REQUIRED

LIST OF ABBREVIATIONS

DISTRIBUTION

*THE ACTUAL NUMBER OF THESE PAGES WILL BE DETERMINED BY THE LENGTH OF PRELIMINARY ELEMENTS (E.G., TABLE OF CONTENTS, TERMS AND ABBREVIATIONS, ETC.).

SOURCE: AFFTC Test Plan Preparation Guide, May 1994.


5.5.5 Outline Test Plan/Resources Plan

The Army’s Outline Test Plan (OTP) and AirForce’s Test Resources Plan (TRP) are essentialtest planning documents. They are formal re-source documents specifying the resourcesrequired to support the test. Since the OTP orTRP provide the basis for fiscal programmingand coordinating the necessary resources, it is im-portant that these documents be developed in ad-vance and kept current to reflect maturingresource requirements as the test program devel-ops. The Navy makes extensive use of the TEMPto document T&E resource requirements. EachService has periodic meetings designed to reviewresource requirements and resolve problems withtest support.

5.5.6 Test Reports

5.5.6.1 Quick-Look Reports

Quick-look analyses are expeditious analyses performed during testing using limited amounts of the database. Such analyses often are used to assist in managing test operations. Quick-look reports are used occasionally to inform higher authorities of test results. Quick-look reports may have associated briefings that present T&E results and substantiate conclusions or recommendations. Quick-look reports may be generated by the contractor or a government agency. They are of particularly critical interest for high-visibility systems that may be experiencing some development difficulties. Techniques and formats should be determined before the start of testing. They may be exercised during pretest trials.
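A quick-look computation can be as simple as an interim estimate of a single measure from the trials completed so far. The sketch below is a hypothetical example assuming pass/fail trial records; it does not represent a prescribed Service or contractor format.

```python
# Hypothetical quick-look analysis: interim hit-rate estimate from the limited
# set of trials completed so far (not a prescribed reporting format).
from math import sqrt

# Partial results collected during testing: True = hit, False = miss.
trials = [True, True, False, True, True, False, True, True]

n = len(trials)
hits = sum(trials)
p_hat = hits / n
# Normal-approximation standard error; adequate only for a rough quick look.
se = sqrt(p_hat * (1 - p_hat) / n)
low = max(0.0, p_hat - 1.96 * se)
high = min(1.0, p_hat + 1.96 * se)

print(f"Trials to date: {n}")
print(f"Interim hit rate: {p_hat:.2f} (approximate 95% interval {low:.2f} to {high:.2f})")
```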

5.5.6.2 Final Test Report

The final test report disseminates the test information to decision authorities, program office staff, and the acquisition community. It provides a permanent record of the execution of the test and its results. The final test report should relate the test results to the critical issues and address the objectives stated in the test design and test plan. A final test report may be separated into two sections: a main section providing the essential information about test methods and results, and a second section consisting of supporting appendices to provide details and supplemental information. Generally, the following topics are included in the main body of the report:

(1) Test purpose
(2) Issues and objectives
(3) Method of accomplishment
(4) Results (keyed to the objectives and issues)
(5) Discussion, conclusions, and recommendations.

Appendices of the final test report may address the following topics:

(1) Detailed test description
(2) Test environment
(3) Test organization and operation
(4) Instrumentation
(5) Data collection and management
(6) Test data
(7) Data analysis
(8) M&S
(9) RAM information
(10) Personnel
(11) Training
(12) Safety
(13) Security
(14) Funding
(15) Asset Disposition.

The final test report may contain an evaluation and analysis of the results, or the evaluation may be issued separately. The analysis tells what the results are, whereas an evaluation tells what the results mean. The evaluation builds on the analysis and generalizes from it, showing how the results apply outside the test arena. It shows what the implications of the test are and may provide recommendations. The evaluation may make use of independent analyses of all or part of the data; it may employ data from other sources and may use M&S to generalize the results and extrapolate to other conditions. In the case of the Army, a separate System Evaluation Report is prepared by independent evaluators within the Army Evaluation Center (AEC).

5.6 OTHER TEST-RELATED STATUS REPORTS

5.6.1 End of Test Reports

The Services are required by DoDI 5000.2, Operation of the Defense Acquisition System, to submit to OSD T&E offices copies of their formal DT&E, OT&E, and Live Fire Test and Evaluation (LFT&E) reports that are prepared at the end of each period of testing for ACAT I, IA, and oversight programs.13 These reports will generally be submitted in a timely manner to permit OSD review.

5.6.2 Beyond Low-Rate Initial Production (BLRIP) Report

Before an ACAT I or Director, Operational Test and Evaluation (DOT&E)-designated program can proceed beyond Low Rate Initial Production (LRIP), the DOT&E must submit a BLRIP report to the Secretary of Defense and the Senate and House of Representatives Committees on Armed Services, National Security, and Appropriations. This report addresses whether the OT&E performed was adequate and whether the IOT&E results confirm that the items or components tested are effective and suitable for use in combat by typical military users. The report may include information on the results of LFT&E for applicable major systems.

5.7 SUMMARY

A wide range of documentation is available to the test manager and should be used to develop T&E programs that address all relevant issues. The PM must work to ensure that T&E requirements are considered at the outset when the acquisition strategy is formulated. The PM must also require early, close coordination and a continuing dialogue among those responsible for integration of functional area planning and the TEMP.


ENDNOTES

1. CJCS Instruction (CJCSI) 3170.01D, Joint Capabilities Integration and Development System (JCIDS), March 12, 2004; and CJCSM 3170.01A, Operation of the Joint Capabilities Integration and Development System, March 12, 2004.

2. DoDI 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

3. CJCSM 3170.01A, Appendix A, enclosure D.

4. CJCSM 3170.01A, Appendix A, enclosure E.

5. CJCSM 3170.01A, Appendix A, enclosure F.

6. DoD Directive (DoDD) 5105.21, Defense Intelligence Agency, May 19, 1977. Cancelled.

7. Defense Acquisition Guidebook, http://akss.dau.mil/DAG, Chapter 9.

8. MIL-STD-1388-1A, Logistics Support Analysis, April 11, 1983.

9. MIL-STD-961D, DoD Standard Practice for Defense Specifications, March 22, 1995. Military Defense Specification Standard Practices incorporated portions of MIL-STD-490, Specification Practices, which is fully exempt from the MIL-STD-491D waiver process because it is a "Standard Practice."

10. Military Handbook (MIL-HDBK)-881, Work Breakdown Structure, January 2, 1998.

11. Army Regulation (AR) 73-1, Test and Evaluation Policy, January 7, 2002.

12. Ibid.

13. DoDI 5000.2, enclosure 5.


6
TYPES OF TEST AND EVALUATION

6.1 INTRODUCTION

This chapter provides a brief introduction to Development Test and Evaluation (DT&E) and Operational Test and Evaluation (OT&E), the two principal types of Test and Evaluation (T&E); it also discusses the role of qualification testing as a sub-element of development testing. Other important types of T&E are introduced. They include: multi-Service testing; joint T&E; Live Fire Testing (LFT); Nuclear, Biological, and Chemical (NBC) testing; and Nuclear Hardness and Survivability (NH&S) testing. As Figure 6-1 illustrates, DT&E and OT&E are performed throughout the acquisition process and identified by nomenclature that may change with the phase of the acquisition cycle in which they occur.

6.2 DEVELOPMENT TEST AND EVALUATION

DT&E is T&E conducted throughout the acquisition process to assist in engineering design and development and to verify that technical performance specifications have been met. The DT&E is planned and monitored by the developing agency and is normally conducted by the contractor. However, the development agency may perform technical compliance tests before OT&E. It includes the T&E of components, subsystems, Preplanned Product Improvement (P3I) changes, hardware/software integration, and production qualification testing. It encompasses the use of models, simulations, test beds, and prototypes or full-scale engineering development models of the system. DT&E may involve a wide degree of test complexity, depending upon the type of system or test article under development; e.g., tests of electronic breadboards or brassboards, components, subsystems, or experimental prototypes.

The DT&E may support the system design process through an iterative Modeling and Simulation (M&S) approach, such as the Simulation, Test, and Evaluation Process (STEP), that involves both contractor and government personnel. Because contractor testing plays a pivotal role in the total test program, it is important that the contractor establish an integrated test plan early to ensure that the scope of the contractor's test program satisfies government and contractor test objectives.

Anti-tamper component-level verification testing shall take place prior to production as a function of Development Test/Operational Test (DT/OT). Component-level testing shall not assess the strength of the anti-tamper mechanism provided, but instead verify that anti-tamper performs as specified by the source contractor or government agency.1

The Program Manager (PM) remains responsible for the ultimate success of the overall program. The PM and the test specialists on the PM's staff must foster an environment that provides the contractor with sufficient latitude to pursue innovative solutions to technical problems and, at the same time, provides the data needed to make rational trade-off decisions between cost, schedule, and performance as the program progresses.

Figure 6-1. Testing During Acquisition (chart mapping, across the acquisition phases from Technology Development through Operations and Support, the hardware configuration under test (mock-ups and breadboards, brassboards, prototypes, engineering development models, low rate and full production articles, and upgraded systems); DT&E activities such as environmental, design-for-testability, qualification, life, design limit, reliability development (Test-Analyze-Fix-Test [TAFT]), production acceptance, and modification testing, along with BIT, BITE, and ATE; OT&E activities including early operational assessments, operational assessments, IOT&E, LFT&E, and FOT&E; combined and concurrent DT&E/OT&E; TEMP updates; and OSD DT and DOT&E oversight with associated T&E assessments, the Live Fire report, and the Beyond LRIP report)

6.2.1 Production Qualification Test (PQT)

Qualification testing is a form of development testing that verifies the design and manufacturing process. PQTs are formal contractual tests that confirm the integrity of the system design over the operational and environmental range in the specification. These tests usually use production hardware fabricated to the proposed production design specifications and drawings. Such tests include contractual Reliability and Maintainability (R&M) demonstration tests required before production release. PQT must be completed before Full Rate Production (FRP) in accordance with Operation of the Defense Acquisition System.2

PQTs may be conducted on Low Rate Initial Production (LRIP) items to ensure the maturity of the manufacturing process, equipment, and procedures. These tests are conducted on each item or a sample lot taken at random from the first production lot and are repeated if the process or design is changed significantly or a second or alternative source is brought online. These tests are also conducted against contractual design and performance requirements.

6.3 OPERATIONAL TEST AND EVALUATION

6.3.1 The Difference Between Development and Operational Testing

The 1979 Management of Operational Test and Evaluation contained the following account of the first OT&E. This anecdote serves as an excellent illustration of the difference between development and operational testing:

The test and evaluation of aircraft and air weapon systems started with the contract awarded to the Wright brothers in 1908. This contract specified a craft that would lift two men with a total weight of 350 pounds, carry enough fuel for a flight of 125 miles, and fly 40 miles per hour in still air. The contract also required that testing be conducted to assure this capability.

What we now call Development Test and Evaluation (DT&E) was satisfied when the Wright brothers (the developer) demonstrated that their airplane could meet those first contract specifications. However, no immediate military mission had been conceived for the Wright Flyer. It was shipped to Fort Sam Houston, Texas, where Captain Benjamin D. Foulois, the pilot, had orders to "teach himself to fly." He had to determine the airplane's performance, how to maintain it, and the kind of organization that would use it. Cavalry wagon masters had to be trained as airplane mechanics, and Captain Foulois was his own instructor pilot.

In the process, Captain Foulois subjected the Wright Flyer to test and evaluation under operational conditions. Foulois soon discovered operational deficiencies. For example, there was no seat on the airplane. During hard landings, Foulois' 130-pound frame usually parted company from the airplane. To correct the problem, Foulois bolted an iron tractor seat to the airplane. The seat helped, but Foulois still toppled from his perch on occasion. As a further improvement, Foulois looped his Sam Browne belt through the seat and strapped himself in. Ever since then, contoured seats and safety belts, a product of this earliest "operational" test and evaluation, have been part of the military airplane.3

Captain Foulois’ experience may seem humorousnow, but it illustrates the need for operational test-ing. It also shows that operational testing has beengoing on for a long time.

As shown in Table 6-1, whereas development testing is focused on meeting detailed technical specifications, OT focuses on the actual functioning of the equipment in a realistic combat environment in which the equipment must interact with humans and peripheral equipment. While DT&E and OT&E are separate activities and are conducted by different test communities, the communities must interact frequently and are generally complementary. The DT&E provides a view of the potential to reach technical objectives, and OT&E provides an assessment of the system's potential to satisfy user requirements.

Table 6-1. Differences Between DT&E and IOT&E

DT&E
• CONTROLLED BY PM
• ONE-ON-ONE TESTS
• CONTROLLED ENVIRONMENT
• CONTRACTOR INVOLVEMENT
• TRAINED, EXPERIENCED OPERATORS
• PRECISE PERFORMANCE OBJECTIVES AND THRESHOLD MEASUREMENT
• TEST TO SPECIFICATION
• DEVELOPMENT TEST ARTICLE

IOT&E
• CONTROLLED BY INDEPENDENT AGENCY
• MANY-ON-MANY TESTS
• REALISTIC/TACTICAL ENVIRONMENT WITH OPERATIONAL SCENARIO
• RESTRICTED SYSTEM CONTRACTOR INVOLVEMENT
• USER TROOPS RECENTLY TRAINED ON EQUIPMENT
• PERFORMANCE MEASUREMENT OF OPERATIONAL EFFECTIVENESS AND SUITABILITY
• TEST TO REQUIREMENTS
• PRODUCTION REPRESENTATIVE TEST ARTICLE


6.3.2 The Purpose of OT&E

Operational Test and Evaluation is defined in Title 10, United States Code (U.S.C.), Sections 139 and 2399:

The field test, under realistic combat conditions, of any item of (or key component of) weapons, equipment, or munitions for the purposes of determining the effectiveness and suitability of the weapons, equipment, or munitions for use in combat by typical military users; and the evaluation of the results of such test. This term does not include an operational assessment based exclusively on computer modeling, simulation, or an analysis of system requirements, engineering proposals, design specifications, or any other information contained in program documents.4

Definitions of operational effectiveness and operational suitability are listed below:

Operational Effectiveness: Measure of the overall ability of a system to accomplish a mission when used by representative personnel in the environment planned or expected for operational employment of the system considering organization, doctrine, tactics, supportability, survivability, vulnerability, and threat.5

Operational Suitability: The degree to which a system can be placed and sustained satisfactorily in field use with consideration being given to availability, compatibility, transportability, interoperability, reliability, wartime usage rates, maintainability, safety, human factors, habitability, manpower, logistics supportability, natural environmental effects and impacts, documentation, and training requirements.6

In each of the Services, operational testing is conducted under the auspices of an organization that is independent of the development agency, in environments as operationally realistic as possible, with hostile forces representative of the anticipated threat and with typical users operating and maintaining the system. In other words, OT&E is conducted to ensure that new systems meet the user's requirements, operate satisfactorily, and are supportable under actual field conditions. The major questions addressed in OT&E are shown in Figure 6-2.

Operational Assessment (EOA, OA): An evaluation of operational effectiveness and operational suitability made by an independent OT activity, with user support as required, on other than production systems. The focus of an OA is on significant trends noted in development efforts, programmatic voids, risk areas, adequacy of requirements, and the ability of the program to support adequate operational testing. An OA may be conducted at any time using technology demonstrators, prototypes, mock-ups, Engineering Development Models (EDMs), or simulators, but will not substitute for the Initial Operational T&E (IOT&E) necessary to support FRP decisions. Normally conducted prior to Milestone C.7

The OA normally takes place during each phase of the acquisition process prior to IOT&E. It is used to provide an early assessment of potential operational effectiveness and suitability for decision makers at decision points. These assessments attempt to project the system's potential to meet the user's requirements. Assessments conducted early in the program development process may be called Early Operational Assessments (EOAs). When using an evolutionary acquisition strategy, an OA is required for all increments following the one experiencing an IOT&E before the increment is released to the user.8

Figure 6-2. Sample Hierarchy of Questions Addressed in OT&E (operational effectiveness: does it perform as intended? covering mission accomplishment and survivability, including organization, doctrine, tactics, vulnerability, susceptibility, and threat; operational suitability: is it satisfactory for field use? covering availability, reliability, maintainability, and supportability, including compatibility, transportability, interoperability, technical data, and support/test equipment)

6.3.3 Initial Operational Test and Evaluation

The OT&E performed in support of the FRP decision is generally known as IOT&E. The Navy calls this event OPEVAL (Operational Evaluation). The IOT&E occurs during LRIP and must be completed before the Full Rate Production Decision Review (FRPDR). More than one IOT&E may be conducted on the system if there are system performance problems requiring retest, the system is decertified, or a need exists to test in different environments. The IOT&E is conducted on a production or production representative system using typical operational personnel in a realistic combat scenario.

6.3.4 Follow-on Operational Test and Evaluation (FOT&E)

The OT&E that may be necessary after the FRPDR to refine the estimates made during IOT&E, to evaluate changes, and to reevaluate the system to ensure that it continues to meet operational needs and retains its effectiveness in a new environment or against a new threat.9

FOT&E is conducted during fielding/deployment and operational support, and sometimes may be divided into two separate activities. Preliminary FOT&E is normally conducted after the Initial Operational Capability (IOC) is attained to assess full system capability. It is conducted by the OTA or designated organization to verify the correction of deficiencies, if required, and to assess system training and logistics status not evaluated during IOT&E. Subsequent FOT&E is conducted on production items throughout the life of a system. The results are used to refine estimates of operational effectiveness and suitability; to update training, tactics, techniques, and doctrine; and to identify operational deficiencies and evaluate modifications. This later FOT&E often is conducted by the operating command.

6.4 MULTI-SERVICE TEST AND EVALUATION

Multi-Service Test and Evaluation is T&E conducted by two or more Department of Defense (DoD) Components for systems to be acquired by more than one DoD component, or for a DoD component's systems that have interfaces with equipment of another DoD component.10 All affected Services and their respective Operational Test Agencies (OTAs) participate in planning, conducting, reporting, and evaluating the multi-Service test program. One Service is designated the lead Service and is responsible for the management of the program. The lead Service is charged with the preparation and coordination of a single report that reflects the system's operational effectiveness and suitability for each Service.

The management challenge in a joint acquisition program conducting multi-Service T&E stems from the fact that the items undergoing testing will not necessarily be used by each of the Services for identical purposes. Differences among the Services usually exist in performance criteria, tactics, doctrine, configuration of armament or electronics, and the operating environment. As a result, a deficiency or discrepancy considered disqualifying by one Service is not necessarily disqualifying for all Services. It is incumbent upon the lead Service to establish a discrepancy reporting system that permits each participating Service to document all discrepancies noted. At the conclusion of a multi-Service T&E, each participating OT&E agency may prepare an Independent Evaluation Report (IER) in its own format and submit that report through its normal Service channels. The lead Service OT&E agency may prepare the documentation that goes forward to the Milestone Decision Authority (MDA). This documentation is coordinated with all participating OT&E agencies.

6.5 JOINT TEST AND EVALUATION

Joint Test and Evaluation (JT&E) is not the same as multi-Service T&E. JT&E is a specific program activity sponsored and largely funded by the Office of the Secretary of Defense (OSD) (Director, Operational Test and Evaluation (DOT&E)). JT&E programs are not acquisition-oriented but are a means of examining joint-Service tactics and doctrine. Past joint-test programs have been conducted to provide information required by the Congress, the OSD, the commanders of the Unified Commands, and the Services. The overarching policies and details of formulating a JT&E program are set forth in DoD Directive (DoDD) Joint Test and Evaluation (JT&E) Program.11

Responsibility for management of the JT&E program was transferred from the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) DT&E to the Office of the DOT&E in December 2002. Each year nominations are submitted by Combatant Commands, DoD agencies, and Services to the JT&E Program Office within DOT&E for review and processing. The JT&E Program Office identifies those to be funded for a feasibility study, the DOT&E assigns lead Service responsibility for a Quick Reaction Test (6 to 12 months) or a full JT&E (up to three years), and a Joint Test Force Office is formed to conduct JT&E activities consistent with the chartered objectives from the participating Services. The lead and participating Services provide personnel, infrastructure support, and test resources for JT&E activities.

The JT&E program’s primary objectives include:assess Service system interoperability in jointoperations; evaluate joint technical and opera-tional concepts and recommend improvements;validate testing methodologies that have jointapplications; improve M&S validity with field ex-ercise data; increase joint mission capability, us-ing quantitative data for analysis; provide feed-back to the acquisition and joint operations com-munities; improve joint Tactics, Techniques, andProcedures (TTP).

Each JT&E typically produces one or more test products that provide continuing benefits to DoD, such as improvements in: joint warfighting capabilities; joint tactics, techniques, and procedures; joint and individual Service training programs; new testing methodologies; joint application of M&Ss that can support acquisition, TTP, and Concept of Operations (CONOPS) development; and universal task list update and input.

6.6 LIVE FIRE TESTING

The Live Fire Test (LFT) Program was mandated by the Congress in the National Defense Authorization Act (NDAA) for Fiscal 1987 (Public Law 99-661) passed in November 1986. Specifically, this law stipulated that a major [Acquisition Category (ACAT) I and II] program development may not proceed Beyond Low Rate Initial Production (BLRIP) until realistic survivability or (in the case of missiles and munitions) lethality testing has been completed.

In 1984, before the passage of this legislation, the OSD had chartered a joint test program designed to address similar questions relative to systems already in field use. This program, Joint LFT, was initially divided into two distinct parts: Armor/Anti-armor and Aircraft. The program's objectives are to:

• Gather empirical data on the vulnerability of existing U.S. systems to Soviet weapons;
• Gather empirical data on the lethality of existing U.S. weapons against Soviet systems;
• Provide insights into the design changes necessary to reduce vulnerabilities and improve lethalities of existing U.S. weapon systems;
• Calibrate current vulnerability and lethality models.

The legislated LFT Program complements the older Joint Live Fire (JLF) Program. While the JLF Program was designed to test systems that were fielded before being completely tested, the spirit and intent of the LFT legislation is to avoid the need to play "catch-up." This program not only requires the Services to test their weapons systems as early as possible against the expected combat threat, but also before FRP to identify design characteristics that cause undue combat damage or measure munitions lethality. Remedies for deficiencies can entail required retrofits, production stoppages, or other more time-consuming solutions. The essential feature of live fire testing is that appropriate threat munitions are fired against a major U.S. system configured for combat to test its vulnerability and/or that a major U.S. munition or missile is fired against a threat target configured for combat to test the lethality of the munition or missile.

Live Fire Test and Evaluation (LFT&E) Guidelines were first issued by the Deputy Director, T&E (Live Fire Testing) in May 1987 to supplement DoD Test and Evaluation Master Plan (TEMP) guidelines in areas pertaining to live fire testing.12

These guidelines encompass all Major Defense Acquisition Programs (MDAPs) and define LFT requirements. In 1994, Public Law 103-355 directed that oversight of Live Fire Testing be moved within DoD to the DOT&E. Guidelines for this program are now found in DoD Instruction (DoDI) 5000.2 and the Defense Acquisition Guidebook.

6.7 NUCLEAR, BIOLOGICAL, AND CHEMICAL (NBC) WEAPONS TESTING

The testing of NBC weapons is highly specialized and regulated. PMs involved in these areas are advised to consult authorities within their chain of command for the specific directives, instructions, and regulations that apply to their individual situations. Nuclear weapons tests are divided into categories in which the responsibilities of the Department of Energy (DOE), the Defense Nuclear Agency (DNA), and the military Services are clearly assigned. The DOE is responsible for nuclear warhead technical tests; the DNA is responsible for nuclear weapons effects tests. The Services are responsible for the testing of Service-developed components of nuclear subsystems. All nuclear tests are conducted within the provisions of the Limited Test Ban Treaty (LTBT) that generally restricts nuclear detonations to the underground environment. Nuclear weapons testing requires extensive coordination between Service and DOE test personnel.13

Since the United States signed and ratified the Geneva Protocol of 1925, U.S. policy has been never to be the first to use lethal chemical weapons; it may, however, retaliate with chemical weapons if so attacked. With the signing and ratification of the 1972 Biological and Toxin Weapons Convention, the United States formally adopted the position that it would not employ biological or toxin weapons under any circumstances. All such weapons were reported destroyed in the early 1970s.14

Regarding retaliatory capability against chemical weapons, the Service Secretaries are responsible for ensuring that their organizations establish requirements and determine the military characteristics of chemical deterrent items and chemical defense items. The Army has been designated the DoD executive agent for DoD chemical warfare, research, development, and acquisition programs.15

United States policy on chemical warfare seeks to:

• Deter the use of chemical warfare weapons by other nations;
• Provide the capability to retaliate if deterrence fails;
• Achieve the early termination of chemical warfare at the lowest possible intensity.16

In addition to the customary development tests (conducted to determine if a weapon meets technical specifications) and OTs (conducted to determine if a weapon will be useful in combat), chemical weapons testing involves two types of chemical tests: chemical mixing and biotoxicity. Chemical-mixing tests are conducted to obtain information on the binary chemical reaction. Biotoxicity tests are performed to assess the potency of the agent generated. Chemical weapons testing, of necessity, relies heavily on the use of nontoxic simulants, since such substances are more economical and less hazardous and open-air testing of live agents has been restricted since 1969.17

6.8 NUCLEAR HARDNESS AND SURVIVABILITY (NH&S) TESTING

Nuclear hardness is a quantitative description of the physical attributes of a system or component that will allow it to survive in a given nuclear environment. Nuclear survivability is the capability of a system to survive in a nuclear environment and to accomplish a mission. DoD policy requires the incorporation of NH&S features in the design, acquisition, and operation of major and nonmajor systems that must perform critical missions in nuclear conflicts. Nuclear hardness levels must be quantified and validated.18

The T&E techniques used to assess nuclear hardness and survivability include: nuclear testing; physical testing in a simulated environment; and modeling, simulation, and analysis. Although nuclear tests provide a high degree of fidelity and valid results for survivability evaluation, they are not practical for most systems due to cost, long lead times, and international treaty constraints.

Underground testing is available only on a prioritized basis for critical equipment and components, and is subject to a frequently changing test schedule. Physical testing provides an opportunity to observe personnel and equipment in a simulated nuclear environment. Modeling, simulation, and analysis are particularly useful in the early stages of development to provide early projections before system hardware is available. These methods are also used to furnish assessments in an area that, because of safety or testing limitations, cannot be directly observed through nuclear or physical testing.

6.9 SUMMARY

T&E is a spectrum of techniques used to address questions about critical performance parameters during system development. These questions may involve several issues including: technical performance and survivability (development testing); effectiveness and suitability (operational testing); those affecting more than one Service (multi-Service and joint testing); vulnerability and lethality (LFT); nuclear survivability; or the use of other than conventional weapons (i.e., NBC).


ENDNOTES

1. Defense Acquisition Guidebook, Chapter 3, Test and Evaluation. http://akss.dau.mil/DAG.

2. DoDI 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

3. Air Force Manual (AFM) 55-43, Management of Operational Test and Evaluation, June 1979.

4. Title 10, U.S.C., Section 139, Director of Operational Test and Evaluation, and Section 2399, Operational Test and Evaluation of Defense Acquisition Programs, March 18, 2004.

5. Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 3170.01D, Joint Capabilities Integration and Development System (JCIDS), March 12, 2004.

6. Ibid.

7. Defense Acquisition University Press, Glossary of Defense Acquisition Acronyms & Terms, September 2003.

8. DoDI 5000.2.

9. Defense Acquisition University Press.

10. Ibid.

11. DoDD 5010.41, Joint Test and Evaluation (JT&E) Program, February 23, 1998.

12. Statement by Mr. James F. O'Bryon, Assistant Deputy Under Secretary of Defense (Live Fire Test) before the Acquisition Policy Panel of the House Armed Services Committee, September 10, 1987.

13. DoDI 5030.55, Joint AEC-DoD Nuclear Weapons Development Procedures, January 25, 2001.

14. DoDD 5160.5, Responsibilities for Research, Development, and Acquisition of Chemical Weapons and Chemical and Biological Defense, May 1, 1985.

15. Ibid.

16. Ibid.

17. Ibid.

18. DoDI 4245.4, Acquisition of Nuclear-Survivable Systems, July 25, 1988. Cancelled.


MODULE II
DEVELOPMENT TEST AND EVALUATION

Materiel acquisition is an iterative process of designing, building, testing, identifying deficiencies, fixing, retesting, and repeating. Development Test and Evaluation (DT&E) is an important aspect of this process. DT&E is performed in the factory, in the laboratory, and on the proving ground. It is conducted by subcontractors as they develop components and subassemblies; by the prime contractor as the components are assembled and integration of the system is ensured; and by the government to demonstrate how well the weapon system meets its technical and operational requirements. This module describes development testing and the various types of activities it involves. The module also discusses how development testing is used to support the technical review process.


7
INTRODUCTION TO DEVELOPMENT TEST AND EVALUATION

7.1 INTRODUCTION

Development Test and Evaluation (DT&E) is conducted to demonstrate that the engineering design and development process is complete. It is used by the contractor to reduce risk, validate and qualify the design, and ensure that the product is ready for government acceptance. The DT&E results are evaluated to ensure that design risks have been minimized and the system will meet specifications. The results are also used to estimate the system's military utility when it is introduced into service. DT&E serves a critical purpose in reducing the risks of development by testing selected high-risk components or subsystems. Finally, DT&E is the government developing agency's tool used to confirm that the system performs as technically specified and that the system is ready for field testing. This chapter provides a general discussion of contractor and government DT&E activities, stresses the need for an integrated test program, describes some special-purpose Development Tests (DTs), and discusses several factors that may influence the extent and scope of the DT&E program.

7.2 DT&E AND THE SYSTEM ACQUISITION CYCLE

As illustrated in Figure 7-1, DT&E is conducted throughout the system life cycle. DT&E may begin before program initiation with the evaluation of evolving technology, and it continues after the system is in the field.

7.2.1 DT&E Prior to Program Initiation

Prior to program initiation, modeling, simulation, and technology feasibility testing are conducted to confirm that the technology considered for the proposed weapon development is the most advanced available and that it is technically feasible.

7.2.2 DT&E During Concept Refinement (CR) and Technology Development (TD)

Development testing that takes place during these phases is conducted by a contractor or the government to assist in selecting preferred alternative system concepts, technologies, and designs. The testing conducted depends on the state of technical maturity of the test article's design. Government test evaluators participate in this testing because information obtained can be used to support a Systems Requirements Review (SRR), early test planning in the Technology Development Strategy (TDS), and development of the Capability Development Document (CDD) and the Request for Proposal (RFP) for Milestone B. The information obtained from these tests may also be used to support a program start decision by the Services or the Office of the Secretary of Defense (OSD).

7.2.3 DT&E During System Development and Demonstration (SDD)

Development testing is used to demonstrate that: technical risk areas have been identified and can be reduced to acceptable levels; the best technical approach can be identified; and, from this point on, engineering efforts will be required rather than experimental efforts. It supports the decision review that considers transition from prototype design into advanced engineering and construction of the Engineering Development Model (EDM). This DT&E includes contractor/government integrated testing, engineering design testing, and advanced development verification testing.

Development testing during systems integration is most often conducted at the contractor's facility. It is conducted on components, subsystems, brassboard configurations, or advanced development prototypes to evaluate the potential application of technology and related design approaches before system demonstration. Component interface problems and equipment performance capabilities are evaluated. The use of properly validated analysis and Modeling and Simulation (M&S) is encouraged, especially during the early phases, to assess those areas that, for safety or testing capability limitations, cannot be observed directly through testing. M&Ss can provide early projections of systems performance, effectiveness, and suitability, and can reduce testing costs. This Test and Evaluation (T&E) also may include initial environmental assessments.

Figure 7-1. Contractor's Integrated Test Requirements (chart showing subcontractor and prime contractor tests progressing from evaluation of approaches through design limit testing, TAAF, verification of solutions, qualification, acceptance, and manufacturing screening at the subsystem and system levels, alongside government participation in contractor-conducted tests, government DT assessments, and government-conducted tests through IOT&E and FOT&E). SOURCE: Adapted and modified from "Solving the Risk Equation in Transitioning from Development to Production," January 19, 1984.

Army testing of the Advanced Attack Helicopter (AAH) provides an example of the type of activities that occur during development testing/technical testing. The early DT&E of the AAH was conducted by the Army Engineering Flight Activity (EFA). The test was conducted in conjunction with an Early Operational Assessment (EOA), and candidate designs were flown more than 90 hours to evaluate flight handling qualities and aircraft performance. This test also included the firing of the 30 millimeter cannon and the 2.75-inch rockets. Reliability, Availability, and Maintainability (RAM) data were obtained throughout the test program; and these data, along with RAM data provided from early contractor testing, became a part of the system's RAM database. After evaluating the results, the Army selected a contractor to proceed with the next development phase of the AAH.

DT&E conducted during system demonstration provides the final technical data for determining a system's readiness to transition into Low Rate Initial Production (LRIP). It is conducted using advanced EDMs and is characterized by engineering and scientific approaches under controlled conditions. The qualification testing provides quantitative and qualitative data for use in the system's evaluation. The evaluation results are used by the development community and are also provided to Service and OSD decision authorities. These tests measure technical performance including: effectiveness, RAM, compatibility, interoperability, safety, and supportability. They include tests of human engineering and technical aspects of the system. Demonstrations indicating that engineering is reasonably complete and that solutions to all significant design problems are in hand are also included.

7.2.4 DT&E During Production and Deployment (P&D)

7.2.4.1 Low Rate Initial Production

DT&E may be conducted on EDMs or LRIP articles as a prelude to certifying the system ready for Initial Operational Test and Evaluation (IOT&E). Each Service has different and specific processes incorporated in the certification for IOT&E documentation. The Navy conducts additional DT&E for certification called TECHEVAL (Technical Evaluation). This is a DT&E event controlled by the program office that is conducted in a more operationally realistic test environment. The Air Force has developed a guide with a structured process using templates to assist the Program Manager (PM) in assessing the program's readiness for IOT&E.

As an example of testing done during this phase, the Army AAH was flown in a series of Engineering Design Tests (EDTs). The EDT-1, -2, and -4 were flown at the contractor's facility. (The EDT-3 requirement was deleted during program restructuring.) The objectives of these flight tests were to evaluate the handling characteristics of the aircraft, check significant performance parameters, and confirm the correction of deficiencies noted during earlier testing. The EDT-5 was conducted at an Army test facility, Yuma Proving Ground, Arizona. The objectives of this test were the same as earlier EDTs; however, the testers were required to ensure that all discrepancies were resolved before the aircraft entered operational testing. During the EDTs, Operational Test (OT) personnel were completing OT design, bringing together test resources, and observing the DT&E tests. Additionally, OT personnel were compiling test data, such as the system contractor's test results, from other sources. The evolving DT results and contractor data were made available to the Critical Design Review (CDR) members to ensure that each configuration item design was essentially completed. The Army conducted a Physical Configuration Audit (PCA) to provide a technical examination to verify that each item "as built" conformed to the technical documentation defining that item.

7.2.4.2 Post Full Rate Production Decision Review (FRPDR)

Development testing may be necessary after the Full-Rate Production (FRP) decision is made. This testing is normally tailored to verify correction of identified design problems and demonstrate the system modification's readiness for production. This testing is conducted under controlled conditions and provides quantitative and qualitative data. This testing is conducted on production items delivered from either the pilot or initial production runs. To ensure that the items are produced according to contract specification, limited quantity production sampling processes are used. This testing determines whether the system has successfully transitioned from engineering development prototype to production, and whether it meets design specifications.

7.2.5 Operations and Support (O&S)

The DT, which occurs soon after the Initial Operating Capability (IOC) or initial deployment, assesses the deployed system's operational readiness and supportability. It ensures that all deficiencies noted during previous testing have been corrected, evaluates proposed Product Improvements (PIs) and block upgrades, and ensures that Integrated Logistics Support (ILS) is complete. It also evaluates the resources on hand and determines if the plans to ensure operational phase readiness and support objectives are sufficient to maintain the system for the remainder of its acquisition life cycle. For mature systems, DT&E is performed to assist in modifying the system to help meet new threats, add new technologies, or aid in extending service life.

Once a system approaches the end of its usefulness, the DT conducted is concerned with the monitoring of a system's current state of operational effectiveness, suitability, and readiness to determine whether major upgrades are necessary or deficiencies warrant consideration of a new system replacement. Tests are normally conducted by the operational testing community; however, the DT&E community may be required to assess the technical aspects of the system.

7.3 DT&E RESPONSIBILITIES

As illustrated in Figure 7-1, the primary participants in testing are the prime contractor, subcontractor, and Service materiel developer or developing agency; the Operational Test and Evaluation (OT&E) agency acts as observer. In some Services, there are also independent evaluation organizations that assist the testing organization in designing and evaluating development tests. As the figure shows, system development testing is performed principally by contractors during the early development stages of the acquisition cycle and by government test/evaluation organizations during the later phases.

Army testing of the AAH illustrates the type of DT performed by contractors and the relationship of this type of testing to government DT&E activities. During the contractor competitive testing of the Army AAH, prime contractor and subcontractor testing included design support tests, testing of individual components, establishing fatigue limits, and bench testing of dynamic components to demonstrate sufficient structural integrity to conduct the Army competitive flight test program. Complete dynamic system testing was conducted utilizing ground test vehicles. Besides supporting the contractor's development effort, these tests provided information for the Army technical review process as the system Preliminary Design Reviews (PDRs) and Critical Design Reviews (CDRs) were conducted.

Following successful completion of the ground test vehicle qualification testing, first flights were conducted on the two types of competing helicopters. Each aircraft was flown 300 hours before delivery of two of each competing aircraft to the Army. The contractor flight testing was oriented toward flight-envelope development, demonstration of structural integrity, and evaluation and verification of aircraft flight handling qualities. Some weapons system testing was conducted during this phase. Government testers used much of the contractor's testing data to develop the test data matrices as part of the government's DT and OT planning efforts. The use of contractor test data reduced the testing required by the government and added validity to the systems already tested and data received from other sources.


7.3.1 Contractor Testing

Materiel DT&E is an iterative process in which a contractor designs hardware and software, evaluates performance, makes necessary changes, and retests for performance and technical compliance. Contractor testing plays a primary role in the total test program, and the results of contractor tests are useful to the government evaluator in supporting government test objectives. It is important that government evaluators oversee contractor system tests and use test data from them to address government testing issues. It is not uncommon for contractor testing to be conducted at government test facilities, since contractors often do not have the required specialized facilities (e.g., for testing hazardous components or for missile flight tests). This enables government evaluators to monitor the tests more readily and increases government confidence in the test results.

Normally, an RFP requires that the winning contractor submit an Integrated Engineering Design Test Plan within a short period after contract initiation for coordination with government test agencies and approval. This test plan should include testing required by the Statement of Work (SOW), specifications, and testing expected as part of the engineering development and integration process. When approved, the contractor's test program automatically becomes part of the development agency's Integrated Test Plan (ITP).

If the contractor has misinterpreted the RFP requirements and the Integrated Engineering Design Test Plan does not satisfy government test objectives, the iterative process of amending the contractor's test program begins. This iterative process must be accomplished within limited bounds so the contractor can meet the test objectives without significant effects on contract cost, schedule, or scope.

7.3.2 Government Testing

Government testing is performed to demonstrate how well the materiel system meets its technical compliance requirements; provide data to assess developmental risk for decision making; verify that the technical and support problems identified in previous testing have been corrected; and ensure that all critical issues to be resolved by testing have been adequately considered. All previous testing, from the contractor's bench testing through development agency testing of representative prototypes, is considered during government evaluation.

Government materiel development organizations include major materiel acquisition commands and, in some cases, operational commands. The materiel acquisition commands have T&E organizations that conduct government development testing. In addition to monitoring and participating in contractor testing, these organizations conduct development testing on selected high-concern areas to evaluate the adequacy of systems engineering, design, development, and performance to specifications. The Program Management Office (PMO) must be involved in all stages of testing that these organizations perform.

In turn, the materiel development/T&E agencies conduct T&E of the systems in the development stage to ensure they meet technical and operational requirements. These organizations operate government proving grounds, test facilities, and labs. They must be responsive to the needs of the PM by providing test facilities, personnel, and data acquisition services, as required.

7.4 TEST PROGRAM INTEGRATION

During the development of a weapon system, there are a number of tests conducted by subcontractors, the prime contractor, and the government. To ensure these tests are properly timed, that adequate resources are available, and to minimize unnecessary testing, a coordinated test program must be developed and followed. The Test and Evaluation Master Plan (TEMP) normally does not provide a sufficient level of detail concerning contractor or subcontractor tests. A contractor or PMO ITP must also be developed to describe these tests. The PM is responsible for coordinating the total T&E program. The PM performs this task with the assistance of the T&E IPT, whose members are assembled from development agency, user, technical and operational T&E, logistics, and training organizations. The PM must remain active in all aspects of testing including planning, funding, resourcing, execution, and reporting. The PM plays an important role as the interface between the contractor and the government testing community. Recent emphasis on early T&E highlights a need for early government tester involvement in contractor testing. For example, during development of the AAH test, it was found that having program management personnel on the test sites improved test continuity, facilitated the flow of spare and repair parts, provided a method of monitoring contractor performance, and kept the Service headquarters informed with timely status reports.

7.4.1 Integrated Test Plan

The ITP is used to record the individual test plans for the subcontractor, prime contractor, and government. The prime contractor should be contractually responsible for the preparation and updating of the ITP, and the contractor and Service developing agency should ensure that it remains current. The ITP includes all developmental tests that will be performed by the prime contractor and the subcontractors at both the system and subsystem levels. It is a detailed, working-level document that assists in identifying risk as well as duplicative or missing test activities. A well-maintained ITP facilitates the most efficient use of test resources.
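At working level, a well-maintained ITP amounts to a cross-reference between planned test events and the requirements they verify. The short Python sketch below is purely illustrative of that bookkeeping; the event names, requirement identifiers, and data layout are hypothetical and are not drawn from any prescribed ITP format.

    # Minimal, illustrative cross-reference of ITP test events to requirements.
    # All identifiers below are hypothetical examples, not a mandated schema.
    from collections import defaultdict

    # Each planned test event lists the specification requirements it verifies.
    itp_events = {
        "Subcontractor bench test BT-01": ["REQ-001", "REQ-002"],
        "Prime contractor integration test IT-07": ["REQ-002", "REQ-005"],
        "Government DT event DT-03": ["REQ-002", "REQ-006"],
    }
    all_requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-005", "REQ-006"}

    coverage = defaultdict(list)
    for event, reqs in itp_events.items():
        for req in reqs:
            coverage[req].append(event)

    missing = sorted(all_requirements - set(coverage))
    duplicated = {req: events for req, events in coverage.items() if len(events) > 1}

    print("Requirements with no planned test coverage:", missing)
    print("Requirements tested by more than one event (review for necessary overlap):")
    for req, events in sorted(duplicated.items()):
        print(f"  {req}: {', '.join(events)}")

Run as written, the sketch flags one uncovered requirement and one requirement covered by several events, the two conditions the text identifies as risks the ITP is meant to expose.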

7.4.2 Single Integrated Test Policy

Most Services have adopted a single integrated contractor/government test policy, thereby reducing much of the government's testing requirements.

This policy stresses independent government evaluation and permits an evaluator to monitor contractor and government test programs and evaluate the system from an independent perspective. It also emphasizes the use of data from all sources for system evaluation.

7.5 AREAS OF DT&E FOCUS

7.5.1 Life Testing

Life testing is performed to assess the effects of long-term exposure to various portions of the anticipated environment. These tests are used to ensure the system will not fail prematurely due to metal fatigue, component aging, or other problems caused by long-term exposure to environmental effects. It is important that the requirements for life testing are identified early and integrated into the system test plan. Life tests are time-consuming and costly; therefore, life testing requirements and life characteristics must be carefully analyzed concurrent with the initial test design. Aging failure data must be collected early and analyzed throughout the testing cycle. If life characteristics are ignored until results of the test are available, extensive redesign and project delays may be required. Accelerated life testing techniques are available and may be used whenever applicable.
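As one hedged illustration only (the guide does not prescribe an accelerated life testing model), thermally accelerated life tests are frequently analyzed with an Arrhenius acceleration factor. The activation energy, temperatures, and test duration in the Python sketch below are hypothetical values chosen solely for the example.

    # Illustrative Arrhenius acceleration-factor calculation for a thermally
    # accelerated life test. Numbers are hypothetical, not from the guide.
    import math

    BOLTZMANN_EV = 8.617e-5          # Boltzmann constant, eV per kelvin
    activation_energy_ev = 0.7       # assumed failure-mechanism activation energy
    t_use_k = 273.15 + 40            # assumed field (use) temperature, 40 C
    t_test_k = 273.15 + 85           # assumed accelerated test temperature, 85 C

    # Acceleration factor: how much faster aging proceeds at the test temperature.
    af = math.exp((activation_energy_ev / BOLTZMANN_EV) *
                  (1.0 / t_use_k - 1.0 / t_test_k))

    test_hours = 2000                # assumed accelerated test duration
    equivalent_field_hours = af * test_hours
    print(f"Acceleration factor: {af:.1f}")
    print(f"{test_hours} test hours ~ {equivalent_field_hours:,.0f} equivalent field hours")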

7.5.2 Design Evaluation/Verification Testing

Design evaluation and verification testing are conducted by the contractor and/or the development agency with the primary objective of influencing system design. Design evaluation is fully integrated into the DT cycle. Its purposes are to:

(1) Determine if critical system technical characteristics are achievable;

(2) Provide data for refining and making the hardware more rugged to comply with technical specification requirements;


(3) Eliminate as many technical and design risks as possible or determine the extent to which they are manageable;

(4) Provide for evolution of design and verification of the adequacy of design changes;

(5) Provide information in support of development efforts;

(6) Ensure components, subsystems, and systems are adequately developed before beginning OTs.

7.5.3 Design Limit Testing

Design Limit Tests (DLTs) are integrated into the test program to ensure the system will provide adequate performance when operated at outer performance limits and when exposed to environmental conditions expected at the extremes of the operating envelope. The tests are based on mission profile data. Care must be taken to ensure all systems and subsystems are exposed to the worst-case environments, with adjustments made because of stress amplification factors and cooling problems. Care must also be taken to ensure that the system is not operated beyond the specified design limits; for example, an aircraft component may have to be tested at temperature extremes from an Arctic environment to a desert environment.

7.5.4 Reliability Development Testing (RDT)

RDT, or Reliability Growth Testing (RGT), is a planned Test, Analyze, Fix, and Test (TAFT) process in which development items are tested under actual or simulated mission-profile environments to disclose design deficiencies and to provide engineering information on failure modes and mechanisms. The purpose of RDT is to provide a basis for early incorporation of corrective actions and verification of their effectiveness in improving the reliability of equipment. RDT is conducted under controlled conditions with simulated operational mission and environmental profiles to determine design and manufacturing process weaknesses. The RDT process emphasizes reliability growth rather than a numerical measurement. Reliability growth during RDT is the result of an iterative design process: as failures occur, the problems are identified, solutions are proposed, the redesign is accomplished, and the RDT continues. A substantial reliability growth TAFT effort was conducted during F-18 DT&E for selected avionics and mechanical systems. Although the TAFT effort added $100 million to the Research, Development, Test and Evaluation (RDT&E) Program, it is estimated that many times that amount will be saved through lower operational and maintenance costs throughout the system's life.
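Reliability growth during a TAFT program is usually tracked against a growth model. The guide does not mandate a particular model; the Python sketch below assumes a Duane-type (log-log) relationship between cumulative Mean Time Between Failures (MTBF) and cumulative test time, with hypothetical failure times, to show the kind of growth-rate and current-MTBF estimates such tracking produces.

    # Illustrative Duane-model reliability growth tracking (hypothetical data).
    # Cumulative MTBF is fitted against cumulative test hours on log-log scales;
    # the slope of the fit is the growth rate alpha.
    import math

    # Hypothetical cumulative test hours at which failures occurred during TAFT.
    failure_times = [35, 110, 260, 480, 900, 1500, 2400]

    log_t, log_mtbf_c = [], []
    for i, t in enumerate(failure_times, start=1):
        cumulative_mtbf = t / i               # cumulative MTBF = T / N(T)
        log_t.append(math.log(t))
        log_mtbf_c.append(math.log(cumulative_mtbf))

    # Least-squares fit: log(cumulative MTBF) = intercept + alpha * log(T)
    n = len(log_t)
    mean_x = sum(log_t) / n
    mean_y = sum(log_mtbf_c) / n
    alpha = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_t, log_mtbf_c)) / \
            sum((x - mean_x) ** 2 for x in log_t)
    intercept = mean_y - alpha * mean_x

    # Instantaneous (current) MTBF at the end of test, per the Duane model.
    t_end = failure_times[-1]
    mtbf_cum_end = math.exp(intercept) * t_end ** alpha
    mtbf_inst_end = mtbf_cum_end / (1.0 - alpha)
    print(f"Estimated growth rate alpha: {alpha:.2f}")
    print(f"Estimated instantaneous MTBF at {t_end} hours: {mtbf_inst_end:.0f} hours")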

7.5.5 Reliability, Availability, and Maintainability

The RAM requirements are assessed during all contractor and government testing. The data are collected from each test event and placed in a RAM database, which is managed by the development agency. Contractor and government DTs provide a measure of the system's common RAM performance against stated specifications in a controlled environment. The primary emphasis of RAM data collection during DT is to provide an assessment of the system's RAM parameter growth and a basis for assessing the consequences of any differences anticipated during field operations. Early projections of RAM are important to logistics support planners. The test data facilitate determination of spares quantities, maintenance procedures, and support equipment.
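The kind of roll-up a RAM database supports can be shown with a minimal sketch; the record layout and values below are hypothetical, and the simple point estimates computed (observed MTBF, mean downtime per failure, and an availability ratio) are only a small subset of the RAM measures a program would actually track.

    # Illustrative RAM roll-up from hypothetical test-event records.
    # Each record: operating hours accrued, failures observed, downtime hours.
    test_events = [
        {"event": "DT-1 road test",   "op_hours": 320.0, "failures": 4, "down_hours": 12.0},
        {"event": "DT-2 firing test", "op_hours": 150.0, "failures": 1, "down_hours": 3.5},
        {"event": "DT-3 cold region", "op_hours": 210.0, "failures": 3, "down_hours": 9.0},
    ]

    total_op = sum(e["op_hours"] for e in test_events)
    total_failures = sum(e["failures"] for e in test_events)
    total_down = sum(e["down_hours"] for e in test_events)

    mtbf = total_op / total_failures                 # point estimate of observed MTBF
    mean_downtime = total_down / total_failures      # mean downtime per failure
    availability = total_op / (total_op + total_down)

    print(f"Observed MTBF: {mtbf:.1f} hours")
    print(f"Mean downtime per failure: {mean_downtime:.1f} hours")
    print(f"Availability (uptime ratio): {availability:.3f}")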

7.6 SYSTEM DESIGN FOR TESTING

Built-in Test (BIT), Built-in Test Equipment (BITE), and Automated Test Equipment (ATE) are major areas that must be considered from the start of the design effort. Design for testing (Figure 7-2) addresses the need to: (1) collect data during the development process concerning particular performance characteristics; (2) enable efficient and economical production by providing ready access to and measurement of appropriate acceptance parameters; and (3) enable rapid and accurate assessment of the status of the product, down to the lowest repairable element, when deployed. Many hardware systems have testing circuits designed and built in. This early planning by design engineers allows easy testing for fault isolation of circuits, both in system development phases and during operational testing and deployment. There are computer chips in which more than half of the circuits are for test/circuit-check functions. This type of circuit design requires early planning by the PM to ensure the RFP includes the requirement for designed-in/BIT capability. Evaluation of these BIT/BITE/ATE systems must be included in the test program.

Figure 7-2. Design for Testing Procedures (the figure shows a flow of defining testability objectives; selecting testability techniques to influence design, such as built-in test, end-to-end testing, test points and control points, partitioning, initialization, isolation, accessibility, observability, human factors, completeness, standardization of and interface with test equipment, fault coverage, reliability, and operation under failure; conducting trade-off studies; checking whether system testability satisfies needs; and implementing testability, verified through testability analysis and testability demonstrations)
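For illustration only, a built-in test routine is essentially a sequence of self-checks that isolates a detected fault to the lowest repairable element. The Python sketch below is a hypothetical software analogue of that idea; the check names, thresholds, and item designators are invented and do not represent any particular system's BIT design.

    # Hypothetical built-in test (BIT) sketch: run a set of self-checks and
    # report the replaceable item suspected for any failure.
    def check_memory():
        # Placeholder self-check; a real BIT would pattern-test hardware memory.
        pattern = 0xA5
        return pattern ^ 0xFF == 0x5A

    def check_power_rail(measured_volts=5.02, nominal=5.0, tolerance=0.25):
        return abs(measured_volts - nominal) <= tolerance

    def check_comms_loopback():
        sent = b"BIT"
        received = bytes(sent)      # placeholder for a real loopback transaction
        return received == sent

    # Map each self-check to the replaceable item it isolates a fault to.
    bit_checks = {
        "memory test": (check_memory, "processor card A1"),
        "power rail test": (check_power_rail, "power supply PS1"),
        "comms loopback": (check_comms_loopback, "I/O card A3"),
    }

    faults = []
    for name, (check, item) in bit_checks.items():
        if not check():
            faults.append((name, item))

    if faults:
        for name, item in faults:
            print(f"BIT FAIL: {name}; suspect item: {item}")
    else:
        print("BIT PASS: all self-checks completed")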

7.7 IMPACT OF WARRANTIES ON T&E

A warranty or guarantee is a commitment provided by a supplier to deliver a product that meets specified standards for a specified time. With a properly structured warranty, the contractor must meet technical and operational requirements; if the product fails during the warranty period, the contractor must replace it or make repairs at no additional cost to the government. The Defense Appropriations Act of 1984 requires warranties or guarantees on all weapon system procurements, making warranties a standard item on most fixed-price production contracts.

Incentives are the main thrust of warranties, and the government will perform a reliability demonstration test on the system to determine these incentives. Although warranties offer advantages to the government during the early years of the contract, they do not affect the types of testing performed to ensure the system meets technical specifications and satisfies operational effectiveness and suitability requirements. Warranties do, however, affect the amount of testing required to establish reliability. Because the standard item is warranted, less emphasis on that portion of the item can allow additional emphasis on aspects of the item not covered under the warranty. Further, the government may tend to have more confidence in contractor test results and may therefore be able to avoid some duplication of test effort. The warranty essentially shifts the burden of risk from the government to the contractor; however, warranties can significantly increase the price of the contract, especially if high-cost components are involved.
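As one hedged illustration of the reliability demonstration testing mentioned above (neither the statute nor the guide dictates a particular statistical plan), a fixed-duration demonstration can be characterized under an exponential failure assumption; the test length, allowable failure count, and MTBF values in the Python sketch below are hypothetical.

    # Illustrative fixed-duration reliability demonstration calculation under an
    # exponential (constant failure rate) assumption. All numbers are hypothetical.
    import math

    def prob_accept(true_mtbf, test_hours, max_failures):
        """Probability of observing <= max_failures in test_hours (Poisson)."""
        expected = test_hours / true_mtbf
        return sum(math.exp(-expected) * expected**k / math.factorial(k)
                   for k in range(max_failures + 1))

    test_hours = 1000.0      # assumed total demonstration test time
    max_failures = 2         # assumed accept criterion: pass with 2 or fewer failures

    for true_mtbf in (250.0, 500.0, 1000.0):
        pa = prob_accept(true_mtbf, test_hours, max_failures)
        print(f"True MTBF {true_mtbf:6.0f} h -> probability of passing: {pa:.2f}")

Running the sketch traces a simple operating characteristic: systems with low true MTBF are unlikely to pass the demonstration, while systems well above the requirement pass with high probability.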

7.8 DT&E OF LIMITED PROCUREMENT QUANTITY PROGRAMS

Programs that involve the procurement of relatively few items, such as satellites, some large missiles, and unique intelligence equipment, typically over an extended period, are normally subjected to modified DT&E. Occasionally, a unique test approach that deviates from the standard timing and reporting schedule will be used. The DT&E principle of iterative testing, starting with components and subsystems and proceeding through prototypes and first-production models of the system, is normally applied to limited procurements. It is important that DT&E and OT&E organizations work together to ensure that integrated T&E plans are adapted/tailored to the overall acquisition strategy.

7.9 SUMMARY

DT&E is an iterative process of designing, building, testing, identifying deficiencies, fixing, retesting, and repeating. It is performed in the factory, in the laboratory, and on the proving ground by the contractors and the government. Contractor and government testing is combined into one integrated test program and conducted to determine whether the performance requirements have been met and to provide data to the decision authority.


CHAPTER 8
DT&E SUPPORT OF TECHNICAL REVIEWS AND MILESTONE DECISIONS

8.1 INTRODUCTION

Throughout the acquisition process, Development Test and Evaluation (DT&E) is oriented toward the demonstration of specifications, showing the completeness and adequacy of systems engineering, design, development, and performance. A critical purpose of DT&E is to identify the risks of development by testing and evaluating selected high-risk components or subsystems. DT&E is the developer's tool to show that the system performs as specified or that deficiencies have been corrected and the system is ready for operational testing and fielding (Figure 8-1). The DT&E results are used throughout the Systems Engineering Process (SEP) to provide valuable data in support of formal design reviews. This chapter describes the relationship of testing to the Formal Design Reviews (FDRs) essential to the SEP.

8.2 DT&E AND THE REVIEW PROCESS

8.2.1 The Technical Review Process

Technical reviews and audits are conducted by the government and the contractor as part of the SEP to ensure the design meets the system, subsystem, and software specifications. Each review is unique in its timing and orientation. Some reviews build on previous reviews and take the design and testing effort one step closer to the final system design to satisfy the operational concept/purpose for the weapon system. Table 8-1 illustrates the sequencing of the technical reviews in relation to the Test and Evaluation (T&E) phases.

Figure 8-1. Relationship of DT&E to the Acquisition Process (the figure shows DT&E activity in every phase: studies, analysis, and simulation supporting the technology development strategy and Initial Capability Document before Milestone A; DT&E of prototypes and engineering development models during Technology Development and System Development and Demonstration to address technical risk, engineering completeness, the "ilities," system performance specifications, technical compliance, and qualification testing; and DT&E of LRIP and production-design articles during Production and Deployment and Operations and Support to validate specifications, system performance, modifications, and acceptance, keyed to Milestones A, B, and C and the Full Rate Production Decision Review)


Table 8-1. Technical Reviews and Audits

INITIAL TECHNICAL REVIEW (ITR)
  When: Early Concept Refinement
  Purpose: Evaluate the initial program technical baseline; assess initial program cost estimates and costs
  Documentation/Data: Cost Analysis Requirements Description (CARD)-like document; assessment of technical risks

ALTERNATIVE SYSTEMS REVIEW (ASR)
  When: Late Concept Refinement (pre-Milestone A)
  Purpose: Assess requirements against customer needs; assess readiness of the preferred alternative to enter Technology Development
  Documentation/Data: Trade studies; preliminary system spec; input to TDS; input to AoA; Systems Engineering Plan

SYSTEM REQUIREMENTS REVIEW (SRR)
  When: Early systems integration
  Purpose: Evaluate system functional requirements
  Documentation/Data: Preliminary performance spec; preliminary planning documentation; FFBD, RAS, MBN analysis

SYSTEM FUNCTIONAL REVIEW (SFR)
  When: Early systems integration
  Purpose: Evaluate system design; validate system spec; establish system-level functional baseline
  Documentation/Data: Performance spec; preliminary item (performance) spec; design documents; RAS, TLS

SOFTWARE SPECIFICATION REVIEW (SSR)
  When: Mid systems integration
  Purpose: Evaluate software performance requirements; validate software specs; establish software specs baseline
  Documentation/Data: Software specs (SRS and IRS); operations concept document

PRELIMINARY DESIGN REVIEW (PDR)
  When: Late systems integration or early systems demonstration
  Purpose: Validate item (performance) specs; establish hardware allocated baseline; evaluate preliminary hardware and software design
  Documentation/Data: Item (performance) spec; design documents; test plan; ICD; engineering drawings; preliminary SDD and IDD

CRITICAL DESIGN REVIEW (CDR)
  When: Early systems demonstration
  Purpose: Evaluate CI design; determine readiness for fabrication of the EDM
  Documentation/Data: Preliminary item (detail), process, and material specs; detail design documents, including SDD and IDD

TEST READINESS REVIEW (TRR)
  When: Mid/late systems demonstration
  Purpose: Approve software test procedures; determine readiness for formal test
  Documentation/Data: Software test plan/procedures; informal software test results

FUNCTIONAL CONFIGURATION AUDIT (FCA)
  When: Late systems demonstration or early Production and Deployment
  Purpose: Verify that CI actual performance complies with the hardware development specification or the SRS and IRS
  Documentation/Data: Test plans and descriptions; software test reports

FORMAL QUALIFICATION REVIEW (FQR)
  When: Late LRIP
  Purpose: Verify that CIs perform in the system environment
  Documentation/Data: Test reports; specs; O&S documents

PRODUCTION READINESS REVIEW (PRR)
  When: Incremental, during systems demonstration
  Purpose: Assess risk for the production go-ahead decision
  Documentation/Data: Production planning documents

PHYSICAL CONFIGURATION AUDIT (PCA)
  When: LRIP/early FRP
  Purpose: Formal examination of the as-built configuration
  Documentation/Data: Final item (detail) spec; listings; Level II and III drawings

IN-SERVICE REVIEW (ISR)
  When: After the system is in operational use (post-FRPDR)
  Purpose: Assess system operations in field conditions; assess support of the fielded system
  Documentation/Data: System hazard risk assessment; operational readiness assessment; discrepancy reports

SOURCE: Defense Acquisition Guidebook, September 2004.


The review process was established to ensure that the system under development will meet government requirements. The reviews evaluate data from contractor and government testing, engineering analysis, and models to determine if the system or its components will eventually meet all functional and physical specifications and to determine the final system design (Figure 8-2). The system specification is very important in this process. It is the document used as a benchmark to compare contractor progress in designing and developing the desired product. Guidelines for these formal technical reviews and audits can be found in Processes for Engineering a System, or Application and Management of the Systems Engineering Process.1

8.2.2 Testing in Support of Technical Reviews

The testing community must be continually involved in supporting the technical reviews of their systems. The timing of these events will vary somewhat from program to program, based upon the explicit and unique needs of the acquisition strategy. Lower level design reviews will lead to system-level reviews. Decisions made at these reviews have major impacts on the system test design, the resources required to test, and the development of the Test and Evaluation Master Plan (TEMP) and other documentation. A more detailed discussion of testing to support the technical reviews is provided in the Systems Engineering Fundamentals Guide.2 The reviews focus primarily on government technical specifications for the system. Figure 8-3 illustrates the program specifications and how they are developed in the system life cycle.

8.2.3 Design Reviews and Audits

8.2.3.1 Concept Refinement (CR) and Technology Development (TD)

The Alternative Systems Review (ASR) is conducted to demonstrate the preferred system concept(s). Technical reviews conducted during the early stages of development tend to focus on top-level concepts and system definitions, clarifying the requirements on which subsequent design and development activities will be based.

Figure 8-2. Technical Review Timeline (the figure places the technical reviews ITR, ASR, SRR, IBR, SFR, PDR, CDR, TRR, SVR(FCA)/PRR, OTRR, PCA, and ISR, along with program reviews and Technology Readiness Assessments, on the acquisition timeline from Concept Refinement and Technology Development through System Development and Demonstration, Production and Deployment, and Operations and Support, including the Design Readiness Review, LRIP, IOT&E, the FRP decision, and sustainment and disposal; the SRR and IBR may occur more than once)

SOURCE: Skalamera, Robert J. November 2004. "Implementing OSD Systems Engineering Policy" briefing to the USD(AT&L).

8.2.3.2 System Development and Demonstration (SDD)

The System Requirements Review (SRR) is normally conducted late in the system concept evaluation or shortly after program initiation. It consists of a review of the system/system segment specifications, previously known as the "A" specifications,3 and is conducted after the accomplishment of functional analysis and preliminary requirements allocation. During this review, the systems engineering management activity and its output are reviewed for responsiveness to the Statement of Work (SOW) requirements. The primary function of the SRR is to ensure that the system's requirements have been completely and properly identified and that there is a mutual understanding between the contractor and the government. During the review, the contractor describes progress and any problems in risk identification and ranking, risk avoidance and reduction, trade-off analysis, producibility and manufacturing considerations, and hazard considerations. The results of integrated test planning are reviewed to ensure the adequacy of planning to assess the design and to identify risks. Issues of testability of requirements should be discussed.

The System Functional Review (SFR) is conducted as a final review before submittal of the prototype design products. The system specification is validated to ensure that the most current specification is included in the system functional baseline and that it is adequate and cost-effective to satisfy validated mission requirements. The SFR encompasses the total system requirement of operations, maintenance, test, training, computers, facilities, personnel, and logistics considerations. A technical understanding should be reached during this review on the validity and degree of completeness of the specifications, design, Operational Concept Documentation (OCD), Software Requirements Specifications (SRSs), and Interface Requirements Specifications (IRSs).

Figure 8-3. Specifications Summary

SYSTEM/SEGMENT SPECIFICATION (old Type A)
  When prepared: Early SI
  Preparing agent: Development/program manager and industry
  Approving agent: Development/program manager and user
  Content: Defines mission and technical requirements; allocates requirements to functional areas; documents design constraints; defines interfaces
  Baseline: Functional

ITEM PERFORMANCE SPECIFICATION (old Type B)
  When prepared: Mid SD
  Preparing agent: Industry
  Approving agent: Development/program manager
  Content: Details design requirements; states and describes performance characteristics of each CI; differentiates requirements according to complexity and discipline sets
  Baseline: Allocated ("design to")

ITEM DETAIL SPECIFICATION (old Type C)
  When prepared: Late SD
  Preparing agent: Industry
  Approving agent: Development/program manager
  Content: Defines form, fit, function, performance, and test requirements for acceptance
  Baseline: Product

PROCESS SPECIFICATION (old Type D)
  When prepared: Late SD
  Preparing agent: Industry
  Approving agent: Development/program manager
  Content: Defines processes performed during fabrication
  Baseline: Product

MATERIAL SPECIFICATION (old Type E)
  When prepared: Late SD
  Preparing agent: Industry
  Approving agent: Development/program manager
  Content: Defines production of raw or semi-fabricated material used in "build up" fabrication
  Baseline: Product

The Software Specification Review (SSR) is a formal review of the Computer Software Configuration Item (CSCI) requirements, normally held after the SFR but before the start of CSCI preliminary design. Its purpose is to validate the allocated baseline for preliminary CSCI design by demonstrating to the government the adequacy of the SRSs, IRSs, and OCDs.

The Preliminary Design Review (PDR) is a formal technical review of the basic design approach for a Configuration Item (CI). It is conducted at the CI and system level early in the SDD phase to confirm that the preliminary design logically follows the SFR findings and meets the system requirements. The review results in an approval to begin the detailed design. The draft item (performance) specifications are reviewed during the PDR. The purposes of the PDR are to: evaluate the progress, technical adequacy, and risk resolution (on a technical, cost, and schedule basis) of the CI design approach; review Development Test (DT) and Operational Test (OT) activities planned to measure the performance of each CI; and establish the existence and compatibility of the physical and functional interfaces among the CI and other equipment.

The Critical Design Review (CDR) may be conducted on each CI and/or at the system level. It is conducted on the Engineering Development Model (EDM) design when the detailed design is essentially complete and prior to the Functional Configuration Audit (FCA). During the CDR, the overall technical program risks associated with each CI are also reviewed on a technical, cost, and schedule basis. The CDR includes a review of the item (detail) specifications and the status of both the system's hardware and software. Input from qualification testing should assist in determining readiness for design freeze and Low Rate Initial Production (LRIP). Test plans are reviewed to assess whether test efforts are being developed sufficiently to indicate that a Test Readiness Review (TRR) would be successful. The CDR should precede the program Design Readiness Review (DRR).

The TRR was originally a formal review of the contractor's readiness to begin CSCI testing but has evolved into a review of both hardware and software readiness. It is conducted after the software test procedures are available and computer software component testing has been completed. A government witness will observe the system demonstration to verify that the system is ready to proceed with CSCI testing. The purpose of the TRR is for the Program Management Office (PMO) to determine whether the test procedures are complete and to verify their compliance with test plans and descriptions.

8.2.3.3 Production and Deployment (P&D)

The FCA is a formal review to verify that the CI's performance complies with its system specification. The item specifications are derived from the system requirements and baseline documentation. During the FCA, all relevant test data are reviewed to verify that the item has performed as required by its functional and/or allocated configuration identification. The audit is conducted on an item (prototype or production) representative of the configuration to be released for production. The audit consists of a review of the contractor's test procedures and results. The information provided will be used during the FCA to determine the status of planned tests.

The Physical Configuration Audit (PCA) is a formal review that establishes the product baseline as reflected in an early production CI. It is the examination of the as-built version of the hardware and software CIs against their technical documentation. The PCA also determines that the acceptance testing requirements prescribed by the documentation are adequate for acceptance of production units of a CI by Quality Assurance (QA) activities. It includes a detailed audit of engineering drawings, final Part II item (detail) specifications, technical data, and plans for testing that will be utilized during production. The PCA is performed on all first articles and on the first CIs delivered by a new contractor.

The System Verification Review (SVR) is a systems-level configuration audit that may be conducted after system testing is completed. The objective is to verify that the actual performance of the CI (the production configuration), as determined through testing, complies with its item (performance) specifications and to document the results of the qualification tests. The SVR and FCA are often performed at the same time; however, if sufficient test results are not available at the FCA to ensure the CI will perform in its operational environment, the SVR can be scheduled for a later time.

The Production Readiness Review (PRR) is an assessment of the contractor's ability to produce the items on the contract. It is usually a series of reviews conducted before an LRIP or FRP decision. For more information, see Chapter 10, Production-Related Testing Activities.

8.3 CONFIGURATION CHANGE CONTROL

Configuration change control is reviewed to assess the impact of engineering or design changes. It is conducted by the engineering, T&E, and Program Manager (PM) portions of the PMO. Most approved Class I Engineering Change Proposals (ECPs) will require additional testing, and the test manager must accommodate the new schedules and resource requirements. Adequate testing must be accomplished to ensure integration and compatibility of these changes. For example, an Engineering Change Review (ECR) was conducted to replace the black-and-white monitors in the Airborne Warning and Control System (AWACS) and integrate color monitors. Further, the AWACS operating software had to be upgraded to handle the color enhancement. The review was conducted by the government PMO, and sections of the PMO were tasked to contract, test, engineer, logistically support, control, cost, and finance the change to completion. Guidelines for configuration control and engineering changes are discussed in Configuration Control – Engineering Changes, Deviations, and Waivers.4

8.4 SUMMARY

Design reviews are an integral and essential part of the SEP. The meetings range from very formal reviews by government and contractor PMs to informal technical reviews concerned with product or task elements of the Work Breakdown Structure (WBS). Reviews may be conducted in increments over time. All reviews share the common objective of determining the technical adequacy of the existing design to meet technical requirements. The DT/OT assessments and test results are made available to the reviews, and it is important that the test community be involved.


ENDNOTES

1. EIA Standard 632, Processes for Engineering a System, January 1999; or IEEE 1220-1994, Application and Management of the Systems Engineering Process, 1994.

2. Defense Systems Management College, Systems Engineering Fundamentals Guide, January 2001.

3. Military Standard (MIL-STD)-1512A, Technical Reviews and Audits for Systems, Equipment, and Computer Programs; System Functional Block Diagram, Chapter 12, September 30, 1997.

4. EIA Standard 649, MIL-STD-480, Configuration Control – Engineering Changes, Deviations, and Waivers, August 1, 1998.


CHAPTER 9
COMBINED AND CONCURRENT TESTING

9.1 INTRODUCTION

The terms "concurrency," "concurrent testing," and "combined testing" are sometimes subject to misinterpretation. Concurrency is defined as an approach to system development and acquisition in which phases of the acquisition process that normally occur sequentially overlap to some extent. For example, a weapon system enters the production phase while development efforts are still underway.

Concurrent testing refers to circumstances in which development testing and operational testing take place at the same time as two parallel, but separate and distinct, activities. In contrast, combined/integrated testing refers to a single test program conducted to support Development Test (DT) and Operational Test (OT) objectives. This chapter discusses the use of combined testing and concurrent testing and highlights some of the advantages and disadvantages associated with these approaches (Table 9-1).

9.2 COMBINING DEVELOPMENT TEST AND OPERATIONAL TEST

Certain test events can be organized to provide information useful to development testers and operational testers. For example, a prototype free-fall munition could be released from a fighter aircraft at operational employment conditions instead of from a static stand to satisfy DT and OT objectives. Such instances need to be identified to prevent unnecessary duplication of effort and to control costs. A combined testing approach is also appropriate for certain specialized types of testing. For example, in the case of Nuclear Hardness and Survivability (NHS) testing, systems cannot be tested in a totally realistic operational environment; therefore, a single test program is often used to meet both DT and OT objectives.

A combined/integrated Development Test and Evaluation (DT&E) and Operational Test and Evaluation (OT&E) approach should be considered when there are time and cost savings.1 The combined approach must not compromise either DT or OT objectives. If this approach is elected, planning efforts must be carefully coordinated early in the program to ensure data are obtained to satisfy the needs of both the developing agency and the independent operational tester. Care must also be exercised to ensure a combined test program contains dedicated OT events to satisfy the requirement for an independent evaluation. A final independent phase of OT&E shall be required for Beyond Low Rate Initial Production (BLRIP) decisions. In all combined test programs, provisions should be made for separate, independent development and operational evaluations of test results.

Service regulations describe the sequence of activities in a combined testing program as follows:

Although OT&E is separate and distinct from DT&E, most of the generated data are mutually beneficial and freely shared. Similarly, the resources needed to conduct and support both test efforts are often the same or very similar. Thus, when sequential DT&E and OT&E efforts would cause delay or increase the acquisition cost of the system, DT&E and OT&E are combined. When combined testing is planned, the necessary test conditions and data required by both DT&E and OT&E organizations must be integrated. Combined testing can normally be divided into three segments.

In the first segment, DT&E event[s] usually assume priority because critical technical and engineering tests must be accomplished to continue the engineering and development process. During this early period, OT&E personnel participate to gain familiarity with the system and to gain access to any test data that can support OT&E. Next, the combined portion of the testing frequently includes shared objectives or joint data requirements. The last segment normally contains the dedicated OT&E or separate OT&E events to be conducted by the OT&E agency. The OT&E agency and implementing command must ensure the combined test is planned and executed to provide the necessary OT information. The OT&E agency provides an independent evaluation of the OT&E portion and is ultimately responsible for achieving OT&E objectives.

The testing of the Navy's F-14 aircraft has been cited as an example of a successful combined Test and Evaluation (T&E) program.2 A key factor in the success of the F-14 approach was the selection of a T&E coordinator responsible for supervising the generation of test plans that integrated the technical requirements of the developers with the operational requirements of the users. The T&E coordinator was also responsible for the allocation of test resources and the overall management of the test. In a paper for the Defense Systems Management College, Mr. Thomas Hoivik describes the successful F-14 test program as follows:

The majority of the Navy developmental and operational testing took place during the same period and even on the same flights. Maximum use was made of contractor demonstrations witnessed by the Navy testing activities to obviate the retesting of a technical point already demonstrated by the contractor. Witnessing by testing activities was crucially important and allowed the contractor's data to be readily accepted by the testing activities. This approach also helped to eliminate redundancy in testing, i.e., the testing of the same performance parameter by several different activities, which has been a consistent and wasteful feature of Navy testing in the past.3

This approach placed a great deal of responsibility directly on the shoulders of the T&E Coordinator, and required the T&E Coordinator's staff to deal knowledgeably with a wide-ranging and complex test plan.

Table 9-1. Combined vs. Concurrent Testing: Advantages and Limitations

COMBINED TESTING

Advantages:
• Shortens the time required for testing and, thus, the acquisition cycle.
• Achieves cost savings by eliminating redundant activities.
• Early involvement of OT&E personnel during system development increases their familiarity with the system.
• Early involvement of OT&E personnel permits communication of operational concerns to the developer in time to allow changes in system design.

Limitations:
• Requires extensive early coordination.
• Test objectives may be compromised.
• Requires development of a DT/OT common test database.
• Combined testing programs are often conducted in a development environment.
• The test will be difficult to design to meet DT and OT requirements.
• The system contractor is prohibited by law from participating in IOT&E.
• Time constraints may result in less coverage than planned for OT&E objectives.

CONCURRENT TESTING

Advantages:
• Shortens the time required for testing and, thus, the acquisition cycle.
• Achieves cost savings by overlapping redundant activities.
• Provides earlier feedback to the development process.

Limitations:
• Requires extensive coordination of test assets.
• If the system design is unstable and far-reaching modifications are made, OT&E must be repeated.
• Concurrent testing programs often do not have DT data available for OT&E planning and evaluation.
• Contractor personnel frequently perform maintenance functions in DT&E; logistics support by the user must be available earlier for IOT&E.
• Limited test assets may result in less coverage than planned for OT&E objectives.

9.3 CONCURRENT TESTING

In 1983, a senior Department of Defense (DoD) T&E official testified that a concurrent testing approach is usually not an effective strategy.4 He acknowledged, however, that certain test events may provide information useful to development and operational testers, and test planners must be alert to identify those events. His testimony included the following examples of situations where a concurrent testing approach was unsuccessful:

(1) During AAH (Advanced Attack Helicopter) testing in 1981, the Target Acquisition Designation System (TADS) was undergoing developmental and operational testing at the same time. The schedule did not allow enough time for qualification testing (a development test activity) of the TADS prototype prior to a full field test of the total aircraft system, nor was there time to introduce changes to TADS problems discovered in tests. As a result, the TADS performed poorly and was unreliable during the operational test. The resulting DSARC [Defense Systems Acquisition Review Council] action required the Army to fix and retest the TADS prior to release of second year and subsequent production funds.

(2) When the AIM-7 Sparrow air-to-air missile was tested, an attempt was made to move into operational testing while developmental reliability testing was still underway. The operational test was suspended after less than 2 weeks because of poor reliability of the test missiles. The program concentrated on an intensive reliability improvement effort. A year after the initial false start, a full operational test was conducted and completed successfully.


(3) The Maverick missile had a similar experience of being tested in an operational environment before component reliability testing was completed. As a result, reliability failures had a major impact on the operational testers and resulted in the program being extended.

9.4 ADVANTAGES AND LIMITATIONS

Before adopting a combined or concurrent testing approach, program and test managers are advised to consider the advantages and disadvantages summarized in Table 9-1.

9.5 SUMMARY

A combined or concurrent testing approach may offer an effective means of shortening the time required for testing and achieving cost savings. If such an approach is used, extensive coordination is required to ensure the development and operational requirements are addressed.

It is possible to have combined test teams, consisting of DT&E, OT&E, and contractor personnel, involved throughout the testing process. The teams can provide mutual support and share mutually beneficial data as long as the test program is carefully planned and executed and reporting activities are conducted separately.


ENDNOTES

1. As suggested by DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

2. Hoivik, Thomas H., The Navy Test and Evaluation Process in Major Systems Acquisition, DTIC [Defense Technical Information Center] Accession Number AD A035935, November 1976.

3. Ibid.

4. Hearing before the Senate Committee on Governmental Affairs, June 23, 1983.


CHAPTER 10
PRODUCTION-RELATED TESTING ACTIVITIES

10.1 INTRODUCTION

Most of the Test and Evaluation (T&E) discussed in this guide concerns the testing of the actual weapon or system being developed, but the Program Manager (PM) must also evaluate production-related test activities and the production process. This chapter describes production management and the production process testing required to ensure the effectiveness of the manufacturing process and the producibility of the system's design.

Normally, the Development Test (DT) and Operational Test (OT) organizations are not involved directly in this process. Usually, the manufacturing and Quality Assurance (QA) sections of the program office and a representative of the government Defense Contract Management Agency (DCMA) oversee or perform many of these functions.

10.2 PRODUCTION MANAGEMENT

Production (manufacturing) management is the effective use of resources to produce, on schedule, the required number of end items that meet specified quality, performance, and cost. Production management includes, but is not limited to, industrial resource analysis, producibility assessment, producibility engineering and planning, production engineering, industrial preparedness planning, post-production planning, and productivity enhancement. Production management begins early in the acquisition process, as early as the concept assessments, and is specifically addressed at each program milestone decision point. For instance, before program initiation, production feasibility, costs, and risks should be addressed. The PM must conduct an Industrial Resource Analysis (IRA) to determine the availability of production resources (e.g., capital, material, manpower) required to support production of the weapon system. On the basis of the results of the IRA, critical materials, deficiencies in the U.S. industrial base, and requirements for new or updated manufacturing technology can be identified. Analysis of industrial-base capacity is one of the considerations in preparing for the program start decision. As development proceeds, the manufacturing strategy is developed, and detailed plans are made for production. Independent producibility assessments, conducted in preparation for the transition from development to production, are reviewed before entering Low Rate Initial Production (LRIP). Once production starts, the producibility of the system design concept is evaluated to verify that the system can be manufactured in compliance with production-cost and industrial-base goals and thresholds.

The LRIP decision is supported by an assessment of the readiness of the system to enter production. The system cannot enter production until it is determined the principal contractors have the necessary resources (i.e., physical, financial, and managerial capacity) to achieve the cost and schedule commitments and to meet peacetime and mobilization requirements for production of the system. The method of assessing production readiness is the Production Readiness Review (PRR), which is conducted by the PM and staff.

Page 116: DAU T&E Management Guide.pdf

10-2

10.3 PRODUCTION READINESS REVIEW

The following is an overview of PRRs:

This review is intended to determine the status of completion of the specific actions that must be satisfactorily accomplished prior to executing a production go-ahead decision. The review is accomplished in an incremental fashion before commencing production. Usually two initial reviews and one final review are conducted to assess the risk in exercising the production go-ahead decision. In its earlier stages, the PRR concerns itself with gross-level manufacturing concerns, such as the need for identifying high-risk/low-yield manufacturing processes or materials, or the requirement for manufacturing development effort to satisfy design requirements. Timing of the incremental PRRs is a function of program posture and is not specifically locked into other reviews.

The conduct of a PRR (Table 10-1) is the responsibility of the PM, who usually appoints a director. The director assembles a team comprised of individuals in the disciplines of design, industry, manufacturing, procurement, inventory control, contracts, engineering, and quality training. The PRR director organizes and manages the team effort and supervises preparation of the findings.

10.4 QUALIFICATION TESTING

Qualification testing is performed to verify the design and manufacturing process, and it provides a baseline for subsequent acceptance tests. Production qualification testing is conducted at the unit, subsystem, and system level on production items and is completed before the production decision. The results of these tests are a critical factor in assessing the system's readiness for production. Down-line Production Qualification Tests (PQTs) are performed to verify process control and may be performed on selected parameters rather than at the levels originally selected for qualification.

10.4.1 Production Qualification Tests

PQTs are a series of formal contractual tests conducted to ensure design integrity over the specified operational and environmental range. The tests are conducted on pre-Full Rate Production (FRP) items fabricated to the proposed production design drawings and specifications. The PQTs include all contractual Reliability and Maintainability (R&M) demonstration tests required prior to production release. For volume acquisitions, these tests are a constraint to production release.

10.4.2 First Article Tests (FATs)

FATs consist of a series of formal contractual tests conducted to ensure the effectiveness of the manufacturing process, equipment, and procedures. These tests are conducted on a random sample from the first production lot. The series of tests is repeated if the manufacturing process, equipment, or procedures are changed significantly and when a second or alternative source of manufacturing is brought on line.1
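By way of illustration only (FAR Part 9.3 does not prescribe the statistics), testing of a random sample from a production lot is often framed as an attribute acceptance-sampling problem. The sample size, acceptance number, and lot defect rates in the Python sketch below are hypothetical.

    # Illustrative attribute acceptance-sampling calculation for a sample drawn
    # from a production lot. Hypothetical plan: inspect n items, accept the lot
    # if the number of defectives is <= c. Binomial approximation (large lot).
    from math import comb

    def prob_accept(defect_rate, n, c):
        return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
                   for k in range(c + 1))

    n, c = 20, 1   # assumed sample size and acceptance number
    for defect_rate in (0.01, 0.05, 0.10):
        print(f"Lot fraction defective {defect_rate:.0%} -> "
              f"probability lot is accepted: {prob_accept(defect_rate, n, c):.2f}")

The output traces how the chance of accepting the lot falls as the true fraction defective rises, which is the kind of risk trade the government and contractor weigh when sizing first article and lot acceptance samples.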

10.5 TRANSITION TO PRODUCTION

In the acquisition process, the first indication that a system will experience problems often comes during the transition from engineering design to LRIP. This transition continues over an extended period, often months or years, and during this period the system is undergoing stringent contractor and government testing. There may be unexpected failures requiring significant design changes, which affect quality, producibility, and supportability and may require program schedule slippage. Long periods of transition usually indicate that insufficient attention to design or producibility was given early in the program's acquisition process.


Table 10-1. PRR Guidelines Checklist

PRODUCT DESIGN
• Producible at low risk
• Stabilized at a low rate of change
• Validated
• Reliability, maintainability, and performance demonstrated
• Components engineering has approved all parts selections

INDUSTRIAL RESOURCES
• Adequate plant capacity (peacetime and wartime demands)
• Facilities, special production and test equipment, and tooling identified
• Needed plant modernization (CAD/CAM, other automation) accomplished, producing an invested capital payback in 2 to 5 years
• Associated computer software developed
• Skilled personnel and training programs available

PRODUCTION ENGINEERING AND PLANNING (PEP)
• Production plan developed*
• Production schedules compatible with delivery requirements
• Manufacturing methods and processes integrated with facilities, equipment, tooling, and plant layout
• Value engineering applied
• Alternate production approaches available
• Drawings, standards, and shop instructions are explicit
• Configuration management adequate
• Production policies and procedures documented
• Management information system adequate
• Contractor's management structure is acceptable to the PMO
• The PEP checklist has been reviewed

MATERIALS
• All selected materials approved by the contractor's material engineers
• Bill of materials prepared
• Make-or-buy decisions complete
• Procurement of long lead-time items identified
• Sole-source and government-furnished items identified
• Contractor's inventory-control system adequate
• Contractor's material cost procurement plan complete

QUALITY ASSURANCE (QA)
• Quality plan in accordance with contract requirements
• Quality control procedures and acceptance criteria established
• QA organization participates in the production planning effort

LOGISTICS
• Operational support, test, and diagnostic equipment available at system deployment
• Training aids, simulators, and other devices ready at system deployment
• Spares integrated into production lot flow

*Military Standard (MIL-STD)-1528, Manufacturing Master Plan.


10.5.1 Transition Planning

Producibility Engineering and Planning (PEP) is the common thread that guides a system from early concept to production. Planning is a management tool used to ensure that adequate risk-handling measures have been taken to transition from development to production. It contains a checklist to be used during the readiness reviews. Planning should tie together the applications of designing, testing, and manufacturing activities to reduce data requirements, duplication of effort, costs, and scheduling, and to ensure early success of the LRIP first production article.

10.5.2 Testing During the Transition

Testing accomplished during the transition from development to production will include acceptance testing, manufacturing screening, and final testing. These technical tests are performed by the contractor to ensure the system will transition smoothly and that test design and manufacturing issues affecting design are addressed. During this same period, the government will be using the latest available Configuration Item (CI) to conduct the Initial Operational Test and Evaluation (IOT&E). The impact of these tests may overwhelm the configuration management of the system unless careful planning is accomplished to handle these changes.

10.6 LOW RATE INITIAL PRODUCTION

LRIP is the production of a system in limited quantity to provide articles for IOT&E and to demonstrate production capability. It also permits an orderly increase in the production rate sufficient to lead to FRP upon successful completion of operational testing. The decision to have an LRIP is made at the Milestone C approval of the program acquisition strategy. At that time, the PM must identify the quantity to be produced during LRIP and validate the quantity of LRIP articles to be used for IOT&E [for Acquisition Category (ACAT) I programs, approved by the Director, Operational Test and Evaluation (DOT&E); for ACAT II and III programs, approved by the Service Operational Test Agency (OTA)].2 When the decision authority thinks the system will not perform to expectation, the PM may direct that it not proceed into LRIP until there is a program review. The DOT&E submits a Beyond LRIP (BLRIP) Report on all oversight systems to congressional committees before the FRP decision, approving the system to proceed beyond LRIP, is made.

10.7 PRODUCTION ACCEPTANCE TEST AND EVALUATION (PAT&E)

The PAT&E ensures that production items demonstrate the fulfillment of the requirements and specifications of the procuring contract or agreements. The testing also ensures the system being produced demonstrates the same performance as the pre-FRP models. The procured items or system must operate in accordance with system and item specifications. The PAT&E is usually conducted by the program office QA section at the contractor's plant and may involve operational users.

For example, for the Rockwell B-1B bomber production acceptance, Rockwell and Air Force QA inspectors reviewed all manufacturing and ground testing results for each aircraft. In addition, a flight test team, composed of contractor and Air Force test pilots, flew each aircraft a minimum of 10 hours, demonstrating all on-board aircraft systems while in flight. Any discrepancies in flight were noted, corrected, and tested on the ground; they were then retested on subsequent checkouts and acceptance flights. Once each aircraft had passed all tests and all systems were fully operational, Air Force authorities accepted the aircraft. The test documentation also became part of the delivered package. During this test period, the program office monitored each aircraft's daily progress.


10.8 SUMMARY

A primary purpose of production-related testing is to lower the production risk in a Major Defense Acquisition Program (MDAP). The PM must ensure the contractor's manufacturing strategy and capabilities will result in the desired product within acceptable cost. The LRIP and PAT&E also play major roles in ensuring that the production unit is identical to the design drawings and conforms to the specifications of the contract, that the IOT&E is conducted with representative system configurations, and that the system fielded meets the user's needs.


ENDNOTES

1. Federal Acquisition Regulation (FAR) Part 9.3, First Article Testing and Approval, September 2001.

2. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.


MODULE III
OPERATIONAL TEST AND EVALUATION

Operational Test and Evaluation (OT&E) is conducted to ensure that a weapon system meets the validated requirements of the user in a realistic scenario. Operational tests are focused on operational requirements, effectiveness, and suitability, not on the proof of engineering specifications, as is the case with development testing. This module provides an overview of OT&E and discusses how OT&E results provide essential information for milestone decisions.


CHAPTER 11
INTRODUCTION TO OPERATIONAL TEST AND EVALUATION

11.1 INTRODUCTION

This chapter provides an introduction to the concept of Operational Test and Evaluation (OT&E). It outlines the purpose of OT&E, discusses the primary participants in the OT&E process, describes several types of OT&E, and includes some general guidelines for the successful planning, execution, and reporting of OT&E programs.

11.2 PURPOSE OF OT&E

OT&E is conducted for major programs by an organization that is independent of the developing, procuring, and using commands. Some form of operational assessment is normally conducted in each acquisition phase. Each assessment should be keyed to a decision review in the materiel acquisition process. It should include typical user operators, crews, or units in realistic combat simulations of operational environments. The OT&E provides the decision authority with an estimate of:

(1) The degree of satisfaction of the user's requirements, expressed as operational effectiveness and operational suitability of the new system;

(2) The system's desirability, considering equipment already available, and the operational benefits or burdens associated with the new system;

(3) The need for further development of the new system to correct performance deficiencies;

(4) The adequacy of doctrine, organizations, operating techniques, tactics, and training for employment of the system; of maintenance support for the system; and of the system's performance in the countermeasures environment.

11.3 TEST PARTICIPANTS

The OT&E of developing systems is managed by an independent Operational Test Agency (OTA), which each Service is required to maintain. It is accomplished under conditions of operational realism whenever possible. Personnel who operate, maintain, and support the system during OT&E are trained to a level commensurate with that of personnel who will perform these functions under peacetime and wartime conditions. Also, Program Management Office (PMO) personnel, the Integrated Product Teams (IPTs), and test coordinating groups play important parts in the overall OT&E planning and execution process.

11.3.1 Service Operational Test Agencies

The OT&E agencies should become involved early in the system's life cycle, usually during the program's evaluation of concepts. At this time, they can begin to develop strategies for conducting operational tests. As test planning continues, a more detailed Test and Evaluation Master Plan (TEMP), Part IV (OT&E), is developed, and the test resources are identified and scheduled. During the early stages, the OTAs structure an OT&E program consistent with the approved acquisition strategy for the system, identify critical Operational Test (OT) issues, and assess the adequacy of candidate systems. As the program moves into advanced planning, OT&E efforts are directed toward becoming familiar with the system, encouraging interface between the user and developer, and further refining the Critical Operational Issues (COIs). The OTA test directors, analysts, and evaluators design the OT&E so that the data collected will support answering the COIs. Each Service has an independent organization dedicated to planning, executing, and reporting the results of that Service's OT&E activities. These organizations are the Army Test and Evaluation Command (ATEC), the Navy Operational Test and Evaluation Force (OPTEVFOR), the Air Force Operational Test and Evaluation Center (AFOTEC), and the Marine Corps Operational Test and Evaluation Activity (MCOTEA). Although policy only requires OTAs within the Services, the Defense Information Systems Agency (DISA) has designated the Joint Interoperability Test Command (JITC) as the OTA for its Department of Defense (DoD) information systems, and a Special Operations Command (SOCOM) element conducts much of its own IOT&E.

11.3.2 Test Personnel

Operational testing is conducted on materiel systems with "typical" user organizational units in a realistic operational environment. It uses personnel with the same Military Occupational Specialties (MOSs) as those who will operate, maintain, and support the system when fielded. Participants are trained in the system's operation based on the Service's operational mission profiles. Because some OT&E consist of force-on-force tests, the forces opposing the tested system must also be trained in the use of threat equipment, tactics, and doctrine. For operational testing conducted before Initial Operational Test and Evaluation (IOT&E), most system training is conducted by the system's contractor. For IOT&E, the contractor trains the Service school cadre who then train the participating organizational units. Once the system has entered Full-Rate Production (FRP), the Service will normally assume training responsibilities. Operational testing often requires a large support staff of data collectors and scenario controllers operating in the field with the user test forces and opposing forces.

11.4 TYPES OF OT&E

The OT&E can be subdivided into two phases: operational testing performed before FRP and operational testing performed after the FRP decision. The pre-FRP OT&E includes Early Operational Assessments (EOAs), Operational Assessments (OAs), and IOT&E. OAs begin early in the program, frequently before program start, and continue until the system is certified as ready for IOT&E. The IOT&E is conducted late during low rate production, when test assets are available, in support of the full rate decision review. The Navy uses the term "OPEVAL" (Operational Evaluation) to define IOT&E. After transition to FRP, all subsequent operational testing is usually referred to as Follow-on Operational Test and Evaluation (FOT&E). In the Air Force, if no Research and Development (R&D) funding is committed to a system, Qualification Operational Test and Evaluation (QOT&E) may be performed in lieu of IOT&E.

11.4.1 Early Operational Assessments

The EOAs are conducted primarily to forecast and evaluate the potential operational effectiveness and suitability of the weapon system during development. EOAs start during Concept Refinement (CR) and/or Technology Development (TD), may continue into system integration, and are conducted on prototypes of the developing design.

11.4.1.1 Operational Assessments

The OAs begin when the OTAs start their evaluations of system-level performance. The OTA uses any testing results, Modeling and Simulation (M&S), and data from other sources during an assessment. These data are evaluated by the OTA from an operational point of view. As the program matures, these OAs of performance requirements are conducted on Engineering Development Models (EDMs) or pre-production articles until the system performance is considered mature. The mature system can then be certified ready for its IOT&E (OPEVAL in the Navy). When using an evolutionary acquisition strategy, subsequent increments must have at least an OA before the new increment is issued to the user.

11.4.1.2 Initial Operational Test and Evaluation (Navy OPEVAL)

The IOT&E is the final dedicated phase of OT&E preceding an FRP decision. It is the final evaluation that entails dedicated operational testing of production-representative test articles and uses typical operational personnel in a scenario that is as realistic as possible in compliance with United States Code (U.S.C.).1 The IOT&E is conducted by an OT&E agency independent of the contractor, PMO, or developing agency. The test has been described as:

All OT&E is conducted on production or production-representative articles, to support the decision to proceed beyond Low Rate Initial Production [LRIP]. It is conducted to provide a valid estimate of expected system operational effectiveness and operational suitability. The definition of "OT&E" as spelled out in congressional legislation (see Glossary) is generally considered applicable only to Initial Operational Test and Evaluation (IOT&E).

Further, IOT&E must be conducted without system contractor personnel participation in any capacity other than that stipulated in in-service wartime tactics and doctrine, as set forth by Congress in Public Law 99-661.2 The results from this test are evaluated and presented to the Milestone Decision Authority (MDA) to support the Beyond Low-Rate Initial Production (BLRIP) decision (i.e., the decision to enter FRP). This phase of OT&E addresses the Key Performance Parameters (KPPs) identified in the Capability Production Document (CPD) and the Critical Operational Issues (COIs) in the TEMP. IOT&E test plans for ACAT I and IA and other designated programs must be approved by the OSD (Office of the Secretary of Defense) Director, Operational Test and Evaluation (DOT&E). Service IOT&E test reports provide the foundation for the DOT&E BLRIP Report.

11.4.2 Follow-On Operational Test and Evaluation (FOT&E)

The FOT&E is conducted after the FRP decision. The tests are conducted in a realistic tactical environment similar to that used in IOT&E, but fewer test items may be used. Normally FOT&E is conducted using fielded production systems with appropriate modifications, upgrades, or increments. Specific objectives of FOT&E include testing modifications that are to be incorporated into production systems, completing any deferred or incomplete IOT&E, evaluating correction of deficiencies found during IOT&E, and assessing reliability including spares support on deployed systems. The tests are also used to evaluate the system in a different platform application, for new tactical applications, or against new threats.

11.4.3 Qualification Operational Test and Evaluation (QOT&E) (USAF)

Air Force QOT&E may be performed by the major command, user, or AFOTEC and is conducted on minor modifications or new applications of existing equipment when no R&D funding is required. An example of a program in which QOT&E was performed by the Air Force is the A-10 Air-to-Air Self Defense Program. In this program the mission of the A-10 was expanded from strictly ground support to include an air-to-air defense role. To accomplish this, the A-10 aircraft was modified with off-the-shelf AIM-9 air-to-air missiles; QOT&E was performed on the system to evaluate its operational effectiveness and suitability.

11.5 TEST PLANNING

Operational Test (OT) planning is one of the most important parts of the OT&E process. Proper planning facilitates the acquisition of data to support the determination of the weapon system's operational effectiveness and suitability. Planning must be pursued in a deliberate, comprehensive, and structured manner. Careful and complete planning may not guarantee a successful test program, but inadequate planning can result in significant test problems, poor system performance, and cost overruns. OT planning is conducted by the OTA before program start, and more detailed planning usually starts about 2 years before each OT event.

Operational planning can be divided into three phases: early planning, advanced planning, and detailed planning. Early planning entails developing COIs, formulating a plan for evaluations, determining the Concept of Operation (COO), envisioning the operational environment, and developing mission scenarios and resource requirements. Advanced planning encompasses the determination of the purpose and scope of testing and identification of Measures of Effectiveness (MOEs) for critical issues. It includes developing test objectives, establishing a test approach, and estimating test resource requirements. Detailed planning involves developing step-by-step procedures to be followed as well as the final coordination of resource requirements necessary to carry out OT&E.

11.5.1 Testing Critical Operational Issues

The COIs have been described as:

A key operational effectiveness or operational suitability issue that must be examined in OT&E to determine the system's capability to perform its mission. A COI is normally phrased as a question to be answered in evaluating a system's operational effectiveness and/or operational suitability.

One of the purposes of OT&E is to resolve COIs about the system. The first step in an OT&E program is to identify these critical issues, some of which are explicit in the Operational Requirements Document (ORD). Examples can be found in questions such as: "How well does the system perform a particular aspect of its mission?" "Can the system be supported logistically in the field?" Other issues arise from questions asked about system performance or how it will affect other systems with which it must operate. Critical issues provide focus and direction for the OT. Identifying the issues is analogous to the first step in the Systems Engineering Process: defining the problem. When COIs are properly addressed, deficiencies in the system can be uncovered. They form the basis for a structured technique of analysis by which detailed sub-objectives or MOEs can be established. During the OT, each sub-objective is addressed by an actual test measurement (Measure of Performance (MOP)). After these issues are identified, the evaluation plans and test design are developed for test execution. (For more information, see Chapter 13.)

11.5.2 Test Realism

Test realism for OT&E will vary directly with the degree of system maturity. Efforts early in the acquisition program should focus on active involvement of users and operationally oriented environments. Fidelity of the "combat environment" should peak during the IOT&E, when force-on-force testing of the production representative system is conducted. The degree of success in replicating a realistic operational environment has a direct impact on the credibility of the IOT&E test report. Areas of primary concern for the test planner can be derived from the legislated definition of OT&E:


(1) A field test includes all of the elements normally expected to be encountered in the operational arena, such as appropriate size and type of maneuver terrain, environmental factors, day/night operations, austere living conditions, etc.

(2) Realistic combat should be replicated using appropriate tactics and doctrine; representative threat forces properly trained in the employment of threat equipment; free play responses to test stimulus; stress; a "dirty" battle area (fire, smoke, Nuclear, Biological and Chemical (NBC), Electronic Countermeasures (ECM), etc.); wartime tempo of operations; real time casualty assessment; and forces requiring interoperability.

(3) Any item means the production-representative configuration of the system at that point in time, including the appropriate logistics tail.

(4) Typical military users are obtained by taking a cross section of adequately trained skill levels and ranks of the intended operational force. Selection of "golden crews" or the best-of-the-best does not provide test data reflecting the successes or problems of the "Murphy and gang" of typical units.

In his book, Operational Test and Evaluation: A Systems Engineering Process, Roger Stevens stated, "In order to achieve realism effectively in an OT&E program, a concern for realism must pervade the entire test program from the very beginning of test planning to the time when the very last test iteration is run." Realism is a significant issue during planning and execution of OT&E.3

11.5.3 Selection of a Test Concept

An important step in the development of an OT&E program is to develop an overall test program concept. Determinations must be made regarding when OT&E will be conducted during systems development, what testing is to be done on production equipment, how the testing will be evolutionary, and what testing will have to wait until all system capabilities are developed. This concept can best be developed by considering a number of aspects such as test information requirements, system availability for test periods, and the demonstration of system capabilities. The test concept is driven by the acquisition strategy and is a roadmap used for planning T&E events. The DOT&E is briefed on test concepts for oversight programs and approves the test plan before IOT&E starts.

11.6 TEST EXECUTION

An OT plan is only as good as the execution of that plan. The execution is the essential bridge between test planning and test reporting. The test is executed through the OTA test director's efforts and the actions of the test team. For successful execution of the OT&E plan, the test director must direct and control the test resources and collect the data required for presentation to the decision authority. The test director must prepare for testing; activate and train the test team; develop test procedures and operating instructions; control data management; create OT&E plan revisions; and manage each of the test trials. The test director's data management duties will encompass collecting raw data, creating a data status matrix, and ensuring data quality by processing and reducing, verifying, filing, storing, retrieving, and analyzing collected data. Once all tests have been completed and the data are reduced and analyzed, the results must be reported. A sample test organization used for the Army OT&E of the I-81mm mortar program is illustrated in Figure 11-1. (In the Army, the Deputy Test Director comes from the OTA and controls the daily OT activity.)

11.7 TEST REPORTING

The IOT&E test report is a very important document. It must communicate the results of completed tests to decision authorities in a timely, factual, concise, comprehensive, and accurate manner. The report must present a balanced view of the weapon system's successes and problems during testing, illuminating both the positive aspects and system deficiencies discovered. Analysis of test data and their evaluation may be in one report (Air Force, Navy) or in separate documents (Army, Marine Corps). The Army's Independent Evaluation Report (IER) advises the decision review principals and MDA concerning the adequacy of testing; the system's operational effectiveness, suitability, and survivability; and provides recommendations for future T&E and system improvements.

Figure 11-1. Organizational Breakdown of the I-81mm Mortar Operational Test Directorate (The figure depicts a Test Director supported by Deputy Test Directors from the OTA and the user representative; a system representative from the developing command; and elements for human factors, site support, player units, data collection, data management, test control, performance, and RAM.)

There are four types of reports most frequently used in reporting OT&E results. These include status, interim, quick-look, and final reports. The status report gives periodic updates (e.g., monthly, quarterly) and reports recent test findings (discrete events such as missile firings). The interim report provides a summary of the cumulative test results to date when there is an extended period of testing. The quick-look reports provide preliminary test results, are usually prepared immediately after a test event (less than 7 days), and have been used to support program decision milestones. The final T&E report (Air Force, Navy) or IER (Army, Marine Corps) presents the conclusions and recommendations, including all supporting data, and covers the entire IOT&E program. The Service OTAs are required to provide reports of OT&E results in support of Milestones B and C, in addition to the IOT&E report required for the Full Rate Production Decision Review (FRPDR).4

11.8 SUMMARY

The purpose of OT&E is to assess operational effectiveness and suitability at each stage in the acquisition process. Operational effectiveness is a measure of the contribution of the system to mission accomplishment under actual conditions of employment. Operational suitability is a measure of the Reliability and Maintainability (R&M) of the system; the effort and level of training required to maintain, support, and operate it; and any unique logistics or training requirements it may have. The OT&E may provide information on tactics, doctrine, organization, and personnel requirements and may be used to assist in the preparation of operating and maintenance instructions and other publications. One of the most important aspects is that OT&E provides an independent evaluation of the degree of progress made toward satisfying the user's requirements during the system development process.


ENDNOTES

1. Title 10, U.S.C., Section 2399, Operational Test and Evaluation of Defense Acquisition Programs.

2. Public Law 99-661, National Defense Authorization Act (NDAA), Section 1207.

3. Stevens, Roger T., Operational Test and Evaluation: A Systems Engineering Process, John Wiley & Sons, New York: 1979, pp. 49-56.

4. As required by DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.


12
OT&E TO SUPPORT DECISION REVIEWS

12.1 INTRODUCTION

Mindful of principles of objectivity and impartial evaluation, Operational Test and Evaluation (OT&E) may be conducted before each decision review to provide the decision authority with the latest results from testing of Critical Operational Issues (COIs). The philosophy of OT&E has been related to three terms—adequacy, quality, and credibility:

Adequacy: The amount of data and realism of test conditions must be sufficient to support the evaluation of the COIs.

Quality: The test planning, control of test events, and treatment of data must provide clear and accurate test reports.

Credibility: The conduct of the test and data handling must be separated from external influence and personal biases.

Operational testing is conducted to provide information to support Department of Defense (DoD) executive-level management decisions on major acquisition programs. OT&E is accomplished using a test cycle of successive actions and documents. During the early stages of the program, the process is informal and modified as necessary. As programs mature, documentation for major systems and those designated by the Director, Operational Test and Evaluation (DOT&E) for oversight must be sent to the Office of the Secretary of Defense (OSD) for approval before the testing can be conducted or the systems can be cleared to proceed Beyond Low Rate Initial Production (BLRIP). Figure 12-1 illustrates how OT&E relates to the acquisition process.

12.2 CONCEPT REFINEMENT (CR) AND TECHNOLOGY DEVELOPMENT (TD)

The OT&E conducted during the first phase may be an Early Operational Assessment (EOA) focused on investigating the deficiencies identified during the mission area analysis. Operational testers participate in these evaluations to validate the OT&E requirements for future testing, to identify issues and criteria that can only be resolved through OT&E, and to initiate early test resource planning.

Before program initiation, the OT&E objectives are to assist in evaluating alternative concepts to solve the mission capability deficiencies and to assess the operational impact of the system. An early assessment also may provide data to support a decision on whether the technology is sufficiently mature to support entry into the next development phase. The OT&E conducted during this phase supports developing estimates of:

(1) The military need for the proposed system;

(2) A demonstration that there is a sound physical basis for a new system;

(3) An analysis of concepts, based on demonstrated physical phenomena, for satisfying the military need;

(4) The system's affordability and Life Cycle Cost (LCC);

(5) The ability of a modification to an existing U.S. or allied system to provide needed capability;

(6) An operational utility assessment;

(7) An impact of the system on the force structure.

Figure 12-1. OT&E Related to the Milestone Process (The figure maps EOAs, OAs, IOT&E, and FOT&E to the Concept Refinement, Technology Development, System Development and Demonstration, Production and Deployment, and Operations and Support phases and to Milestones A, B, and C and the Full Rate Production Decision Review.)

During concept assessment, there is normally no hardware available for the operational tester. Therefore, the EOA is conducted from surrogate test and experiment data, breadboard models, factory user trials, mock-ups/simulators, modeling/simulation, and user demonstrations (Figure 12-2). This makes early assessments difficult, and some areas cannot be covered in depth. However, these assessments provide vital introductory information on the system's potential operational utility.

The OT&E products from this phase of testing include the information provided to the decision authority, data collected for further evaluation, input to the test planning in the Technology Development Strategy (TDS) that will later evolve into the Test and Evaluation Master Plan (TEMP), and early Test and Evaluation (T&E) planning. Special logistics problems, program objectives, program plans, performance parameters, and acquisition strategy are areas of primary influence to the operational tester during this phase and must be carefully evaluated to project the system's operational effectiveness and suitability.

12.3 SYSTEM DEVELOPMENT AND DEMONSTRATION (SDD)

After program initiation, combined DT/OT&E or an EOA may be conducted to support the evaluation of prototype development and a decision regarding a system's readiness to move into the development of the Engineering Development Model (EDM). In all cases, appropriate T&E must be conducted on a mature system configuration, i.e., the EDM, before the production decision, thereby providing data for identification of risk before more resources are committed. As appropriate, Low Rate Initial Production (LRIP) will follow this phase of testing to verify system-level performance capability and to provide insight into test resources needed to conduct future interoperability, live fire, or operational testing.



Figure 12-2. Sources of Data (mapped to EOA, OA, IOT&E, and FOT&E across the acquisition milestones from Milestone A through the FRPDR):
EOA: lab reports, test cell, models/simulations, engineer analysis, mock-ups, document revisions, historical data, DT&E.
OA: demonstrations, contractor data, government data, integration tests, quality tests, combined DT/OT, simulators, prior EOA.
IOT&E: EOA/OA, DT&E, demonstrations, models, simulation.
FOT&E: DT&E, OA, field data, exercises, war games.

12.3.1 Objectives of Operational Assessments

Operational Assessments (EOAs, OAs) are conducted to facilitate identification of the best design, indicate the risk level of performance for this phase of the development, examine operational aspects of the system's development, and estimate potential operational effectiveness and suitability. Additionally, an analysis of the planning for transition from development to production is initiated. EOAs supporting decision reviews are intended to:

(1) Assess the potential of the new system in relation to existing capabilities;

(2) Assess system effectiveness and suitability so that affordability can be evaluated for program cost versus military utility;

(3) Assess the adequacy of the concept for employment, supportability, and organization; doctrinal, tactical, and training requirements; and related critical issues;

(4) Estimate the need for the selected systems in consideration of the threat and system alternatives based on military utility;

(5) Assess the validity of the operational concept;

(6) List the key risk areas and COIs that need to be resolved before construction of EDMs is initiated;

(7) Assess the need during LRIP for long lead hardware to support Initial Operational Test and Evaluation (IOT&E) prior to the Full-Rate Production (FRP) decision;

(8) Provide data to support test planning for this phase.

During this phase, OT&E may be conducted on brassboard configurations, experimental prototypes, or advanced development prototypes. Dedicated test time may be made available for the operational tester. However, the OT&E assessments may also make use of many other additional data sources. Examples of additional sources often used by the Army during this phase include: Concept Experimentation Program (CEP) tests, innovative testing, Force Development Tests and Experimentation (FDT&E), source selection tests, user participation in Development Test and Evaluation (DT&E), and operational feasibility tests. The results from this testing, analysis, and evaluation are documented in an Operational Assessment (EOA) or end-of-phase OT&E report. These data, along with the mission needs and requirements documentation and TEMP, assist in the review of performance for the next decision review.

The OAs during the system demonstration are conducted on EDMs. These operational evaluations estimate the operational effectiveness and suitability and provide data on whether the system meets minimum operational thresholds.

12.4 PRODUCTION AND DEPLOYMENT (P&D)

Just before the FRP decision, the dedicated T&E is conducted on equipment that has been formally certified by the Program Manager (PM) as being ready for the "final OT&E." This dedicated IOT&E is conducted in a test environment as operationally realistic as possible.

12.4.1 OT&E Objectives

The IOT&E conducted is characterized by testing performed by user organizations in a field exercise to examine the organization and doctrine, Integrated Logistics Support (ILS), threat, communications, Command and Control (C2), and tactics associated with the operational employment of the unit during tactical operations. This includes estimates that:

(1) Assess operational effectiveness and suitability;

(2) Assess the survivability of the system;

(3) Assess the system's reliability, maintainability, and plans for ILS;

(4) Evaluate manpower, personnel, training, and safety requirements;

(5) Validate organizational and employment concepts;

(6) Determine training and logistics requirements deficiencies;

(7) Assess the system's readiness to enter FRP.

12.5 OPERATIONS AND SUPPORT (O&S)

After the FRP decision and deployment, the emphasis shifts towards procuring production quantities, repairing hardware deficiencies, managing changes, and phasing in full logistics support. During initial deployment of the system, the OT&E agency and/or the user may perform Follow-on Operational Test and Evaluation (FOT&E) to refine the effectiveness and suitability estimates made during earlier OT&E, assess performance not evaluated during IOT&E, evaluate new tactics and doctrine, and assess the impacts of system modifications or upgrades.

The FOT&E is performed with production articles in operational organizations. It is normally funded with Operations and Maintenance (O&M) funds. The first FOT&E conducted during this phase may be used to:

(1) Ensure that the production system performs as well as reported at the Milestone III review;

(2) Demonstrate expected performance and reliability improvements;

(3) Ensure that the corrections of deficiencies identified during earlier testing have been completed;

(4) Evaluate performance not tested during IOT&E.

Additional objectives of FOT&E are to validate the operational effectiveness and suitability of a modified system during an OA of the system in new environments. The FOT&E may look at different platform applications, new tactical applications, or the impact of new threats.


12.5.1 FOT&E of Logistics Support Systems

The testing objectives to evaluate post-production logistics readiness and support are to:

(1) Assess the logistics readiness and sustainability;

(2) Evaluate the weapon support objectives;

(3) Assess the implementation of logistics support planning;

(4) Evaluate the capability of the logistics support activities;

(5) Determine the disposition of displaced equipment;

(6) Evaluate the affordability and LCC of the system.

12.6 SUMMARY

The OT&E is that T&E (OAs, IOT&E, or FOT&E) conducted to provide feedback on system design and its potential to be operationally effective and operationally suitable. OT&E events in each phase of development will identify needed modifications; provide information on tactics, doctrine, organizations, and personnel requirements; and evaluate the system's logistics supportability. The acquisition program structure should include feedback from OAs or evaluations at critical decision points/technical reviews beginning early in the development cycle and continuing throughout the system's life cycle.


MODULE IV

TEST AND EVALUATION PLANNING

Many program managers face several T&E issues that must be resolved to get their particular weapon system tested and ultimately fielded. These issues may include modeling and simulation support, combined and concurrent testing, test resources, survivability and lethality testing, multi-Service testing, or international T&E. Each issue presents a unique set of challenges for program managers when they develop the integrated strategy for the T&E program.


13
EVALUATION

13.1 INTRODUCTION

This chapter describes the evaluation portion of the Test and Evaluation (T&E) process. It stresses the importance of establishing and maintaining a clear audit trail from system requirements through critical issues, evaluation criteria, test objectives, and Measures of Effectiveness (MOEs) to the evaluation. The importance of the use of data from all sources is discussed, as are the differences in approaches to evaluating technical and operational data.

13.2 DIFFERENCE BETWEEN "TEST" AND "EVALUATION"

The following distinction has been made between the functions of "test" and "evaluation":

While the terms "test" and "evaluation" are most often found together, they actually denote clearly distinguishable functions in the RDT&E [Research, Development, Test and Evaluation] process.

"Test" denotes any "program or procedure that is designed to obtain, verify, or provide data for the evaluation of any of the following: 1) progress in accomplishing developmental objectives; 2) the performance, operational capability, and suitability of systems, subsystems, components, and equipment items; and 3) the vulnerability and lethality of systems, subsystems, components, and equipment items."1

"Evaluation" denotes the process whereby data are logically assembled, analyzed, and compared to expected performance to aid in making systematic decisions.

To summarize, evaluation is the process for review and analysis of qualitative or quantitative data obtained from design review, hardware inspection, Modeling and Simulation (M&S), testing, or operational usage of equipment.

13.3 THE EVALUATION PROCESS

The evaluation process requires a broad analytical approach with careful focus on the development of an overall T&E plan that will provide timely answers to critical issues and questions required by decision authorities throughout all the acquisition phases (Table 13-1). Evaluations should focus on Key Performance Parameters (KPPs); i.e., those "minimum attributes or characteristics considered most essential for an effective military capability." An attribute is a testable or measurable characteristic that describes an aspect of a system or capability.2

A functional block diagram of a generic (i.e., not Service-specific) evaluation process is shown in Figure 13-1. The Army's Independent Evaluation Plan (IEP) is written very early and applies to the system and not a specific test event, so it is not as detailed as the format provided in Table 13-1. The process begins with the identification of a deficiency or need and the documentation of an operational requirement. It continues with the identification of critical issues that must be addressed to determine the degree to which the system meets user requirements. Objectives and thresholds must then be established to define required performance or supportability parameters and to evaluate progress in reaching them. T&E analysts then decompose the issues into measurable test elements, conduct the necessary testing, review and analyze the test data, weigh the test results against the evaluation criteria, and prepare an evaluation report for the decision authorities.


Table 13-1. Sample Evaluation Plan

CHAPTER 1  INTRODUCTION
1.1  PURPOSE
1.2  SCOPE
1.3  BACKGROUND
1.4  SYSTEM DESCRIPTION
1.5  CRITICAL OPERATIONAL ISSUES AND CRITERIA (COIC)
1.6  PROJECTED THREAT
1.7  TEST AND EVALUATION MILESTONES

CHAPTER 2  EVALUATION STRATEGY
2.1  EVALUATION CONCEPT
2.2  OPERATIONAL EFFECTIVENESS
     2.2.1  ISSUE 1
            2.2.1.1  SCOPE
            2.2.1.2  CRITERIA
            2.2.1.3  RATIONALE
            2.2.1.4  EVALUATION APPROACH
            2.2.1.5  ANALYSIS OF MOPS AND DATA PRESENTATIONS
                     2.2.1.5.1  MOP 1
                     THROUGH
                     2.2.1.5.1.X  MOP x
     2.2.2  ISSUE 2
     THROUGH
     2.2.m  ISSUE n
2.3  OPERATIONAL SUITABILITY
     2.3.1  ISSUE n+1
     THROUGH
     2.3.n  ISSUE n+x
2.4  DATA SOURCE MATRIX
2.5  DESCRIPTION OF OTHER PRIMARY DATA SOURCES
2.6  TEST APPROACH
     2.6.1  TEST SCOPE
     2.6.2  FACTORS AND CONDITIONS
     2.6.3  SAMPLE SIZE AND OTHER TEST DESIGN CONSIDERATIONS
     2.6.4  DATA AUTHENTICATION GROUP (DAG)
2.7  EVALUATION DATABASE STRUCTURE
     2.7.1  IDENTIFICATION OF REQUIRED FILES
     2.7.2  DESCRIPTION OF FILE RELATIONSHIPS
     2.7.3  DATA ELEMENTS DEFINITIONS

APPENDICES:
APPENDIX A  IOT&E RESOURCE PLAN
APPENDIX B  PATTERN OF ANALYSIS
APPENDIX C  CONTROL CONCEPT
APPENDIX D  DATA COLLECTION CONCEPT
APPENDIX E  DATA REDUCTION CONCEPT
APPENDIX F  QUALITY CONTROL CONCEPT
APPENDIX G  DAG CHARTER AND SOP
APPENDIX H  TRAINING CONCEPT
APPENDIX I  TEST ENVIRONMENTAL ASSESSMENT AND ENVIRONMENTAL IMPACT STATEMENT
APPENDIX J  STATUS OF SUPPORT DOCUMENTS
APPENDIX K  SYSTEM DESCRIPTION
APPENDIX L  SCENARIO
APPENDIX M  INSTRUMENTATION
APPENDIX N  BASELINE CORRELATION MATRIX
APPENDIX O  STRAWMAN INDEPENDENT EVALUATION REPORT
APPENDIX P  GLOSSARY
APPENDIX Q  ABBREVIATIONS

SOURCE: OT&E Methodology Guide, Department of the Army (DA) Pamphlet 71-3.


Figure 13-1. Functional Block Diagram of the Evaluation Process (identify deficiency/need; document requirement; formulate critical issues; develop evaluation criteria; identify data requirements; select Measures of Performance; select Measures of Effectiveness; conduct test; analyze test results; compare test results with evaluation criteria; prepare evaluation report)

13.4 ISSUES AND CRITERIA

Issues are questions regarding a system that require answers during the acquisition process. Those answers may be needed to aid in the development of an acquisition strategy, to refine performance requirements and designs, or to support Milestone Decision Reviews (MDRs). Evaluation criteria are the standards by which accomplishments of required technical and operational effectiveness and/or suitability characteristics or resolution of operational issues may be assessed. The evaluation program may be constructed using a structured approach identifying each issue:

(1) Issue: a statement of the question to be answered;

(2) Scope: detailed conditions and range of conditions that will guide the T&E process for this issue;

(3) Criteria: quantitative or qualitative standards that will answer the issue;

(4) Rationale: full justification to support the selected criteria.

13.4.1 Key Performance Parameters (KPPs)/Critical Issues

The KPPs often can support the development of a hierarchy of critical issues and less significant issues. Critical issues are those questions relating to a system's operational, technical, support, or other capability. These issues must be answered before the system's overall worth can be estimated/evaluated, and they are of primary importance to the decision authority in allowing the system to advance to the next acquisition phase. Critical issues in the Test and Evaluation Master Plan (TEMP) may be derived from the KPPs found in the capability requirements documents—Capability Development Document (CDD) and Capability Production Document (CPD). The system requirements and baseline documentation will provide many of the performance parameters required to develop the hierarchy of issues.

13.4.2 Evaluation Issues

Evaluation issues are those addressed in technical or operational evaluations during the acquisition process. Evaluation issues can be separated into technical or operational issues and addressed in the TEMP.

Technical issues primarily concern technical parameters or characteristics and engineering specifications normally assessed in development testing. Operational issues concern effectiveness and suitability characteristics for functions to be performed by equipment/personnel. They address the system's operational performance when examined in a realistic operational mission environment. Evaluation issues are answered by whatever means necessary (analysis/survey, modeling, simulation, inspection, demonstration, or testing) to resolve the issue. Issues requiring test data are further referred to as test issues.

13.4.3 Test Issues

Test issues are a subset of evaluation issues and address areas of uncertainty that require test data to resolve the issue adequately. Test issues may be partitioned into technical issues that are addressed by the Development Test and Evaluation (DT&E) community (contractors and government) and operational issues that are addressed by the Operational Test and Evaluation (OT&E) community. Test issues may be divided into critical and noncritical categories. All critical T&E issues, objectives, methodologies, and evaluation criteria should be defined during the initial establishment of an acquisition program. Critical issues are documented in the TEMP. These evaluation issues serve to define the testing required for each phase of the acquisition process and serve as the structure to guide the testing program so these data may be compared against performance criteria.

13.4.4 Criteria

Criteria are statements of a system's required technical performance and operational effectiveness, suitability, and supportability. Criteria are often expressed as "objectives and thresholds." (Some Services, however, specify performance and supportability requirements exclusively in terms of thresholds and avoid the use of the concept of objectives.) These performance measurements provide the basis for collecting data used to evaluate/answer test issues.

Criteria must be unambiguous and assessable, whether stated qualitatively or quantitatively. They may compare the mission performance of the new system to the one being replaced, compare the new system to a predetermined standard, or compare mission performance results using the new system to not having the system. Criteria are the final values deemed necessary by the user. As such, they should be developed in close coordination with the system user, other testers, and specialists in all other areas of operational effectiveness and suitability. These values may be changed as systems develop and associated testing and evaluation proceed. Every issue should have at least one criterion that is a concise measure of the function. Values must be realistic and achievable within the state of the art of engineering technology. A quantitative or qualitative criterion should have a clear definition, free of ambiguous or imprecise terminology, such as "adequate," "sufficient," or "acceptable."

13.4.4.1 Test of Thresholds and Objectives

A threshold for a performance parameter lists a minimum acceptable operational value below which the utility of the system becomes questionable.3 Thresholds are stated quantitatively whenever possible. Specification of minimally acceptable performance in measurable parameters is essential to selecting appropriate MOEs, which, in turn, heavily influence test design. Thresholds are of value only when they are testable; i.e., actual performance can be measured against them. The function of T&E is to verify the attainment of required thresholds. The Program Manager (PM) must budget for the attainment of threshold values.

Objectives are "the desired operational goal associated with a performance attribute, beyond which any gain in utility does not warrant additional expenditure. The objective value is an operationally significant increment above the threshold. An objective value may be the same as the threshold when an operationally significant increment above the threshold is not significant or useful."4 Objectives are not normally addressed by the operational tester, whose primary concern is to "determine if thresholds in the approved CPD and critical operational issues have been satisfied."5

Going into system demonstration, thresholds and objectives are expanded along with the identification of more detailed and refined performance capabilities and characteristics resulting from trade-off studies and testing conducted during the evaluation of Engineering Development Models (EDMs). Along with the CPD, they should remain relatively stable through production.

13.5 MEASURES OF EFFECTIVENESS (MOEs)

Requirements, thresholds, and objectives established in early program documentation form the basis for evaluation criteria. If program documentation is incomplete, the tester may have to develop evaluation criteria in the absence of specific requirements. Evaluation criteria are associated with objectives, sub-objectives, and MOEs (sometimes partitioned into MOEs and measures of suitability). For example, an MOE (airspeed) may have an associated evaluation criterion (450 knots) against which the actual performance (425 knots) is compared to arrive at a rating. An MOE of a system is a parameter that evaluates the capacity of the system to accomplish its assigned missions under a given set of conditions. MOEs are important because they determine how test results will be judged; and, since test planning is directed toward obtaining these measures, it is important that they be defined early. Generally, the resolution of each critical issue is in terms of the evaluation of some MOE. In this case, the operating, implementing, and supporting commands must agree with the criteria before the test organization makes use of them in assessing test results. Ensuring that MOEs can be related to the user's operational requirements is an important consideration when identifying and establishing evaluation criteria. Testers must ensure that evaluation criteria and MOEs are updated if requirements change. The MOEs should be so specific that the system's effectiveness during developmental and operational testing can be assessed using some of the same effectiveness criteria as the Analysis of Alternatives [AoAs].6
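To make the airspeed example above concrete, the following sketch (a hypothetical illustration in Python, not drawn from any Service evaluation tool) tabulates a few MOEs with their criteria and measured results and assigns a simple met/not-met rating. The airspeed values come from the example in the text; the other rows and all names are invented for illustration.

    # Hypothetical illustration: rate measured MOE results against evaluation
    # criteria. The airspeed figures (criterion 450 knots, measured 425 knots)
    # come from the example in Section 13.5; the other rows are invented.
    moes = [
        # (MOE, criterion, measured, units, higher_is_better)
        ("Cruise airspeed",         450.0, 425.0, "knots", True),
        ("Mean detection range",     20.0,  23.5, "nmi",   True),
        ("False targets per scan",    0.5,   0.4, "count", False),
    ]

    def rate(criterion, measured, higher_is_better):
        """Return a simple met/not-met rating for one MOE."""
        met = measured >= criterion if higher_is_better else measured <= criterion
        return "MET" if met else "NOT MET"

    for name, criterion, measured, units, higher in moes:
        print(f"{name:24s} criterion {criterion:6.1f} {units:5s} "
              f"measured {measured:6.1f} -> {rate(criterion, measured, higher)}")

In practice the rating scheme, the direction of "better," and the treatment of qualitative criteria would follow the evaluation plan for the specific program.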

13.6 EVALUATION PLANNING

13.6.1 Evaluation Planning Techniques

Evaluation planning is an iterative process that requires formal and informal analyses of system operation (e.g., threat environment, system design, tactics, and interoperability). Techniques that have been proven effective in evaluation planning include: process analysis, design or engineering analysis, matrix analysis, and dendritic analysis.7

13.6.1.1 Process Analysis Techniques

Process analysis techniques consist of thinking through how the system will be used in a variety of environments, threats, missions, and scenarios in order to understand the events, actions, situations, and results that are expected to occur. This technique aids in the identification and clarification of appropriate MOEs, test conditions, and data requirements.


13.6.1.2 Design/Engineering Analysis Techniques

Design or engineering analysis techniques are used to examine all mechanical or functional operations that the system has been designed to perform. These techniques involve a systematic exploration of the system's hardware and software components, purpose, performance bounds, manpower and personnel considerations, known problem areas, and impact on other components. Exploration of the way a system operates compared to intended performance functions often identifies issues, MOEs, specific data, test events, and required instrumentation.

13.6.1.3 Matrix Analysis Techniques

Matrix analysis techniques are useful for analyzing any situation where two classifications must be cross-referenced. For example, a matrix of "types of data" versus "means of data collection" can reveal not only types of data having no planned means of collection but also redundant or backup collection systems. Matrix techniques are useful as checklists, as organizational tools, or as a way of identifying and characterizing problem areas. Matrix techniques are effective for tracing a system's operational requirements through contractual specification documents, issues, and criteria to sources of individual data or specific test events.
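As a minimal sketch of the matrix technique, assume a handful of hypothetical data types and collection means (none of these names come from the guide); the cross-reference below flags any data type with no planned means of collection and shows where backup coverage exists.

    # Hypothetical sketch of a "types of data" versus "means of data collection"
    # matrix. An empty row reveals a data type with no planned collection means;
    # multiple entries in a row show redundant or backup coverage.
    data_types = ["Detection range", "Operator comments", "Track accuracy", "MTBF"]
    collection_means = ["Range instrumentation", "Data collectors", "Built-in test logs"]

    # Planned coverage as (data type, collection means) pairs -- illustrative only.
    planned = {
        ("Detection range", "Range instrumentation"),
        ("Operator comments", "Data collectors"),
        ("MTBF", "Built-in test logs"),
        ("MTBF", "Data collectors"),          # backup source
    }

    for dtype in data_types:
        sources = [m for m in collection_means if (dtype, m) in planned]
        if sources:
            print(f"{dtype}: {', '.join(sources)}")
        else:
            print(f"{dtype}: NO PLANNED MEANS OF COLLECTION")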

13.6.1.4 Dendritic Analysis Techniques

Dendritic analysis techniques are an effective way of decomposing critical issues to the point where actual data requirements and test measurements can be identified. In these techniques, issues are successively broken down into objectives, MOEs, Measures of Performance (MOPs), and data requirements in a rootlike structure as depicted in Figure 13-2. In this approach, objectives are used to clearly express the broad aspects of T&E related to the critical issues and the overall purpose of the test. The MOEs are developed as subsets of the objectives and are designed to treat specific and addressable parts of the objectives. Each MOE is traceable as a direct contributor to one objective and, through it, is identifiable as a direct contributor to addressing one or more critical issues.8 Each test objective and MOE is also linked to one or more MOPs (quantitative or qualitative measures of system performance under specified conditions) that, in turn, are tied to specific data elements. The dendritic approach has become a standard military planning technique.
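The decomposition lends itself to a simple tree structure. The sketch below (Python, illustrative only) captures the candidate radar "X" example from Figure 13-2 as nested records and walks the tree to list the data elements behind each MOP; the field names are hypothetical, not a prescribed format.

    # Illustrative dendritic decomposition of the candidate radar "X" example
    # from Figure 13-2: critical issue -> test objective -> MOE -> MOP -> data
    # elements. The dictionary keys are invented for this sketch.
    dendrite = {
        "issue": 'Capability of candidate radar "X"',
        "objectives": [
            {"objective": "Determine detection capability",
             "moes": [
                 {"moe": "Mean/median detection range",
                  "mops": ["Mean and variance of detection range"],
                  "data": ["Measured/observed detection range"]},
                 {"moe": "Ease of operation of controls/displays",
                  "mops": ["% time operators judged controls/displays satisfactory"],
                  "data": ["Operator comments"]},
                 {"moe": "False target rate",
                  "mops": ["Mean and variance of false targets per scan"],
                  "data": ["Observation of scans", "Observation of false alarms"]},
             ]},
            {"objective": "Determine tracking capability", "moes": []},
        ],
    }

    # Walk the tree to list every data element the test must produce.
    for objective in dendrite["objectives"]:
        for moe in objective["moes"]:
            for mop in moe["mops"]:
                for element in moe["data"]:
                    print(f'{objective["objective"]} | {moe["moe"]} | {mop} | {element}')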

13.6.2 Sources of Data

As evaluation and analysis planning matures, focus turns toward identifying data sources as a means for obtaining each data element. Initial identification tends to be generic, such as: engineering study, simulation, modeling, or contractor test. Later identification reflects specific studies, models, and/or tests. A data source matrix is a useful planning tool to show where data are expected to be obtained during the T&E of the system.

There are many sources of data that can contribute to the evaluation. Principal sources include: studies and analyses, models, simulations, war games, contractor testing, Development Test (DT), Operational Test (OT), and comparable systems.

13.7 EVALUATING DEVELOPMENT AND OPERATIONAL TESTS

Technical and operational evaluations employ different techniques and have different evaluation criteria. The DT&E is often considered technical evaluation, while OT&E addresses the operational aspects of a system. Technical evaluation deals primarily with instrumented tests and statistically valid data. An operational evaluation deals with operational realism and the combat uncertainties.9

The DT&E uses technical criteria for evaluating system performance. These criteria are usually parameters that can be measured during controlled DT&E tests. They are particularly important to the developing organization and the contractor but are of less interest to the independent operational tester. The operational tester focuses on issues such as demonstrating target acquisition at useful ranges, air superiority in combat, or the probability of accomplishing a given mission. For example, during DT&E, firing may be conducted on a round-by-round basis with each shot designed to test an individual specification or parameter with other parameters held constant. Such testing is designed to measure the technical performance of the system. In contrast, in OT&E proper technical performance regarding individual specifications/parameters is de-emphasized and the environment is less controlled. The OT&E authority must assess whether, given this technical performance, the weapon system is operationally effective and operationally suitable when employed under realistic combat (with opposing force) and environmental conditions by typical personnel.

Figure 13-2. Dendritic Approach to Test and Evaluation (Example: the critical issue of the capability of candidate radar "X" branches into test objectives such as determining detection and tracking capability; detection capability branches into MOEs for mean/median detection range, ease of operation of controls/displays, and false target rates; these branch into MOPs such as mean and variance of detection range, percent of time operators judged the controls and displays satisfactory, and mean and variance of false targets per scan; the MOPs are tied to data elements such as measured/observed detection range, operator comments, observation of scans, and observation of false alarms.)

The emphasis in DT is strictly on the use of quantitative data to verify attainment of technical specifications. Quantitative data are usually analyzed using some form of statistics. Qualitative data take on increasing importance in OT&E when effectiveness and suitability issues are being explored. Many techniques are used to analyze qualitative data. They range from converting expressions of preference or opinion into numerical values to establishing a consensus by committee. For example, a committee may assign values to parameters such as "feel," "ease of use," "friendliness to the user," and "will the user want to use it," on a scale of 1 to 10. Care should be exercised in the interpretation of the results of qualitative evaluations. For instance, when numbers are assigned to average evaluations and their standard deviations, meanings will differ from quantitative data averages and standard deviations.
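A minimal sketch of the numerical-conversion technique, assuming a hypothetical panel of five operators rating "ease of use" on the 1-to-10 scale mentioned above (the values are invented):

    # Hypothetical sketch: five notional operators rate "ease of use" on a
    # 1-to-10 scale; the scores are then summarized numerically.
    from statistics import mean, stdev

    ratings = [7, 8, 6, 9, 7]   # illustrative values only

    print(f"mean rating = {mean(ratings):.1f}, standard deviation = {stdev(ratings):.1f}")

    # Caution, per the text: these are ordinal opinion scores, so their mean and
    # standard deviation do not carry the same meaning as the mean and standard
    # deviation of measured quantitative data (e.g., detection ranges).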

13.7.1 Technical Evaluation

The Services' materiel development organizations are usually responsible for oversight of all aspects of DT&E, including the technical evaluation. The objectives of a technical evaluation are:

• To assist the developers by providing information relative to technical performance; qualification of components; compatibility, interoperability, vulnerability, lethality, transportability, Reliability, Availability, and Maintainability (RAM); manpower and personnel; system safety; Integrated Logistics Support (ILS); correction of deficiencies; accuracy of environmental documentation; and refinement of requirements;

• To ensure the effectiveness of the manufacturing process of equipment and procedures through production qualification T&E;

• To confirm readiness for OT by ensuring that the system is stressed beyond the levels expected in the OT environment;

• To provide information to the decision authority at each decision point regarding a system's technical performance and readiness to proceed to the next phase of acquisition;

• To determine the system's operability in the required climatic and realistic battlefield environments to include natural, induced, and countermeasure environments.10

13.7.2 Operational Evaluation

The independent OT&E authority is responsible for the operational evaluation. The objectives of an operational evaluation are:

• To assist the developers by providing information relative to operational performance; doctrine, tactics, training, and logistics; safety; survivability; manpower; technical publications; RAM; correction of deficiencies; accuracy of environmental documentation; and refinement of requirements;

• To assist decision makers in ensuring that only systems that are operationally effective and suitable are delivered to the operating forces;

• To provide information to the decision authority at each decision point as to a system's operational effectiveness, suitability, and readiness to proceed to the next phase of acquisition;

• To assess, from the user's viewpoint, a system's desirability, considering systems already fielded, and the benefits or burdens associated with the system.11

13.8 SUMMARY

A primary consideration in identifying information to be generated by an evaluation program is having a clear understanding of the decisions the information will support. The importance of structuring the T&E program to support the resolution of critical issues cannot be overemphasized. It is the responsibility of those involved in the evaluation process to ensure that the proper focus is maintained on key issues, the T&E program yields information on critical technical and operational issues, all data sources necessary for a thorough evaluation are tapped, and evaluation results are communicated in an effective and timely manner. The evaluation process should be evolutionary throughout the acquisition phases.


ENDNOTES

1. Defense Acquisition University Press, Glossary of Defense Acquisition Acronyms & Terms, 11th Edition, September 2003.

2. Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 3170.01D, Joint Capabilities Integration and Development System (JCIDS), March 12, 2004, GL (glossary)-9.

3. Ibid.

4. Ibid., GL-11.

5. Department of Defense Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003, p. 37.

6. Ibid.

7. Department of the Army (DA) Pamphlet 73-1, Test and Evaluation in Support of Systems Acquisition, May 30, 2003.

8. Air Force Regulation (AFR) 80-38, Management of the Air Force Systems Survivability Programs, May 27, 1994.

9. NAVSO-P-2457, RDT&E/Acquisition Management Guide, 10th Edition.

10. Army Regulation (AR) 73-1, Test and Evaluation Policy, January 7, 2002.

11. AFR 800-2, Acquisition Program Management, July 21, 1994.


14
MODELING AND SIMULATION SUPPORT TO T&E

14.1 INTRODUCTION

This chapter discusses the applications of Modeling and Simulation (M&S) in Test and Evaluation (T&E). The need for M&S has long been recognized, as evidenced by this quotation from the USAF Scientific Advisory Board in June 1965:

Prediction of combat effectiveness can only be, and therefore must be, made by using the test data in analytical procedures. This analysis usually involves some type of model, simulation, or game (i.e., the tools of operations or research analysis). It is the exception and rarely, that the "end result," i.e., combat effectiveness, can be deduced directly from test measurements.

In mandating T&E early in the acquisition process, Department of Defense Instruction (DoDI) 5000.2 encourages the use of M&S as a source of T&E data.1 For instance, the Armored Family of Vehicles (AFV) program used more than 60 models, simulations, and other test data to support system concept exploration. The reliance on M&S by this and other acquisition programs provides the T&E community with valuable information that can increase confidence levels, decrease field test time and costs, and provide data for pre-test prediction and post-test validation. The Defense Modeling and Simulation Office (DMSO), working for the Director, Defense Research and Engineering (DDR&E), is developing Office of the Secretary of Defense (OSD) guidance on the application of M&S to the acquisition process. The DMSO operates the Modeling and Simulation Information Analysis Center (MSIAC) to provide assistance to all Department of Defense (DoD) M&S users, including program offices and the acquisition community at large.

This chapter discusses using M&S to increase the efficiency of the T&E process, reduce time and cost, provide otherwise unattainable and immeasurable data, and provide more timely and valid results.

14.2 TYPES OF MODELS AND SIMULATIONS

The term "modeling and simulation" is often associated with huge digital computer simulations; but it also includes manual and man-in-the-loop war games, Hardware-in-the-Loop (HWIL) simulations, test beds, hybrid laboratory simulators, and prototypes.

A mathematical model is an abstract representation of a system that provides a means of developing quantitative performance requirements from which candidate designs can be developed. Static models are those that depict conditions of state, while dynamic models depict conditions that vary with time, such as the action of an autopilot in controlling an aircraft. Simple dynamic models can be solved analytically, and the results represented graphically.
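As a minimal sketch of a simple dynamic model, assume a hypothetical first-order autopilot lag with time constant tau (the model form, values, and variable names are illustrative, not drawn from the guide); the numerical integration below can be checked against the closed-form analytic solution.

    # Minimal sketch of a simple dynamic model: a hypothetical first-order
    # autopilot lag, d(pitch)/dt = (command - pitch) / tau. Euler integration is
    # compared with the analytic solution pitch(t) = command * (1 - exp(-t/tau)).
    import math

    tau = 2.0        # notional time constant, seconds
    command = 5.0    # commanded pitch angle, degrees
    dt = 0.01        # integration step, seconds

    pitch = 0.0
    for step in range(1, int(10.0 / dt) + 1):    # simulate 10 seconds
        pitch += dt * (command - pitch) / tau
        if step in (100, 500, 1000):             # report at t = 1, 5, 10 s
            t = step * dt
            analytic = command * (1.0 - math.exp(-t / tau))
            print(f"t = {t:4.1f} s  numeric = {pitch:.3f} deg  analytic = {analytic:.3f} deg")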

According to a former Director, Defense Test and Evaluation (DDT&E),2 simulations used in T&E can be divided into three categories:

Constructive Simulations: Computer simulations are strictly mathematical representations of systems and do not employ any actual hardware. They may, however, incorporate some of the actual software that might be used in a system. Early in a system's life cycle, computer simulations can be expected to provide the most system evaluation information. In many cases, computer simulations can be readily developed as modifications of existing simulations for similar systems. For example, successive generations of AIM-7 missile simulations have been effectively used in T&E.

Virtual Simulations: A system test bed usually differs from a computer simulation in that it contains some, but not necessarily all, of the actual hardware that will be a part of the system. Other elements of the system are either not incorporated or, if they are incorporated, are in the form of computer simulations. The system operating environment (including threat) may either be physically simulated, as in the case of a flying test bed, or computer simulated, as in the case of a laboratory test bed. Aircraft cockpit simulators used to evaluate pilot performance are good examples of system test beds. As development of a system progresses, more subsystems become available in hardware form. These subsystems can be incorporated into system test beds that typically provide a great deal of the system evaluation information used during the middle part of a system's development cycle.

Another type of virtual simulation used in T&E is the system prototype. Unlike the system test bed, all subsystems are physically incorporated in a system prototype. The system prototype may closely represent the final system configuration, depending on the state of development of the various subsystems that compose it. Preproduction prototype missiles and aircraft used in operational testing by the Services are examples of this class of simulation. As system development proceeds, eventually all subsystems will become available for incorporation in one or more system prototypes. HWIL simulators or full-up man-in-the-loop system simulators may provide the foundation for continuous system testing and improvement. These simulators can provide the basis for transitioning hardware and software into operationally realistic training devices with mission rehearsal capabilities. Operational testing of these prototypes frequently provides much of the system evaluation information needed for a decision on full-scale production and deployment.

Live Simulations: Some say that everything except global combat is a simulation, even limited regional engagements. Live exercises where troops use equipment under actual environmental conditions approach real life in combat while conducting peacetime operations. Training exercises and other live simulations provide a testing ground with real data on actual hardware, software, and human performance when subjected to stressful conditions. These data can be used to validate the models and simulations used in an acquisition program.

As illustrated in Figure 14-1, there is a continuous spectrum of simulation types, with the pure computer simulation at one end and the pure hardware prototype at the other. As reliance on live forces in exercises decreases, so does the ability to judge how representative the exercise is. At the other end of the spectrum, as reliance on virtual simulations (software-generated activities) increases, the models' internal complexity increasingly affects the accuracy of the results. Examples of uncertainty would be determining why a battle was lost in a war game simulation, why a system failed to perform as expected in a prototype or full-system simulation, or why, in a flight simulator, the aircraft seemed to become inverted when crossing the equator. Determining accuracy is a function of the technical level of confidence, or credibility, of the simulation.

Figure 14-1. The Simulation Spectrum. (Operational realism increases from computer simulations through test-beds to prototypes; more software and more complex forces bring less credibility, while more hardware and fewer forces bring greater credibility.)

14.3 VALIDITY OF MODELING AND SIMULATION

Simulations are not a substitute for live testing. There are many things that cannot be adequately simulated by computer programs, including the decision process and the proficiency of personnel in the performance of their functions. Therefore, models and simulations are not a total substitute for physical T&E. Simulations, manual and computer-designed, can complement and increase the validity of live T&E through proper selection and application. Figure 14-2 contrasts the test criteria that are conducive to M&S versus physical testing. Careful selection of the simulation, knowledge of its application and operation, and meticulous selection of input data will produce representative and valid results.

Conversely, certain highly complex simulations can be more effective analytical tools than limited live testing, generally due to the significance of external constraints. For example, nuclear weapons effects testing can only be done in simulation because of international treaties. Testing depleted uranium ammunition is very expensive, with its related environmental considerations. Finally, many confidence-building iterations of multimillion-dollar missile defense tests can only be executed using simulated events in conjunction with a few instances of real firings.

The important element in using a simulation is to select one that is representative and either addresses, or is capable of being modified to address, the level of detail (issues, thresholds, and objectives) under investigation. Models and simulations must be approved for use through verification, validation, and accreditation processes.3 Verification is the process of determining that a model implementation accurately represents the developer's conceptual description and specifications. Validation is the process of determining: (a) the manner and degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model; and (b) the confidence that should be placed on this assessment. Accreditation is the official certification that a model or simulation is acceptable for use for a specific purpose.
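As an illustration of how a test organization might track these three approvals before relying on a model for a given purpose, the sketch below uses a hypothetical record structure; the class, field names, and example model are invented for this guide-level illustration and are not drawn from any DoD tool.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """Hypothetical VV&A status record for a model or simulation."""
    name: str
    verified: bool = False    # implementation matches the conceptual description/specs
    validated: bool = False   # adequate representation of the real world for intended uses
    accredited_for: set = field(default_factory=set)  # purposes approved by the accreditor

def approved_for_use(model: ModelRecord, purpose: str) -> bool:
    # A model should support a T&E purpose only after verification, validation,
    # and accreditation for that specific purpose.
    return model.verified and model.validated and purpose in model.accredited_for

flyout_sim = ModelRecord("missile_flyout_sim", verified=True, validated=True,
                         accredited_for={"pre-test prediction"})
print(approved_for_use(flyout_sim, "pre-test prediction"))        # True
print(approved_for_use(flyout_sim, "operational effectiveness"))  # False
```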

14.4 SUPPORT TO TEST DESIGN AND PLANNING

14.4.1 Modeling and Simulation in T&E Planning

M&S can assist in the T&E planning process and can reduce the cost of testing. As shown in Figure 14-3, areas of particular application include scenario development and the timing of test events; the development of objectives, essential elements of analysis, and MOEs; the identification of variables for control and measurement; and the development of data collection, instrumentation, and data analysis plans. For example, using simulation, the test designer can examine system sensitivities to changes in variables to determine the critical variables and their ranges of values to be tested. The test designer can also predict the effects of various assumptions and constraints and evaluate candidate MOEs to help formulate the test design. In the live fire vulnerability testing of the F/A-22 wing panel, a mechanical deformation model was used extensively in pre-test predictions. The pre-test analysis was used to design the experiment, assisted in shot-line selection, and allowed elimination of the aft boom testing, saving schedule and about $100,000. Real data in post-test analysis verified the capability to predict impulse damage within acceptable limits, resulting in greater use of the model in future applications.
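The sketch below illustrates the kind of sensitivity screen described above; the placeholder performance model, variable names, and ranges are invented for this example and do not represent any real system.

```python
def notional_moe(speed, altitude, ecm_level):
    """Placeholder performance model standing in for a validated simulation."""
    return 0.9 + 0.0005 * speed - 0.00001 * altitude - 0.002 * ecm_level

# Candidate ranges for each test variable (assumed values, for illustration only).
ranges = {
    "speed":     (300.0, 600.0),     # knots
    "altitude":  (1000.0, 30000.0),  # feet
    "ecm_level": (0.0, 100.0),       # arbitrary jamming intensity
}
baseline = {"speed": 450.0, "altitude": 15000.0, "ecm_level": 50.0}

# One-at-a-time screen: vary each input across its range while holding the
# others at baseline, and record the swing it produces in the MOE.
effects = {}
for var, (lo, hi) in ranges.items():
    outputs = [notional_moe(**dict(baseline, **{var: value})) for value in (lo, hi)]
    effects[var] = max(outputs) - min(outputs)

# Variables with the largest swing are candidates for tight control and dedicated
# test points; the rest may be fixed or sampled coarsely.
for var, swing in sorted(effects.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{var:10s} MOE swing = {swing:.3f}")
```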

Caution must be exercised when planning to rely on simulations to obtain test data, as they tend to be expensive to develop or modify, difficult to integrate with data from other sources, and often do not provide the level of realism required for operational tests.

Figure 14-2. Values of Selected Criteria Conducive to Modeling and Simulation

CRITERIA | CONDUCIVE TO PHYSICAL TESTING | CONDUCIVE TO MODELING AND SIMULATION
TEST SAMPLE SIZE/NUMBER OF VARIABLES | SMALL/FEW | LARGE/MANY
STATUS OF VARIABLES/UNKNOWNS | CONTROLLABLE | UNCONTROLLABLE
PHYSICAL SIZE OF PROBLEM | SMALL AREA/FEW PLAYERS | LARGE AREA/MANY PLAYERS
AVAILABILITY OF TEST EQUIPMENT | AVAILABLE | UNAVAILABLE
AVAILABILITY OF TEST FACILITIES | RANGES, OTHER TEST FACILITIES AVAILABLE | BENCHMARKED, VALIDATED COMPUTER MODELS AVAILABLE
TYPES OF VARIABLES/UNKNOWNS | SPATIAL/TERRAIN | LOW IMPORTANCE OF SPATIAL/TERRAIN
DIPLOMATIC/POLITICAL FACTORS | CONVENTIONAL CONFLICTS | NUCLEAR OR CHEMICAL CONFLICTS


Although simulations are not a "cure-all," they should be explored and used whenever feasible as another source of data for the evaluator to consider during the test evaluation. Simulation fidelity is improving rapidly with enhancements to computer technology, and it is no longer cost prohibitive to develop credible M&S applications for future weapon systems.

Computer simulations may be used to test the planning for an exercise. By setting up and running the test exercise in a simulation, the timing and scenario may be tested and validated. Critical events may include interaction of various forces that test the MOEs and, in turn, the test objectives. Further, the simulation may be used to verify the statistical test design and the instrumentation, data collection, and data analysis plans. Essentially, the purpose of computer simulation in pre-test planning is to preview the test to evaluate ways to make test results more effective. Pre-testing attempts to optimize test results by pointing out potential trouble spots. It constitutes a test setup analysis, which can encompass a multitude of areas. The model-test-model process is an integrated approach to using models and simulations in support of pre-test analysis and planning; conducting the actual test and collecting data; and performing post-test analysis of test results along with further validation of the models using the test data.
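A toy version of such a pre-test preview is sketched below; the trial count, the mapping of MOEs to scoreable-event probabilities, and the required data-point thresholds are all invented for this illustration.

```python
import random

random.seed(1)

# Assumed mapping of MOEs to the probability that a planned trial produces
# scoreable data for that MOE, plus the minimum data points each MOE needs.
planned_trials = 20
event_probability = {
    "detection_range": 0.9,
    "track_continuity": 0.6,
    "engagement_success": 0.4,
}
required_points = {
    "detection_range": 15,
    "track_continuity": 10,
    "engagement_success": 8,
}

def shortfall_risk(n_reps=1000):
    """Estimate, for each MOE, the chance the planned trials yield too few data points."""
    shortfalls = {moe: 0 for moe in event_probability}
    for _ in range(n_reps):
        for moe, p in event_probability.items():
            points = sum(random.random() < p for _ in range(planned_trials))
            if points < required_points[moe]:
                shortfalls[moe] += 1
    return {moe: count / n_reps for moe, count in shortfalls.items()}

# A high shortfall risk flags a scenario, trial count, or data collection plan
# that should be adjusted before committing range time.
for moe, risk in shortfall_risk().items():
    print(f"{moe:20s} risk of too few data points = {risk:.0%}")
```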

As an example of simulations used in test planning, consider a model that portrays aircraft versus air defenses.

Figure 14-3. Modeling and Simulation Application in Test and Evaluation. (The figure maps model classes, including campaign/theater, mission/battle, and engagement-level models, engineering models and virtual prototypes, physical models, threat simulators, stimulators, and distributed simulation, along with program activities and documents such as the Analysis of Alternatives, preliminary and updated TEMPs, identification of MOEs/MOPs and critical system characteristics, component, subsystem, and system DT&E, LFT&E, early operational assessments, IOT&E, FOT&E, PAT&E, modification testing, and the DOT&E and LFT reports, across the acquisition phases from Concept Refinement and Technology Development through System Development & Demonstration, Production & Deployment, and Operations & Support, keyed to Milestones A, B, and C and the FRPDR.)


The model can be used to replicate typical scenarios and provide data on the number of engagements, air defense systems involved, aircraft target, length and quality of the engagement, and a rough approximation of the success of the mission (i.e., whether the aircraft made it to the target). With such data available, a data collection plan can be developed to specify, in more detail, when and where data should be collected, from which systems, and in what quantity. The results of this analysis impact heavily on long lead-time items such as data collection devices and data processing systems. The more specificity available, the fewer the surprises that will occur downstream. As tactics are decided upon and typical flight paths are generated for the scenario, an analysis can be prepared on the flight paths over the terrain in question, and a determination can be made regarding whether the existing instrumentation can track the numbers of aircraft involved in their maneuvering envelopes. Alternative site arrangements can be examined, and tradeoffs can be made between the amount of equipment to be purchased and the types of profiles that can be tracked for this particular test.

Use of such a model can also highlight numerous choices available to the threat air defense system in terms of opportunities for engagement and practical applications of doctrine to the specific situations.
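To give a sense of the kind of output such a planning model produces, the sketch below runs a toy Monte Carlo of sorties against a small set of notional air defense sites; every site name, probability, and sortie count is invented for illustration and carries no threat or system meaning.

```python
import random

random.seed(7)

N_SORTIES = 50
SITES = ["SAM-A", "SAM-B", "AAA-C"]
P_ENGAGE = {"SAM-A": 0.5, "SAM-B": 0.3, "AAA-C": 0.7}  # chance each site engages a sortie
P_SURVIVE_ENGAGEMENT = 0.85                             # chance the aircraft presses on to target

engagements_per_site = {site: 0 for site in SITES}
sorties_reaching_target = 0

for _ in range(N_SORTIES):
    survived = True
    for site in SITES:
        if random.random() < P_ENGAGE[site]:
            engagements_per_site[site] += 1
            if random.random() > P_SURVIVE_ENGAGEMENT:
                survived = False
                break
    sorties_reaching_target += survived

# Expected engagement counts help size instrumentation time, telemetry bandwidth,
# and data reduction effort to plan for at each site.
print("Expected engagements per site:", engagements_per_site)
print("Rough mission success rate:", sorties_reaching_target / N_SORTIES)
```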

14.4.2 Simulation, Test and Evaluation Process (STEP)

The STEP has been proposed in DoD guidance of the recent past and is still a valid concept. In STEP, simulation and test are integrated, each depending on the other to be effective and efficient. Simulations provide predictions of the system's performance and effectiveness, while tests are part of a strategy to provide information regarding risk and risk mitigation, to provide empirical data to validate models and simulations, and to determine whether systems are operationally effective, suitable, and survivable for intended use. A by-product of this process is a set of models and simulations with a known degree of credibility, providing the potential for reuse in other efforts (Figure 14-4).

Figure 14-4. STEP Process. (A cycle running from mission capabilities through 1. system design, 2. design development, 3. testing, 4. recurring assessments, and 5. use through the life cycle to the fielded product, supported by system design tools, testing, and force-on-force simulation.)


STEP is driven by mission and system requirements. The product of STEP is information. The information supports acquisition program decisions regarding technical risk, performance, system maturity, operational effectiveness, suitability, and survivability. STEP applies to all acquisition programs, especially Major Defense Acquisition Programs (MDAPs) and Major Automated Information Systems (MAISs).

Throughout STEP, tests are conducted to collect data for evaluating the system and refining and validating models. Through the model-test-model iterative approach, the sets of models mature, culminating in accurate representations of the system with appropriate fidelity, which can be used to predict system performance and to support the acquisition and potentially the training communities.

1. STEP begins with the Mission Needs Analysis and continues through the life cycle. Top-level requirements are used to develop alternative concepts and select/develop digital models that are used to evaluate theater/campaign- and mission-/battle-level simulations. Mission-/battle-level models are used to evaluate the ability of a multiple-platform force package to perform a specific mission. Mission and functional requirements continue to be refined, and the system reaches the preliminary design stage.

2. M&S is used both as a predictive tool and with test in an iterative process to evaluate the system design. The consequences of design changes are evaluated and help translate the most promising design approach into a stable, interoperable, and cost-effective design.

3. System components and subsystems are tested in a laboratory environment. Data from this hardware are employed in the model-test-model process. M&S is used in the planning of tests to support a more efficient use of resources. Simulated tests can be run on virtual ranges to conduct rehearsals and determine if test limitations can be resolved. STEP tools are used to provide data for determining the real component or subsystem's performance and interaction with other components. M&S is used during both Development Testing (DT) and Operational Testing (OT) to increase the amount of data and supplement the live test events that are needed to meet test objectives.

4. Periodically throughout the acquisition process, the current version of the system under development should be reexamined in a synthetic operational context to reassess its military worth. This is one of the significant aspects of STEP: understanding the answer to the question, What difference does this change make in the system's performance?

5. STEP does not end with fielding and deployment of a system, but continues to the end of the system's life cycle. STEP results in a thoroughly tested system with performance and suitability risks identified. A by-product is a set of models and simulations with a known degree of credibility and the potential for reuse in other efforts. New test data can be applied to the models to incorporate any system enhancements and further validate the models.

14.5 SUPPORT TO TEST EXECUTION

Simulations can be useful in test execution and dynamic planning. With funds and other restrictions limiting the number of times that a test may be repeated, and each test conducted over several days, it is mandatory that the test director exercise close control over the conduct of the test to ensure that the specific types and quantities of data needed to meet the test objectives are being gathered and to ensure adequate safety. The test director must be able to make minor modifications to the test plan and scenario to force achievement of these goals. This calls for a dynamic (quick-look) analysis capability and a dynamic planning capability. Simulations may contribute to this capability. For example, using the same simulation(s) as used in pre-test planning, the tester could input data gathered during the first day of the exercise to determine the adequacy of the data to fulfill the test objectives. Using these data, the entire test could be simulated. Projected inadequacies could be isolated, and the test plans could be modified to minimize the deficiencies.

Simulations may also be used to support test control and to ensure safety. For example, during missile test firings at White Sands Missile Range (WSMR), New Mexico, aerodynamic simulations of the proposed test were run on a computer during actual firings so that real-time missile position data could be compared continuously to the simulated missile position data. If any significant variations occurred and if the range safety officer was too slow (both types of position data were displayed on plotting boards), the computer issued a destruct command.
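A minimal sketch of that kind of real-time comparison logic follows; the threshold, update cadence, and trajectory values are invented for illustration and bear no relation to actual range safety criteria.

```python
def deviation(actual, predicted):
    """Straight-line distance between measured and simulated positions (same units)."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) ** 0.5

def monitor(telemetry, simulated_track, limit=500.0):
    """Yield status for each update; stop with a DESTRUCT advisory when the live
    track departs the simulated track by more than the allowed limit."""
    for step, (actual, predicted) in enumerate(zip(telemetry, simulated_track)):
        miss = deviation(actual, predicted)
        if miss > limit:
            yield step, miss, "DESTRUCT"
            return
        yield step, miss, "NOMINAL"

# Illustrative tracks only: the live track drifts away from the simulated one.
sim  = [(0, 0, 0), (100, 0, 200), (200, 0, 400), (300, 0, 600)]
live = [(0, 0, 0), (110, 5, 195), (260, 40, 380), (900, 300, 450)]

for step, miss, status in monitor(live, sim):
    print(f"t={step}  miss distance={miss:7.1f}  status={status}")
```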

Simulations can be used to augment tests by simulating non-testable events and scenarios. Although operational testing should be accomplished in as realistic an operational environment as possible, pragmatically some environments are impossible to simulate for safety or other reasons. Some of these include the environment of a nuclear battlefield, to include the effects of nuclear bursts on friendly and enemy elements. Others include two-sided live firings and adequate representation of other forces to ascertain compatibility and interoperability data. Instrumentation, data collection, and data reduction of large combined armed forces (e.g., brigade, division, and larger-sized forces) become extremely difficult and costly. Simulations are not restricted by safety factors and can realistically replicate many environments that are otherwise unachievable in an Operational Test and Evaluation (OT&E): nuclear effects, large combined forces, Electronic Countermeasures (ECM), Electronic Counter-Countermeasures (ECCM), and many engagements. These effects can be simulated repeatedly with no environmental impacts, costing only for the simulation operations.

Usually, insufficient units are available to simulate the organizational relationships and interaction of the equipment with its operational environment, particularly during the early OT&E conducted using prototype or pilot production-type equipment. Simulations are not constrained by these limitations. Data obtained from a limited test can be plugged into a simulation that is capable of handling many of the types of equipment being tested. It can interface them with other elements of the blue forces and operate them against large elements of the red forces to obtain interactions.

End-item simulators can be used to evaluate design characteristics of equipment and can be used to augment the results obtained using prototype or pilot production-type equipment that is representative of the final item. The simulator may be used to expand test data to obtain the required iterations or to indicate that the human interface with the prototype equipment will not satisfy the design requirements.

It is often necessary to use substitute or surrogate equipment in testing; e.g., American equipment is used to represent threat-force equipment. In some cases the substitute equipment may have greater capabilities than the real equipment; in other cases it may have less. Simulations are capable of representing the real characteristics of equipment and, therefore, can be used as a means of modifying raw data collected during the test to reflect real characteristics.

As an example, if the substitute equipment is an AAA gun with a tracking rate of 30 degrees per second and the equipment for which it is substituted has a tracking rate of 45 degrees per second, the computer simulation could be used to augment the collected, measured data by determining how many rounds could have been fired against each target, or whether targets that were missed because the tracking rate was too slow could have been engaged by the actual equipment. Consideration of other differing factors simultaneously could have a positive or negative synergistic effect on test results, and a credible simulation designed with the flexibility to replicate operational variations could allow such effects to be evaluated more quickly.
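A toy calculation along these lines is sketched below; the tracking rates come from the example above, while the target crossing rates and the simple engagement rule are invented purely for illustration.

```python
SURROGATE_RATE = 30.0   # degrees per second (substitute AAA gun, from the example)
ACTUAL_RATE = 45.0      # degrees per second (threat system being represented)

# Hypothetical angular crossing rates observed for targets the surrogate missed.
missed_target_crossing_rates = [33.0, 36.0, 41.0, 47.0, 52.0]

# Simple rule assumed for illustration: a gun can engage a target only if its
# tracking rate meets or exceeds the target's crossing rate.
recoverable = [r for r in missed_target_crossing_rates
               if SURROGATE_RATE < r <= ACTUAL_RATE]

print(f"Targets missed by the surrogate: {len(missed_target_crossing_rates)}")
print(f"Missed targets the actual system could have engaged: {len(recoverable)}")
```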

14.6 SUPPORT TO ANALYSIS AND TEST REPORTING

M&S may be used in post-test analysis to extend and generalize results and to extrapolate to other conditions. The difficulty of instrumenting and controlling large exercises, together with the costs of collecting and reducing the data and of other resources, limits the size of T&E to some degree. This makes the process of determining the suitability of equipment, to include compatibility, interoperability, organization, etc., a difficult one. To a large degree, the limited interactions, interrelationships, and compatibility of large forces may be supplemented by using actual data collected during the test and applying them in the simulation.

Simulations can be used to extend test results, save considerable energy (fuel and manpower), and save money by reducing the need to repeat data points to improve the statistical sample or to determine overlooked or directly unmeasured parameters. Sensitivity analyses can be run using simulations to evaluate the robustness of the design.

In analyzing the test results, data can be compared to the results predicted by the simulations used early in the planning process. Thus, the simulation is validated by the actual live test results, but the test results are also validated by the simulation.
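A minimal sketch of such a post-test comparison follows; the predicted and measured values and the agreement criterion are placeholders for illustration only.

```python
# Predicted values from the pre-test simulation vs. measured values from the
# live test, for a handful of notional test points (units are arbitrary).
predicted = [12.1, 15.4, 18.0, 21.3, 24.9]
measured  = [11.8, 15.9, 17.2, 22.0, 26.1]

TOLERANCE = 0.10  # accept a point if prediction is within 10% of measurement (assumed)

errors = [abs(p - m) / m for p, m in zip(predicted, measured)]
within = sum(e <= TOLERANCE for e in errors)

print(f"Mean relative error: {sum(errors) / len(errors):.1%}")
print(f"Points within tolerance: {within} of {len(errors)}")
# Good agreement supports further validation of the model; large disagreements
# flag either model deficiencies or anomalies in the test data.
```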

14.7 SIMULATION INTEGRATION

Simulations have become so capable and widespread in their application, and the ability to network dissimilar and/or geographically separate simulators in real time has expanded what can be simulated so much, that whole new applications are being discovered. Simulations are no longer stove-piped tools in distinct areas, but are increasingly crossing disciplines and different uses. The F-35 Joint Strike Fighter (JSF) program uses nearly 200 different models and simulations throughout its many development and testing activities, in both government centers and contractor facilities. Increasingly, operational issues are being determined through operational analysis simulations, which will subsequently impact OT much later in a program. Elements of the T&E community should be involved increasingly earlier in every program to ensure such results are compatible with the OT objectives and requirements. For example, the F-35 program networked eight different war game simulations together to develop the initial operational requirements that subsequently became documented in the Operational Requirements Document (ORD) for the many different required Service applications, i.e., Navy, Air Force, Marine Corps, and British Royal Navy. Only through this extensive "Virtual Strike Warfare Environment" was the broad and concurrent range of operational requirements able to be evaluated and established. New capabilities were achieved, but the extensive multi-simulation network demanded new techniques for establishing the technical confidence in the results.

14.8 SIMULATION PLANNING

With M&S becoming increasingly complex, expensive, and extensive, testers must be thorough in planning both their use and the technical confidence to be placed in them. Testers, including the Operational Test Agencies (OTAs), are expected to be involved increasingly earlier if the subsequent T&E results are to be accepted. Simulation Support Plans (SSPs) are program documents that span the many simulations, their purpose, and their expected credibility. The Army, Air Force, and Marine Corps have policies requiring the early establishment of an SSP process and stand-alone SSP documents. Usually this starts with a program office-level simulation support group of Service M&S experts who advise the program on M&S opportunities, establish the Verification, Validation, and Accreditation (VV&A) process, and document these procedures in the SSP, which extends across the life cycle of system development, testing, and employment. It is vital that the SSP be fully coordinated with the Test and Evaluation Master Plan (TEMP). Without early planning of what each model or simulation is for, what it will cost, and how it will affect system design as well as testing, earlier programs have ended up with volumes of data collected for different purposes that could not be analyzed or were subsequently found not to be credible, or that delayed testing or program development.

14.9 SUMMARY

M&S in T&E can be used for concept evaluation, extrapolation, isolation of design effects, efficiency, representation of complex environments, and overcoming inherent limitations in actual testing. The use of M&S can validate test results, increase confidence levels, reduce test costs, and provide opportunities to shorten the overall acquisition cycle by providing more data earlier for the decision maker. It takes appropriate time and funding to bring models and simulations along to the point that they are useful during an acquisition.


ENDNOTES

1. DoDI 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

2. Watt, Charles K., The Role of Test Beds in C3I, Armed Forces Communications and Electronics Association (AFCEA) 35th Annual Convention, June 15-17, 1982.

3. DoD Directive (DoDD) 5000.59, DoD Modeling and Simulation Management, January 20, 1998, and DoDI 5000.61, DoD Modeling and Simulation (M&S) Verification, Validation and Accreditation (VV&A), May 13, 2003.


15

TEST RESOURCES

15.1 INTRODUCTION

This chapter describes the various types of resources available for testing, explains test resource planning in the Services, and discusses the ways in which test resources are funded.

According to the Defense Acquisition Guidebook, the term "test resources" is a collective term that encompasses elements necessary to plan, conduct, collect, and analyze data from a test event or program.1 These elements include: funding (to develop new resources or use existing ones), manpower for test conduct and support, test articles, models, simulations, threat simulators, surrogates, replicas, test-beds, special instrumentation, test sites, targets, tracking and data acquisition instrumentation, equipment (for data reduction, communications, meteorology, utilities, photography, calibration, security, recovery, maintenance, and repair), frequency management and control, and base/facility support services. "Testing planning and conduct shall take full advantage of existing investment in DoD [Department of Defense] ranges, facilities, and other resources, wherever practical, unless otherwise justified in the Test and Evaluation Master Plan [TEMP]."

Key DoD test resources are in great demand by competing acquisition programs. Often special, unique, or one-of-a-kind test resources must be developed specifically for the test program. It is imperative that the requirements for these test resources be identified early in the acquisition process so adequate funding can be allotted for their development and they will be available when the test is scheduled.

15.2 OBTAINING TEST RESOURCES

15.2.1 Identify Test Resources and Instrumentation

As early as possible, but not later than program start, the test facilities and instrumentation requirements to conduct program Test and Evaluation (T&E) should be identified and a tentative schedule of test activities prepared. This information is recorded in the TEMP and Service test resource documentation.

15.2.2 Require Multi-Service OT&E

Multi-Service Operational Test and Evaluation (MOT&E) should be considered for weapon systems requiring new operational concepts involving other Services. If multi-Service testing is used, an analysis of the impact of demonstration on time and resources needed to execute the multi-Service tests should be conducted before the low rate production decision.

15.2.3 Military Construction Program Facilities

Some programs cannot be tested without Military Construction Program (MCP) facilities. Constructing these facilities requires long lead times; therefore, early planning must be done to ensure that the facilities will be ready when required.

15.2.4 Test Sample Size

The test-sample size is usually based on one or more of the following (a confidence-based calculation is sketched after the list):


• Analysis of test objectives;

• Statistical significance of test results at some specified confidence level;

• Availability of test vehicles, items, etc.;

• Support resources or facilities available;

• Time available for the test program.
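As one common illustration of the statistical-significance criterion (a planning rule of thumb under a zero-failure binomial model, not a Service-mandated method), the number of trials n needed to demonstrate reliability R at confidence C with no failures is n = ln(1 - C) / ln(R):

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Trials required, with zero allowed failures, to demonstrate the given
    reliability at the given statistical confidence (binomial model)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Example: demonstrating 0.90 reliability at 80% confidence with no failures.
print(zero_failure_sample_size(0.90, 0.80))  # 16 trials
print(zero_failure_sample_size(0.95, 0.90))  # 45 trials
```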

15.2.5 Test Termination

Testers should not hesitate to terminate a test before its completion if it becomes clear that the main objective of the test has already been achieved, is unachievable (due to hardware failure, unavailability of resources, etc.), or if additional samples will not change the outcome and conclusions of the test.

15.2.6 Budget for Test

The Acquisition Strategy, TEMP, and budgeting documents should be reviewed regularly to ensure that there are adequate identified testing funds relative to development and fabrication funds.

The Acquisition Strategy, TEMP, and budgeting documents need careful scrutiny to ensure that there are adequate contingency funds to cover correction of difficulties at a level that matches industry/government experience on the contract. (Testing to correct deficiencies found during testing, without sufficient funding for proper correction, results in Band-Aid® approaches, which require corrections at a later and more expensive time.)

15.2.7 Test Articles

A summary of important test planning items that were identified by the Defense Science Board (DSB) is provided below:

• Ensure that the whole system, including the system user personnel, is tested. Realistically test the complete system, including hardware, software, people, and all interfaces. Get the user involved from the start and understand user limitations;

• Ascertain that sufficient time and test articles are planned. When the technology is stressed, the higher risks require more test articles and time;

• In general, parts, subsystems, and systems should be proven in that order before incorporating them into the next higher assembly for more complete tests. The instrumentation should be planned to permit diagnosis of trouble;

• Major tests should never be repeated without an analysis of failure and corrective action. Allow for delays of this nature.

15.2.8 Major Range and Test Facility Base (MRTFB)

The National Defense Authorization Act (NDAA) of Fiscal Year (FY) 2003 directed the DoD to establish a Test Resource Management Center (DTRMC), with the director (a three-star or senior civilian) reporting directly to the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)). The DTRMC provides oversight for T&E strategic planning and budgets for the Department's T&E activities, including the MRTFB, the Central Test and Evaluation Investment Program (CTEIP), and the T&E Science and Technology (S&T) Program.

All Services operate ranges and test facilities for test, evaluation, and training purposes. These activities constitute the DoD Major Range and Test Facility Base.2 The MRTFB is described as "a national asset which shall be sized, operated, and maintained primarily for DoD T&E support missions, but also is available to all users having a valid requirement for its capabilities. The MRTFB consists of a broad base of T&E activities managed and operated under uniform guidelines to provide T&E support to DoD Components responsible for developing or operating materiel and weapon systems."3 The MRTFB activities and their locations are shown in Figure 15-1. Summaries of the capabilities of each of these activities (with points of contact listed for further information) may be found in Major Range and Test Facility Base Summary of Capabilities.4

Figure 15-1. DoD Major Range and Test Facility Base (Source: DoD 3200.11-D). Activities shown include: Utah Test and Training Range; Dugway Proving Ground, Utah; Aberdeen Proving Ground, Maryland; Naval Air Warfare Center Aircraft Division, Patuxent River, Maryland; Arnold Engineering Development Center, Tennessee; 45th Space Wing, Florida; Atlantic Undersea Test and Evaluation Center; Air Force Development Test Center, Florida; White Sands Missile Range, New Mexico; Joint Interoperability Test Command, Arizona; Electronic Proving Ground, Fort Huachuca, Arizona; Yuma Proving Ground, Arizona; U.S. Army Kwajalein Atoll; Air Force Flight Test Center, California; Naval Air Warfare Center Weapons Division, Point Mugu, California; 30th Space Wing, California; Air Force Nevada Test and Training Range; Naval Air Warfare Center Weapons Division, China Lake, California; and Navy Pacific Missile Range Facility (PMRF), Hawaii.

The MRTFB facilities are available for use by all the Services, other U.S. government agencies and, in certain cases, allied foreign governments and contractor organizations. Scheduling is based on a priority system, and costs for usage are billed uniformly.5 The Director, Test Resource Management Center, sets policy for the composition, use, and test program assignments of the MRTFB. In turn, the individual Services must fund, manage, and operate their activities. They are reimbursed for direct costs by each user of the activity. The DTRMC sponsors a Joint Test Assets Database, which lists MRTFB and Operational Test Agency (OTA) test facilities, test area and range data, instrumentation, and test systems.

The DoD Components wishing to use an MRTFB activity must provide timely and complete notification of their requirements, such as special instrumentation or ground-support equipment requirements, to the particular activity using the documentation formats prescribed by the Universal Documentation System Handbook.6 The requirements must be stated in the TEMP, discussed below. Personnel at the MRTFB activity will coordinate with and assist prospective users with their T&E planning, to include conducting trade-off analyses and test scenario optimization based on test objectives and test support capabilities.

15.2.9 Central Test and Evaluation Investment Program (CTEIP)

In 1994 the Principal Deputy Under Secretary of Defense for Acquisition and Technology (PDUSD(A&T)) chartered the Director, Test, Systems Engineering, and Evaluation (DTSE&E) to establish and chair a steering group that would oversee the acquisition and integration of all training and associated test range instrumentation and develop related policy. As a result of the reorganization of the Office of the Secretary of Defense (OSD) T&E functions, the Director, Operational Test and Evaluation (DOT&E), and subsequently the DTRMC, manages the implementation of the Joint Training and Test Range Roadmap (JTTR) and executes the CTEIP. The CTEIP provides OSD funding and a mechanism for the development and acquisition of new test capabilities to satisfy multi-Service testing requirements.

15.2.10 Service Test Facilities

There are other test resources available besides the MRTFB. Frequently, National Guard or Reserve units, commercial or international test facilities, and war reserve assets are available to support DoD T&E. Testers can determine available resources by contacting their Service headquarters staff element. Within the Army, documents to consult include: the Army Test and Evaluation Command (ATEC) database on existing Army major test facilities, major instrumentation, and test equipment; the Project Manager for Instrumentation, Targets, and Threat Simulators (PM ITTS) Targets Information Manual, a descriptive catalogue of Army targets and foreign ground assets available (or in development) for support of T&E and training; and the PM ITTS Threat Inventory Database on available assets for both hardware simulators and software simulation systems. Information on specific Navy test resources is found in user manuals published by each range and in the Commander Operational Test and Evaluation Force (COMOPTEVFOR) catalog of available support.

15.3 TEST RESOURCE PLANNING

The development of special test resources to support a weapon system test can be costly and time-consuming. This, coupled with the competition for existing test resources and facilities, requires that early planning be accomplished to determine all test resource requirements for weapon system T&E. The tester must use government facilities whenever possible instead of funding construction of contractor test capabilities.

Problems associated with range and facility planning include the fact that major systems (e.g., B-1B, M-1) tend to get top priority; range schedules are often in conflict due to system problems, which cause schedule delays during testing; and there is often a shortage of funds to complete testing.

15.3.1 TEMP Resource Requirements

The Program Manager (PM) must state all key test resource requirements in the TEMP and must include items such as unique instrumentation, threat simulators, surrogates, targets, and test articles. Included in the TEMP are a critical analysis of anticipated resource shortfalls, their effect on system T&E, and plans to correct resource deficiencies. As the first TEMP must be prepared for program initiation, initial test resource planning must be accomplished very early. Refinements and reassessments of test resource requirements are included in each TEMP update. The guidance for the content of the test resource summary (Part V) of the TEMP is in Appendix 2, Test and Evaluation Master Plan, of the Defense Acquisition Guidebook (see Table 15-1). Once identified, the PM must then work within the Service headquarters and range management structure to assure the assets are available when needed.

15.3.2 Service Test Resource Planning

More detailed listings of required test resources are generated in conjunction with the detailed test plans written by the materiel developer and operational tester. These test plans describe test objectives, Measures of Effectiveness (MOEs), test scenarios, and specific test resource requirements.


Table 15-1. TEMP Test Resource Summary Section

PART V – TEST AND EVALUATION RESOURCE SUMMARY

PROVIDE A SUMMARY (PREFERABLY IN A TABLE OR MATRIX FORMAT) OF ALL KEY TEST AND EVALUATION RESOURCES, BOTH GOVERNMENT AND CONTRACTOR, THAT WILL BE USED DURING THE COURSE OF THE ACQUISITION PROGRAM.

THE TEMP SHOULD PROJECT THE KEY RESOURCES NECESSARY TO ACCOMPLISH DEVELOPMENTAL AND VALIDATION TESTING AND OPERATIONAL TEST AND EVALUATION. THE TEMP SHOULD ESTIMATE, TO THE DEGREE KNOWN AT MILESTONE B, THE KEY RESOURCES NECESSARY TO ACCOMPLISH DEVELOPMENTAL TEST AND EVALUATION, LIVE FIRE TEST AND EVALUATION, AND ALL OPERATIONAL TEST AND EVALUATION. THESE SHOULD INCLUDE ELEMENTS OF THE NATIONAL TEST FACILITIES BASE (WHICH INCORPORATES THE MAJOR RANGE AND TEST FACILITY BASE (MRTFB), CAPABILITIES DESIGNATED BY INDUSTRY AND ACADEMIA, AND MRTFB TEST EQUIPMENT AND FACILITIES), UNIQUE INSTRUMENTATION, THREAT SIMULATORS, AND TARGETS. AS SYSTEM ACQUISITION PROGRESSES, THE PRELIMINARY TEST RESOURCE REQUIREMENTS SHALL BE REASSESSED AND REFINED, AND SUBSEQUENT TEMP UPDATES SHALL REFLECT ANY CHANGED SYSTEM CONCEPTS, RESOURCE REQUIREMENTS, OR UPDATED THREAT ASSESSMENT. ANY RESOURCE SHORTFALLS THAT INTRODUCE SIGNIFICANT TEST LIMITATIONS SHOULD BE DISCUSSED, WITH PLANNED CORRECTIVE ACTION OUTLINED. SPECIFICALLY, IDENTIFY THE FOLLOWING TEST RESOURCES:

• TEST ARTICLES

• TEST SITES AND INSTRUMENTATION

• TEST SUPPORT EQUIPMENT

• THREAT REPRESENTATION

• TEST TARGETS AND EXPENDABLES

• OPERATIONAL FORCE TEST SUPPORT

• SIMULATORS, MODELS, AND TEST-BEDS

• SPECIAL REQUIREMENTS

• TEST AND EVALUATION FUNDING REQUIREMENTS

• MANPOWER/PERSONNEL TRAINING

Source: Defense Acquisition Guidebook, September 2004.

15.3.2.1 Army Test Resource Planning

In the Army, the independent evaluator, with the assistance of the developmental and operational testers, prepares input to the TEMP and develops the System Evaluation Plan (SEP), the primary planning documents for integrated T&E of the system. These documents should be prepared early in the acquisition cycle (at the beginning of system acquisition activities). They describe the entire T&E strategy, including critical issues, test methodology, MOEs, and all significant test resources. The TEMP and SEP provide the primary input to the Outline Test Plan (OTP), which contains a detailed description of each identified required test resource, where and when it is to be provided, and the providing organization.

The tester must coordinate the OTP with all major commands or agencies expected to provide test resources. Then, the OTP is submitted to ATEC for review by the Test Schedule and Review Committee (TSARC) and for incorporation into the Army's Five-Year Test Program (FYTP). The initial OTP for each test should be submitted to the TSARC as soon as testing is identified in the TEMP. Revised OTPs are submitted as more information becomes available or requirements change, but a final comprehensive version of the OTP should be submitted at least 18 months before the resources are required.

The TSARC is responsible for providing high-level, centralized management of T&E resource planning. The TSARC is chaired by the Commanding General, ATEC, and consists of general officer or equivalent representatives from the Army staff and major commands. The TSARC meets semiannually to review all OTPs, resolve conflicts, and coordinate all identified test resource requirements for inclusion in the FYTP. The FYTP is a formal resource tasking document for current and near-term tests and a planning document for tests scheduled for the out-years. All OTPs are reviewed during the semiannual reviews to ensure that any refinements or revisions are approved by the TSARC and reflected in the FYTP.

The TSARC-approved OTP is a tasking document by which the tester requests Army test resources. The TSARC coordinates resource requests, sets priorities, resolves conflicts, and schedules resources. The resultant FYTP, when approved by the Headquarters, Department of the Army (HQDA) Deputy Chief of Staff, G-3 (DCS G-3), is a formal tasking document that reflects the agreements made by the resource providers (Army Materiel Command (AMC), Training and Doctrine Command (TRADOC), Forces Command (FORSCOM), etc.) to make the required test resources available to the designated tests. If test resources from another Service, a non-DoD governmental agency (such as the Department of Energy (DOE) or the National Aeronautics and Space Administration (NASA)), or a contractor are required, the request is coordinated by ATEC. For example, the request for a range must be made at least 2 years in advance to ensure availability. However, due to the long lead time required to schedule these non-Army resources, their availability cannot be guaranteed if testing is delayed or retesting is required. The use of resources outside the United States, such as in Canada, Germany, or other North Atlantic Treaty Organization (NATO) countries, is also handled by ATEC.

15.3.2.2 Navy Test Resource Planning

In the Navy, the developing agency and the operational tester are responsible for identifying the specific test resources required in testing the weapon system. In developing requirements for test resources, the PM and Operational Test Director (OTD) refer to documents such as the Mission Need Statement (MNS), Acquisition Strategy, Navy Decision Coordinating Paper (NDCP), Operational Requirements Document (ORD), threat assessments, Secretary of the Navy Instruction (SECNAVINST) 5000.2B, and the Operational Test Director's Guide (Commander, Operational Test and Evaluation Force (COMOPTEVFOR) Instruction 3960.1D). Upon Chief of Naval Operations (CNO) approval, the TEMP becomes the controlling management document for all T&E of the weapon system. It constitutes direction by the CNO to conduct the T&E program defined in the TEMP, including the commitment of Research, Development, Test and Evaluation (RDT&E) financial support and of fleet units and schedules. It is prepared by the PM, who is provided OT&E input by the COMOPTEVFOR OTD. The TEMP defines all T&E (DT&E, OT&E, and Production Acceptance Test and Evaluation (PAT&E)) to be conducted for the system and describes, in as much detail as possible, the test resources required.

The Navy uses its operational naval forces to provide realistic T&E of new weapon systems. Each year, the CNO (N-091) compiles all Fleet support requirements for RDT&E program support from the TEMPs and publishes the CNO Long-Range RDT&E Support Requirements document for the budget and out-years. In addition, a quarterly forecast of support requirements is published approximately 5 months before the Fleet Employment Scheduling Conference for the quarter in which the support is required. These documents summarize OT&E requirements for Fleet services and are used by the Fleet for scheduling services and out-year budget projections.

Requests for use of range assets are usually initiated informally with a phone call from the PM and/or OTD to the range manager and followed by formal documentation. Requests for Fleet support are usually more formal. The COMOPTEVFOR, in coordination with the PM, forwards the TEMP and a Fleet RDT&E Support Request to the CNO. Upon approval of the request, the CNO tasks the Fleet Commander by letter or message to coordinate with the Operational Test and Evaluation Force (OPTEVFOR) to provide the requested support.

Use of most Navy ranges must be scheduled at least a year in advance. Each range consolidates and prioritizes user requests, negotiates conflicts, and attempts to schedule range services to satisfy all requests. If the desired range services cannot be made available when required, the test must wait or the CNO resolves the conflict. Because ranges are fully scheduled in advance, it is difficult to accommodate a test that is delayed or requires additional range time beyond that originally scheduled. Again, the CNO can examine the effects of delays or retest requirements and issue revised priorities, as required.

Requests for use of non-Navy OT&E resources are initiated by COMOPTEVFOR. The OPTEVFOR is authorized direct liaison with other Service-independent OTAs to obtain OTA-controlled resources. Requests for other government-owned resources are forwarded to the CNO (N-091) for formal submission to the Service Chief (for Service assets) or to the appropriate government agency (e.g., DOE or NASA). Use of contractor resources is usually handled by the PM, although contractor assets are seldom required in OT&E, since the Fleet is used to provide an operational environment. Requests for use of foreign ranges are handled by the N-091 Assistant for International Research and Development (IR&D).

15.3.2.3 Air Force Test Resource Planning

The test resources required for OT&E of an Air Force weapon system are identified in detail in the Test Resources Plan (TRP), which is prepared by the responsible Air Force OT&E organization. In general, the Air Force Operational Test and Evaluation Center (AFOTEC) is the test organization for OT&E programs; it obtains support from a Service major command test agency for non-major programs, with AFOTEC directing and providing assistance, as required.

During the Advanced Planning Phase of a weapon system acquisition (5 to 6 years before OT&E), AFOTEC prepares the OT&E section of the first full TRP, coordinates the TRP with all supporting organizations, and assists the Resource Manager (RM) in programming required resources. The resource requirements listed in the Resource Information Network TRP are developed by the test manager, RM, and test support group, using sources such as the ORD and threat assessments. The TRP should specify, in detail, all the resources necessary to successfully conduct a test once it is entered in the Test Resource Information Management System (TRIMS).

The TRP is the formal means by which test resource requirements are communicated to the Air Staff and to the appropriate commands and agencies tasked to supply the needed resources. Hence, if a required resource is not specified in the TRP, it is likely the resource will not be available for the test. The TRP is revised and updated on a continuous basis, since the test resource requirements become better defined as the OT&E plans mature. The initial TRP serves as a baseline for comparison of planned OT&E resources with actual expenditures. Comparisons of the initial TRP with subsequent updates provide an audit trail of changes in the test program and its testing requirements. The AFOTEC maintains all TRPs on TRIMS; this permits immediate response to all queries regarding test resource requirements.


The AFOTEC/RM consolidates the resource requirements from all TRPs, coordinating with participating and supporting organizations and agencies outside AFOTEC. Twice yearly, the RM office prepares a draft of the USAF Program for Operational Test (PO). The PO is a master planning and programming document for resource requirements for all HQ USAF-directed OT&E and is distributed to all concerned commands, agencies, and organizations for review and coordination. It is then submitted to the Air Staff for review.

All requests for test resources are coordinated by HQ AFOTEC as part of the TRP preparation process. When a new weapon system development is first identified, AFOTEC provides a Test Manager (TM) who begins long-term OT&E planning. The TM begins identifying needed test resources, such as instrumentation, simulators, and models, and works with the resources directorate to obtain them. If the required resource does not belong to AFOTEC, it will negotiate with the commands having the resource. In the case of models and simulators, AFOTEC surveys what is available, assesses credibility, and then coordinates with the owner or developer to use it. The Joint Technical Coordinating Group (JTCG) publishes a document on Electronic Warfare (EW) models.

Range scheduling should be done early. At least a year is required, but often a test can be accommodated with a few months' notice if there is no requirement for special equipment or modifications to be provided at the range. Some of the Air Force ranges are scheduled well in advance and cannot accommodate tests that encounter delays or retest requirements.

The RM attempts to resolve conflicts among various systems competing for scarce test resources and elevates the request to the Commander, AFOTEC, if necessary. Decisions on resource utilization and scheduling are based on the weapon system's assigned priority.

The RM and the TM also arrange for use of the resources of other Services, non-DoD government agencies, and contractors. Use of non-U.S. resources, such as a Canadian range, is coordinated by the Air Force Chief of Staff/Directorate of Test and Evaluation (AF/TE) and based on formal Memoranda of Understanding (MOU). The U.S. Air Forces in Europe/Directorate of Operations (USAFE/DOQ) handles requests for European ranges. Use of a contractor-owned resource, such as a model, is often obtained through the System Program Office (SPO) or a general support contract.

15.4 TEST RESOURCE FUNDING

The Future Years Defense Program (FYDP), incorporating a biennial budgeting process, is the basic DoD programming document that records, summarizes, and displays Secretary of Defense (SECDEF) decisions. In the FYDP, costs are divided into three categories for each acquisition Program Element (PE): R&D costs, investment costs, and operating costs. The Congress appropriates to the Office of Management and Budget (OMB), and OMB apportions funding through the SECDEF to the Services and to other defense agencies. The Services and defense agencies then allocate funds to others (claimants, sub-claimants, administering offices, commanding generals, etc.). (See DoD 7000.14-R, Financial Management Regulation, Vol. 2A.)

The Planning, Programming, Budgeting and Execution (PPBE) system is a DoD internal system used to develop input to the Congress for each year's budget while developing future-year budgets. The PPBE is calendar oriented. There are concurrent 2-year PPBE cycles ongoing at one time. These cycles are: planning, programming, budgeting, and execution. At any one time there are three budgets being worked by the Services. The current 2-year budget is being executed. The next 6 years of defense planning are being programmed, and long-range program plans and planning guidance are being reviewed for updating.


There are various types of funding in the PPBE: R&D funding for maintaining the technology base; exploratory development funding for conducting the concept assessments; advanced development funding for conducting both the concept development and the early prototyping; engineering development funding for demonstrating the Engineering Development Model (EDM); and procurement funding for conducting LRIP, Full Rate Production (FRP), system deployment, and operational support. RDT&E management and support funding is used throughout the development cycle until the system is operationally deployed, when Operations and Maintenance (O&M) funding is used. The RDT&E appropriation funds the costs associated with R&D intended to improve performance, including test items, DT&E, and test support of OT&E of the system or equipment test items.

Funding that is planned, programmed, and budgeted through the PPBE cycle is not always the same funding amount that the Congress appropriates or the PM receives. If the required funding for a test program is not authorized by the Congress, the PM has four ways to react. The PM can submit a supplemental budget (for unfunded portions of the program); request deficiency funding (for unforeseen program problems); use transfer authority (from other programs within the Service); or try to reprogram the needed funds (to restructure the program).

Generally, testing that is accomplished for a specific system before the production decision is funded from RDT&E appropriations, and testing that is accomplished after the production decision is funded from other procurement or O&M appropriations. Testing of Product Improvements (PIs), block upgrades, and major modifications is funded from the same appropriations as the program development. Follow-on Test and Evaluations (FOT&Es) are normally funded from O&M funds.

Funding associated with T&E (including instrumentation, targets, and simulations) is identified in the system acquisition cost estimates, Service acquisition plans, and the TEMP. General funding information for development and operational tests follows:

Development Test (DT) Funding. Funds required for the conduct of engineering and development tests are programmed and budgeted by the materiel developer, based upon the requirements of the TEMP. These costs may include, but are not limited to, procuring test samples/prototypes; support equipment; transportation costs; technical data; training of test personnel; repair parts; and test-specific instrumentation, equipment, and facilities. The DT&E funds are expended for contractor and government developmental test activities.

The Service PM may be required to pay for theuse of test resources, such as the MRTFB, andfor the development of specialized resourcesneeded specifically for testing the weapon systembeing developed.

Operational Test (OT) Funding. Funds requiredto conduct OT are usually programmed and bud-geted by the Service PM organization. The fundsare programmed in the Service’s long-range testprogram, and the funds requirements are obtainedfrom the test resourcing documentation andTEMP. The Air Force funds OT&E separate fromthe program office through a dedicated PE forAFOTEC conducted operational testing.

15.4.1 Army Funding

Test resources are developed and funded under various Army appropriations. The AMC and its commodity commands provide test items, spare parts, support items (such as diagnostic equipment), and ammunition. Soldiers, ranges, fuel, test support personnel, and maneuver areas are provided by the TRADOC or FORSCOM. Weapon system PMs use RDT&E funds to reimburse these supporting commands for costs directly related to their tests. Weapon system materiel developers are also responsible for funding the development of new test resources specifically needed to test the weapon system. Examples of such special-purpose resources include models, simulations, special instrumentation and test equipment, range modifications, EW simulators and, sometimes, threat simulators. Although the Army has a separate budget and development plan for threat simulators (the PM ITTS threat simulators program), many weapon system developers still have to fund the cost of new threat systems that are specifically needed to test their weapon system. Funding for Army operational testing is through the PM's Program Element (PE) and is given to ATEC for direct control of funds for each program. Funding requirements are developed in consonance with the Outline Test Plan (OTP).

15.4.2 Navy Funding

In the Navy, the weapon system PM is responsible for funding the development of all required test-specific resources from the program's RDT&E funds. These resources include test articles, expendables, one-of-a-kind targets, data collection/reduction, and instrumentation. The development of generic test resources that can be used in OT&E of multiple weapon systems, such as targets, threat simulators, and range capabilities, is funded from the Office of the Chief of Naval Operations (OPNAV) generic accounts (such as target development) and not from weapon systems RDT&E. The PM's RDT&E funds pay for all DT and OT through OPEVAL. The PM pays for all post-production OT with program funds.

15.4.3 Air Force Funding

In the Air Force, direct-cost funding requires that test-peculiar (direct) costs associated with a particular test program be reimbursed by the SPO to the designated test agency. The RDT&E appropriation funds the cost associated with R&D, including test items, DT&E, and Air Force Materiel Command (AFMC) support of OT&E of the system or equipment and the test items. Costs associated with Initial Operational Test and Evaluation (IOT&E) are RDT&E-funded, and costs of Qualification Operational Test and Evaluation (QOT&E) are O&M-funded. The AFOTEC is funded through its own PE and has direct control of OT&E funds for all programs. The IOT&E manager prepares a TRP that summarizes the resource requirements for IOT&E and related test support. All pre-test IOT&E planning is budgeted through and paid out of the O&M appropriation. The FOT&E costs are paid by AFOTEC and/or the MAJCOM operating the system and funded by the O&M appropriation.

15.5 SUMMARY

Test resources have many conflicting demands, and their use must be scheduled well in advance of a test. Resources specific to a particular test must often be developed and funded from the PM's own RDT&E budget. Thus, the PM and testers must ensure that test resource requirements are identified early in the acquisition cycle, that they are documented in the initial TEMP, and that modifications and refinements are reported in the TEMP updates.

Funds for testing are provided by congressional appropriation to the OMB, which apportions the funds to the Services through the SECDEF. The PPBE is the DoD process used to formulate budget requests to the Congress. Requests by PMs for test resources are usually outlined in the TEMP. Generally, system development is funded from RDT&E funds until the system is operationally deployed and maintained. O&M funds are used for FOT&E and system maintenance. The weapon system materiel developer is also responsible for funding the development of new test resources specifically needed to test the weapon system. The Air Force OTA develops and directly controls OT&E funds.




16
TEST AND EVALUATION MASTER PLAN

16.1 INTRODUCTION

Guidance contained in Operation of the Defense Acquisition System and the Defense Acquisition Guidebook stipulates that a Test and Evaluation Master Plan (TEMP) format shall be used for all Acquisition Category (ACAT) I and IA or Office of the Secretary of Defense (OSD)-designated oversight acquisition programs.1 This reinforces the philosophy that good planning supports good operations. For effective engineering development and decision-making processes, an early Test and Evaluation (T&E) strategy in the Technology Development Strategy (TDS) must be evolved into an overall integrating master plan detailing the collection and evaluation of test data on required performance parameters. Less-than-ACAT I programs are encouraged to tailor their T&E strategy using the TEMP format as a guide. The TEMP relates program schedule, test management strategy and structure, and required resources to: Critical Operational Issues (COIs); Critical Technical Parameters (CTPs); Minimum Acceptable Values (MAVs) (thresholds); acquisition strategy; and milestone decision points. Feedback about the degree of system performance maturity and its operational effectiveness and suitability during each phase is essential to the successful fielding of equipment that satisfies user requirements.

16.2 TEMP DEVELOPMENT

The development of system T&E strategy begins during the Technology Development (TD) effort with the test planning in the TDS document. This evolves as the system is better defined and T&E events are consolidated in the TEMP. Effective management of the various test processes is a primary function of a Program Management Office (PMO) T&E Working-level Integrated Product Team (WIPT). The T&E strategy is highly contingent on early system concept(s) that are deemed appropriate for satisfying user requirements. As outlined in The Defense Acquisition System,2 the priority for selecting a solution is:

(1) A non-materiel solution, such as changes to Doctrine, Organization, Training, Leadership and education, Personnel, and Facilities (DOT_LPF);

(2) A Materiel (M) alternative, chosen according to the following sequence:

(a) The procurement or modification of commercially available products, services, and technologies from domestic or international systems, or the development of dual-use technologies.

(b) The additional production or modification of previously developed U.S. and/or allied military systems or equipment.

(c) A cooperative development program with one or more allied nations.

(d) A new, joint, Department of Defense (DoD) Component or government agency development program.

(e) A new DoD Component-unique development program.


The quality of the test program may directly reflect the level of effort expended in its development and execution. This varies in direct relationship to the management imposed by the Program Manager (PM) and, to some extent, by the system engineer. The PM must evaluate the utility of dedicated T&E staff versus matrix support from the development command. The levels of intensity for planning and executing T&E fluctuate with changes in phases of the acquisition process and in T&E staff support, as appropriate.

Early planning of long-range strategies can be supported with knowledgeable planning teams (T&E Integrated Product Teams (IPTs)) and reviews by panels of senior T&E management officials ("gray beards"). As the tempo of actual test activities builds from concept to prototype to Engineering Development Model (EDM) to pre-Low Rate Initial Production (LRIP), internal T&E management staff are needed to control the processes and evaluate results.

16.2.1 Program Management Office Responsibilities

The PMO is the focal point of the development, review, and approval process for the program TEMP. The DoD acquisition process requires a TEMP as one of the primary management strategy documents supporting the decision to start or terminate development efforts. This task is a "difficult do" prior to program start since some Services do not formulate or staff a PMO until formal program initiation. An additional complicating factor is the nebulous condition of other program source documents (Capability Development Document (CDD), Technical Management Plan (TMP), Acquisition Strategy, System Threat Assessment (STA), Logistics Support Plan (LSP), etc.) that are also in early stages of development/updating for the milestone review. Since the TEMP must conform to the acquisition strategy and other program management documents, it frequently lags in the development process and does not receive the attention needed from PMO or matrix support personnel. PMO emphasis on early formulation of the test planning teams (T&E WIPT) is critical to the successful development of the program TEMP. These teams should consist of the requisite players so a comprehensive and integrated strategy compatible with other engineering and decision-making processes is developed. The PMO will find that the number of parties desiring coordination on the TEMP far exceeds the "streamlined" approval process signatories; however, it must be coordinated. An early start in getting Service-level concurrence is important so the milestone decision document-submission schedule can be supported with the draft and final versions of the TEMP. Subsequent updates do not become easier, as each acquisition phase brings new planning, coordination, and testing requirements.

16.2.2 T&E Planning

Developing an overall strategy provides the framework for incorporating phase-oriented T&E activities that will facilitate the acquisition process. The T&E strategy should be consistent with the program acquisition strategy, identifying requirements for contractor and government Development Test and Evaluation (DT&E), interactions between DT&E and Operational Test and Evaluation (OT&E), and provisions for the separate Initial Operational Test and Evaluation (IOT&E).

An evolutionary acquisition strategy will generally include an incremental or spiral approach with moderate- to low-risk technologies that should reduce the intensity and duration of the T&E program. It does, however, include a requirement for post-production test activities as the system is modified to accommodate previously unknown new technologies, new threats, or other performance enhancements.

A revolutionary acquisition strategy (single step) incorporates all the latest technologies in the final production configuration and is generally a higher-risk approach. As the contractor works on maturing emerging technologies, the T&E workload increases in direct proportion to the technology risk and difficulty in fixing problems. There is a much higher potential for extended schedules with iterative test-fix-test cycles.

16.2.3 General T&E Planning Issues

The Defense Science Board (DSB) report presented guidance on T&E at two levels.3 On a general level, it discussed a number of issues that were appropriate to all weapon acquisition programs. These issues, along with a summary discussion, are given below.

16.2.3.1 Effects of Test Requirements on System Acquisition

The acquisition strategy for the system should allow sufficient time between the end of demonstration testing and procurement, as contrasted with limited production decisions, to allow flexibility for modification of plans that will be required. It should ensure that sufficient dollars are available not only to conduct T&E, but to allow for additional T&E that is always required due to failure, design changes, etc. It should be evaluated relative to constraints imposed by:

• The level of system testing at various stages of the Research, Development, Test, and Evaluation (RDT&E) cycle;

• The number of test items available and the schedule interface with other systems needed in the tests, such as aircraft, electronics, etc.;

• The support required to assist in preparing for and conducting tests and analyzing the test results;

• The need to minimize the so-called T&E "gap" caused by lack of hardware during the test phase.

16.2.3.2 Test Requirements and Restrictions

Tests should:

• Have specific objectives;

• List, in advance, actions to be taken as a consequence of the test results;

• Be instrumented to permit diagnosis of the cause of any lack of performance, including random, design-induced, wear-out, and operator error failures;

• Not be repeated if failures occur without first conducting a detailed analysis of the failure.

16.2.3.3 Trouble Indicators

Establish an early detection scheme to identify program illness.

When a program begins to have trouble, there are indicators that will show up during testing. Some of these indicators are:

• A test failure;

• Any repetitive failure;

• A revision of schedule or incremental funding that exceeds the original plan;

• Any relaxation of the basic requirements, such as lower performance.

16.2.3.4 Requirement for Test Rehearsals

Test rehearsals should be conducted for each new phase of testing.

16.2.4 Scheduling

Specific issues associated with test scheduling are listed below.


16.2.4.1 Building Block Test Scheduling

The design of a set of tests to demonstrate feasibility prior to testing the system-level EDM should be used. This will allow early testing of high-technical-risk items, and subsequent tests can be incorporated into the hardware as the system concept has been demonstrated as feasible.

16.2.4.2 Component and Subsystem Test Plans

Ensure a viable component and subsystem test plan. Studies show that almost all component failures will be the kind that cannot be easily detected or prevented in full system testing. System failure must be detected and fixed in the component/subsystem stage, as detecting and correcting failure only at the operational test level results in high cost.

16.2.4.3 Phasing of DT&E and IOT&E

Problems that become apparent in operational testing can often be evaluated faster with the instrumented DT&E hardware. The Integrated Test Plan (ITP) should provide time and money to investigate test failures and eliminate causes of failures before other, similar tests take place.

16.2.4.4 Schedule IOT&E to Include System Interfaces with Other Systems

Whenever possible, the IOT&E/FOT&E (Follow-on Operational Test and Evaluation) of a weapon system should be planned to include other systems that must have a technical interface with the new system. For example, missiles should be tested on most of the platforms for which they are programmed.

A Preplanned Product Improvements (P3I) or increment strategy is a variant of the evolutionary development process in which PMs and other acquisition managers recognize the high-risk technologies/subsystems and put them on parallel development tracks. The testing strategy should anticipate the requirements to evaluate P3I item technology maturity and then test the system during the integration of the additional capability.

Advanced Technology Demonstrations (ATD) or Advanced Concept Technology Demonstrations (ACTD) may provide early insights into available technologies for incorporation into developmental or mature, post-prototype systems. Using proven, mature technology provides a lower-risk strategy and may significantly reduce the development testing workload. To assess and manage risk, PMs and other acquisition managers should use a variety of techniques, including technology demonstrations, prototyping, and T&E. The process for verifying contract performance and item specifications, T&E of threshold performance requirements in the Operational Requirements Document (ORD), exit criteria, or the Acquisition Program Baseline (APB) performance should be addressed in the DT&E strategy. The DT&E is an iterative process starting at configuration item/software module levels and continuing throughout the component integration into subassemblies and, finally, system-level performance evaluations. OT&E is interwoven into early DT&E for maximizing the efficient use of test articles and test schedules. However, OT&E must remain a distinct thread of activity that does not lose its identity in the tapestry of test events. Planning for test resources is driven by the sequence and intensity of Development Test (DT) and Operational Test (OT) events. Resource coordination is an equally arduous task, which frequently has lead times equal to major program development activities. Included in the program T&E strategy should be an overarching evaluation plan, outlining methodologies, models, simulations, and test data required at periodic decision points.

The TEMP should: (a) address critical human issues to provide data to validate the results of human factors engineering analyses; and (b) require identification of mission critical operational and maintenance tasks.


A reliability growth (Test, Analyze, Fix, and Test (TAFT)) program should be developed to satisfy the reliability levels required at Full Rate Production (FRP). Reliability tests and demonstrations will be based on actual or simulated operational conditions.4
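The guide does not prescribe a particular growth model; the following is a minimal sketch assuming the Duane power-law model, with entirely hypothetical starting MTBF, growth rate, and FRP threshold values, simply to show how planned test hours map to a projected reliability level during a TAFT program.

```python
# Hedged sketch only: the Duane power-law growth model is one common choice;
# the guide itself does not mandate any specific model or these numbers.

def cumulative_mtbf(test_hours, mtbf_start, hours_start, growth_rate):
    """Cumulative MTBF under the Duane model: a power law of cumulative test time."""
    return mtbf_start * (test_hours / hours_start) ** growth_rate

def instantaneous_mtbf(test_hours, mtbf_start, hours_start, growth_rate):
    """Current (instantaneous) MTBF implied by the same growth curve."""
    return cumulative_mtbf(test_hours, mtbf_start, hours_start, growth_rate) / (1.0 - growth_rate)

if __name__ == "__main__":
    # Hypothetical program: 40-hour MTBF after the first 500 test hours,
    # growth rate 0.3, and a 100-hour MTBF level required at FRP.
    frp_threshold = 100.0
    for hours in (500, 2_000, 8_000, 20_000):
        mtbf = instantaneous_mtbf(hours, mtbf_start=40.0, hours_start=500.0, growth_rate=0.3)
        status = "meets" if mtbf >= frp_threshold else "below"
        print(f"{hours:>6} test hours: projected MTBF ~{mtbf:5.1f} h ({status} the FRP level)")
```

Run against a planned test schedule, a projection of this kind helps the T&E WIPT judge whether the reliability levels required at FRP are achievable within the programmed test hours.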

Maintainability will be verified with a maintainability demonstration before FRP.5

As early as practicable, developers and test agencies will assess survivability and validate critical survivability characteristics at as high a system level as possible. The TEMP will identify the means by which the survivability objective will be validated.

Field engineering test facilities and testing in the intended operational environments are required to: (1) verify electric or electronic systems' predicted performance; (2) establish confidence in electromagnetic compatibility design based on standards and specifications; and (3) validate electromagnetic compatibility analysis methodology.

The TEMP will address health hazard and safety critical issues to provide data to validate the results of system safety analyses.

The TEMP strategy should directly support the development of more detailed planning and resource documents needed to execute the actual test events and subsequent evaluations.

The TEMP shall provide a roadmap for integrated simulation, test, and evaluation plans, schedules, and resource requirements necessary to accomplish the T&E program. T&E planning should address Measures of Effectiveness/Suitability (MOEs/MOSs) with appropriate quantitative criteria, test event or scenario description, resource requirements, and test limitations. Test planning, at a minimum, must address all system components that are critical to the achievement and demonstration of contract technical performance specifications and threshold values specified in the Capability Production Document (CPD).

16.3 TEMP FORMAT

The format specified in the Defense Acquisition Guidebook, Appendix 2, is required for all acquisition category I, IA, and OSD-designated oversight programs (Table 16-1). It may be tailored as needed for lesser category acquisition programs at the discretion of the milestone decision authority. The TEMP is intended to be a summary document outlining DT&E and OT&E management responsibilities across all phases of the acquisition process. When the development is a multi-Service or joint acquisition program, one integrated TEMP is developed with Service annexes, as required. A Capstone TEMP may not be appropriate for a single major weapon platform but could be used to encompass testing of a collection of individual systems, each with its own annex (e.g., Missile Defense Agency (MDA), Family of Tactical Vehicles, Future Combat Systems (FCSs)). A program TEMP is updated at milestones, upon program baseline breach, and for other significant program changes. Updates may consist of page changes and are no longer required when a program has no further development activities.

The TEMP is a living document that must address changes to critical issues associated with an acquisition program. Major changes in program requirements, schedule, or funding usually result in a change in the test program. Thus, the TEMP must be reviewed and updated on program change, on baseline breach, and before each milestone decision to ensure that T&E requirements are current. As the primary document used in the OSD review and milestone decision process to assess the adequacy of planned T&E, the TEMP must be of sufficient scope and content to explain the entire T&E program. The key topics in the TEMP are shown in Table 16-1.


Table 16-1. Test and Evaluation Master Plan Format

PART I      SYSTEM INTRODUCTION
            MISSION DESCRIPTION
            SYSTEM DESCRIPTION
            SYSTEM THREAT ASSESSMENT
            MEASURES OF EFFECTIVENESS AND SUITABILITY
            CRITICAL TECHNICAL PARAMETERS

PART II     INTEGRATED TEST PROGRAM SUMMARY
            INTEGRATED TEST PROGRAM SCHEDULE
            MANAGEMENT

PART III    DEVELOPMENT TEST AND EVALUATION OUTLINE
            DEVELOPMENT TEST AND EVALUATION OVERVIEW
            FUTURE DEVELOPMENTAL TEST AND EVALUATION LIMITATIONS

PART IV     OPERATIONAL TEST AND EVALUATION OUTLINE
            OPERATIONAL TEST AND EVALUATION OVERVIEW
            CRITICAL OPERATIONAL ISSUES
            FUTURE OPERATIONAL TEST AND EVALUATION LIMITATIONS
            LIVE FIRE TEST AND EVALUATION

PART V      TEST AND EVALUATION RESOURCE SUMMARY
            TEST ARTICLES
            TEST SITES AND INSTRUMENTATION
            TEST SUPPORT EQUIPMENT
            THREAT REPRESENTATION
            TEST TARGETS AND EXPENDABLES
            OPERATIONAL FORCE TEST SUPPORT
            SIMULATIONS, MODELS, AND TEST BEDS
            SPECIAL REQUIREMENTS
            TEST AND EVALUATION FUNDING REQUIREMENTS
            MANPOWER/PERSONNEL TRAINING

APPENDIX A  BIBLIOGRAPHY
APPENDIX B  ACRONYMS
APPENDIX C  POINTS OF CONTACT

ATTACHMENTS (AS APPROPRIATE)

SOURCE: Defense Acquisition Guidebook, September 2004.

Each TEMP submitted to OSD should be a summary document, detailed only to the extent necessary to show the rationale for the type, amount, and schedules of the testing planned. It must relate the T&E effort clearly to technical risks, operational issues and concepts, system performance, Reliability, Availability, and Maintainability (RAM), logistics objectives and requirements, and major decision points. It should summarize the testing accomplished to date and explain the relationship of the various models and simulations, subsystem tests, integrated system development tests, and initial operational tests that, when analyzed in combination, provide confidence in the system's readiness to proceed into the next acquisition phase. The TEMP must address the T&E to be accomplished in each program phase, with the next phase addressed in the most detail. The TEMP is also used as a coordination document to outline each test and support organization's role in the T&E program and identify major test facilities and resources. The TEMP supporting the production and initial deployment decision must include the T&E planned to verify the correction of deficiencies and to complete production qualification testing and FOT&E.

The objective of the OSD TEMP review process is to ensure successful T&E programs that will support decisions to commit resources at major milestones. Some of the T&E issues considered during the TEMP review process include:

(1) Are DT&E and OT&E initiated early to assess performance, identify risks, and estimate operational potential?

(2) Are critical issues, test directives, and evaluation criteria related to mission need and operational requirements established well before testing begins?

(3) Are provisions made for collecting sufficient test data with appropriate test instrumentation to minimize subjective judgment?

(4) Is OT&E conducted by an organization independent of the developer and user?

(5) Do the test methodology and instrumentation provide a mature and flexible network of resources that stresses (as early as possible) the weapon system in a variety of realistic environments?

16.4 SUMMARY

The PMO is directly responsible for the content and quality of the test strategy and planning document. The TEMP, as an integrated summary management tool, requires an extensive commitment of manhours and PM guidance. The interactions of the various T&E players and support agencies in the T&E WIPT must be woven into the fabric of the total system acquisition strategy. Cost and schedule implications must be negotiated to ensure a viable T&E program that provides timely and accurate data to the engineering and management decision makers.


ENDNOTES

1. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003, and Defense Acquisition Guidebook, http://akss.dau.mil/DAG.

2. DoD Directive (DoDD) 5000.1, The Defense Acquisition System, May 12, 2003.

3. Defense Science Board, Report of Task Force on Test and Evaluation, April 1, 1974.

4. DoD 3235.1-H, Test and Evaluation of System Reliability, Availability and Maintainability – A Primer, March 1982.

5. Ibid.


MODULE V
SPECIALIZED TESTING

Many program managers face several T&E issues that must be resolved to get their particular weapon system tested and ultimately fielded. These issues may include modeling and simulation support, combined and concurrent testing, test resources, survivability and lethality testing, multi-Service testing, or international T&E. Each issue presents a unique set of challenges for program managers when they develop the integrated strategy for the T&E program.


17
SOFTWARE SYSTEMS TESTING

17.1 INTRODUCTION

Software development presents a major development risk for military weapons and National Security Systems (NSS). Software is found in Major Automated Information Systems (MAISs) and weapon system software. High-cost software systems, such as personnel records management systems, financial accounting systems, or logistics records, that are the end-item solution to user requirements fall in the MAIS category. Performance requirements for the MAIS typically drive the host hardware configurations and are managed by the Information Technology Acquisition Board (ITAB), chaired by the Assistant Secretary of Defense (Networks and Information Integration) (ASD(NII))/Chief Information Officer (CIO). The Director, Operational Test and Evaluation (DOT&E) is a principal member of the ITAB.

Software developments, such as avionics systems, weapons targeting and control, and navigation computers, that are a subset of the hardware solution to user requirements fall in the weapon-system software category. Performance requirements for the system hardware are flowed down to drive the functionality of the software resident in onboard computers. The effectiveness of the weapon system software is reviewed as part of the overall system review by the Defense Acquisition Board (DAB), chaired by the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)). The DOT&E is a principal member and the Deputy Director, Development, Test and Evaluation (DT&E) is an advisor to the DAB. No Milestone A, B, or Full Rate Production (FRP) decision (or their equivalent) shall be granted for a MAIS until the Department of Defense (DoD) CIO certifies that the MAIS program is being developed in accordance with the Clinger-Cohen Act provisions.1 For MDAP and MAIS programs, the DoD Component CIO's confirmation (for Major Defense Acquisition Programs (MDAPs)) and certification (for MAISs) shall be provided to both the DoD CIO and the Milestone Decision Authority (MDA).

Software development historically has escalated the cost and reduced the reliability of weapon systems. Embedded computer systems that are physically incorporated into larger weapon systems have a major data processing function. Typically, the output of the systems is information, control signals, or computer data required by the host system to complete its mission. Although hardware and software often contribute in equal measure to successful implementation of system functions, there have been relative imbalances in their treatment during system development.

Automated Information Systems (AISs), once developed, integrated, and tested in the host hardware, are essentially ready for production. Software in weapon systems, once integrated in the host hardware, continues to be tested as a component of the total system and is not ready for production until the total system has successfully demonstrated required performance. Any changes to weapon system hardware configuration may stimulate changes to the software. The development of all software systems involves a series of activities in which there are frequent opportunities for errors. Errors may occur at the inception of the process, when the requirements may be erroneously specified, or later in the development cycle, when Systems Integration (SI) is implemented. This chapter addresses the use of testing to obtain insights into the development risk of AIS and weapon system software, particularly as it pertains to the software development processes.

17.2 DEFINITIONS

The term Automated Information System (AIS) is defined as a combination of computer hardware and software, data, or telecommunications that performs functions such as collecting, processing, transmitting, and displaying information.2 Excluded are computer resources, both hardware and software, that are physically part of, dedicated to, or essential in real time to the mission performance of weapon systems.3

The term weapon system software includes Automated Data Processing Equipment (ADPE), software, or services where the function, operation, or use of the equipment, software, or services involves:

(1) Intelligence activities;

(2) Cryptologic activities related to national security;

(3) Command and control of military forces;

(4) Equipment that is an integral part of a weapon system;

(5) Critical, direct fulfillment of military or intelligence missions.

Acquisition of software for DoD is described in Military Standard (MIL-STD)-498, Software Development and Documentation, which, although rescinded, has been waived for use until commercial standards such as EIA 640 or J-Std-016 (Software Life Cycle Processes, Software Development, September 1995) become the guidance for software development.4 Guidance may also be found in the Defense Acquisition Guidebook.

17.3 PURPOSE OF SOFTWARE TEST AND EVALUATION

A major problem in software development is a lack of well-defined requirements. If requirements are not well-defined, errors can multiply throughout the development process. As illustrated in Figure 17-1, errors may occur at the inception of the process. These errors may occur during requirements definition, when objectives may be erroneously or imperfectly specified; during the later design and development stages, when these objectives are implemented; and during software maintenance and operational phases, when software changes are needed to eliminate errors or enhance performance. Estimates of increased software costs arising from incomplete testing help to illustrate the dimension of software Life-Cycle Costs (LCCs). Averaged over the operational life cycle of a computer system, development costs encompass approximately 30 percent of total system costs. The remaining 70 percent of LCCs are associated with maintenance, which includes system enhancements and error correction. Complete testing during earlier development phases may have detected these errors. The relative costs of error correction increase as a function of time from the start of the development process. Relative costs of error correction rise dramatically between requirements and design phases and more dramatically during code implementation.

Previous research in the area of software Test and Evaluation (T&E) reveals that half of all maintenance costs are incurred in the correction of previously undetected errors. Approximately one-half of the operational LCCs can be traced directly to inadequate or incomplete testing activities. In addition to cost increases, operational implications of software errors in weapon systems can result in mission critical software failures that may impact mission success and personnel safety.
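As a back-of-the-envelope illustration of the percentages quoted above (a sketch using the notional proportions from the text, not a prediction for any particular program), the cost split works out as follows:

```python
# Illustrative arithmetic only, using the proportions cited in the text above.
total_lcc = 100.0                             # treat total life-cycle cost as 100 units
development = 0.30 * total_lcc                # ~30 percent: development
maintenance = 0.70 * total_lcc                # ~70 percent: enhancements and error correction
latent_error_correction = 0.50 * maintenance  # ~half of maintenance corrects previously
                                              # undetected errors
print(f"Development:                      {development:.0f} units")
print(f"Maintenance:                      {maintenance:.0f} units")
print(f"Correction of undetected errors:  {latent_error_correction:.0f} units "
      f"({latent_error_correction / total_lcc:.0%} of the total life-cycle cost)")
```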

A more systematic and rigorous approach to software testing is required. To be effective, this approach must be applied to all phases of the development process in a planned and coordinated manner, beginning at the earliest design stages and proceeding through operational testing of the integrated system. Early, detailed software T&E planning is critical to the successful development of a computer system.

Figure 17-1. The Error Avalanche (errors introduced at requirements definition, design, and build/programming propagate through test and integration into a system with known and unknown deficiencies)

17.4 SOFTWARE DEVELOPMENT PROCESS

Software engineering technologies used to produce operational software are key risk factors in a development program. The T&E program should help determine which of these technologies increase risk and have a life-cycle impact. A principal source of risk is the support software required to develop operational software. In terms of life-cycle impact, operational software problems are commonly associated with the difficulty in maintaining and supporting the software once deployed. Software assessment requires an analysis of the life-cycle impact, which varies depending on the technology used to design and implement the software. One approach to reducing long-term life-cycle risks is to use a commercial language and common hardware throughout the development and operation of the software. These life-cycle characteristics that affect operational capabilities must be addressed in the Test and Evaluation Master Plan (TEMP), and tests should be developed to identify problems caused by these characteristics. The technology used to design and implement the software may significantly affect software supportability and maintainability.

Figure 17-2. System Development Process (parallel hardware and software development tracks, from system requirements analysis and design through the SRR, SFR, SSR, PDR, CDR, and TRR reviews, fabrication/coding, HWCI and CSC/CSCI testing, to system integration and system DT&E)

The TEMP must sufficiently describe the acceptance criteria or software maturity metrics from the written specifications that will lead to operational effectiveness and suitability. The specifications must define the required software metrics to set objectives and thresholds for mission critical functions. Additionally, these metrics should be evaluated at the appropriate stage of system development rather than at some arbitrarily imposed milestone.
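As a minimal sketch of what evaluating such metrics against TEMP-specified values might look like (the metric names, threshold, and objective figures below are hypothetical, not taken from the guide):

```python
# Hypothetical software maturity metrics with threshold (minimum acceptable)
# and objective (desired) values, evaluated at a development review.
CRITERIA = {
    # metric name: (threshold, objective) -- higher is better in this sketch
    "mission_critical_requirements_verified_pct": (90.0, 100.0),
    "mean_time_between_software_faults_hrs": (50.0, 100.0),
}

def assess(observed: dict) -> dict:
    """Classify each observed metric against its threshold and objective."""
    results = {}
    for name, (threshold, objective) in CRITERIA.items():
        value = observed[name]
        if value < threshold:
            results[name] = "below threshold"
        elif value >= objective:
            results[name] = "meets objective"
        else:
            results[name] = "between threshold and objective"
    return results

print(assess({"mission_critical_requirements_verified_pct": 94.0,
              "mean_time_between_software_faults_hrs": 42.0}))
```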

17.5 T&E IN THE SOFTWARE LIFE CYCLE

Software testing is an iterative process executed at all development stages to examine program design and code to expose errors. Software test planning should be described in the TEMP with the same care as test planning for other system components (Figures 17-2, 17-3).

17.5.1 Testing Approach

The integration of software development into the overall acquisition process dictates a testing process consistent with the bottom-up approach taken with hardware development. The earliest stage of software testing is characterized by heavy human involvement in basic design and coding processes. Thus, human testing is defined as informal, non-computer-based methods of evaluating architectures, designs, and interfaces. It can consist of:

• Inspections: Programmers explain their work to a small group of peers with discussion and direct feedback on errors, inconsistencies, and omissions.

• Walk-through: A group of peers develop test cases to evaluate work to date and give direct feedback to the programmer.

• Desk Checking: A self evaluation is made by programmers of their own work. There is a low probability of identifying their errors of logic or coding.

• Peer Ratings: Mutually supportive, anonymous reviews are performed by groups of peers with collaborative evaluations and feedback.

• Design Reviews: Preliminary Design Reviews (PDRs) and Critical Design Reviews (CDRs) provide milestones in the development efforts that review development and evaluations to date. An Independent Verification and Validation (IV&V) contractor may facilitate the government's ability to give meaningful feedback.

Figure 17-3. Spiral Model for AIS Development Process (iterative cycles of determining objectives, alternatives, and constraints; evaluating alternatives and identifying and resolving risks; prototyping, design, code, unit test, and integration; and planning the next phase, through IOC and FOC delivery)

Once the development effort has matured beyond the benefits of human testing, computerized software-only testing may be appropriate. It is performed to determine the functionality of the software when tested as an entity or "build." Documentation control is essential so that test results are correlated with the appropriate version of the build. Software testing is usually conducted using some combination of "black box" and "white box" testing.

• Black Box: Functional testing of a software unit without knowledge of how the internal structure or logic will process the input to obtain the specified output. Within-boundary and out-of-boundary stimulants test the software's ability to handle abnormal events (see the sketch after this list). Most likely cases are tested to provide a reasonable assurance that the software will demonstrate specified performance. Even the simplest software designs rapidly exceed the capacity to test all alternatives.

• White Box: Structural testing of the internal logic and software structure provides an opportunity for more extensive identification and testing of critical paths. The process and objectives are otherwise very similar to black box testing.
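A minimal sketch of a black box test in the sense described above: only the specified interface is exercised, with within-boundary and out-of-boundary inputs, and no reference to the unit's internal logic. The function under test and its specification are hypothetical stand-ins.

```python
import unittest

def normalized_heading(raw_heading_deg: float) -> float:
    """Hypothetical unit under test: normalize a heading to 0-359.9 degrees.
    Its assumed specification accepts inputs between -720 and +720 degrees."""
    if not -720.0 <= raw_heading_deg <= 720.0:
        raise ValueError("heading outside specified input range")
    return raw_heading_deg % 360.0

class BlackBoxHeadingTests(unittest.TestCase):
    def test_within_boundary_inputs(self):
        # Nominal and boundary-adjacent stimuli checked against the specified output.
        self.assertAlmostEqual(normalized_heading(370.0), 10.0)
        self.assertAlmostEqual(normalized_heading(-90.0), 270.0)
        self.assertAlmostEqual(normalized_heading(720.0), 0.0)

    def test_out_of_boundary_input_is_rejected(self):
        # Abnormal stimulus: the specification calls for rejection, not a crash.
        with self.assertRaises(ValueError):
            normalized_heading(10_000.0)

if __name__ == "__main__":
    unittest.main()
```

A white box version of the same tests would additionally select inputs to drive each internal branch of the normalization logic.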

Testing should be performed from the bottom up. The smallest controlled software modules (computer software units) are tested individually. They are then combined or integrated and tested in larger aggregate groups or builds. When this process is complete, the software system is tested in its entirety. Obviously, as errors are found in the latter stages of the test program, a return to earlier portions of the development program to provide corrections is required. The cost impact of error detection and correction can be diminished using the bottom-up testing approach.
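A minimal sketch of that bottom-up ordering, using hypothetical stand-in units: the smallest units are exercised individually before the tests of the integrated build, so an integration failure can be isolated to a unit that has already passed (or failed) on its own.

```python
import unittest

# Hypothetical computer software units and their integrated build.
def ground_speed(air_speed: float, wind: float) -> float:
    return air_speed + wind                                   # unit A

def time_to_waypoint(distance_nm: float, speed_kts: float) -> float:
    return distance_nm / speed_kts                            # unit B

def eta_minutes(distance_nm: float, air_speed: float, wind: float) -> float:
    return 60.0 * time_to_waypoint(distance_nm, ground_speed(air_speed, wind))  # integrated build

class UnitLevelTests(unittest.TestCase):          # step 1: units in isolation
    def test_ground_speed(self):
        self.assertEqual(ground_speed(400.0, -50.0), 350.0)

    def test_time_to_waypoint(self):
        self.assertAlmostEqual(time_to_waypoint(175.0, 350.0), 0.5)

class IntegratedBuildTests(unittest.TestCase):    # step 2: the aggregated build
    def test_eta(self):
        self.assertAlmostEqual(eta_minutes(175.0, 400.0, -50.0), 30.0)

if __name__ == "__main__":
    # Bottom-up order: run the unit-level suite before the integrated-build suite.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(UnitLevelTests))
    suite.addTests(loader.loadTestsFromTestCase(IntegratedBuildTests))
    unittest.TextTestRunner(verbosity=2).run(suite)
```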

System-level testing can begin once all modules in the Computer Software Configuration Item (CSCI) have been coded and individually tested. A Software Integration Lab (SIL), with adequate machine time and appropriate simulations, will facilitate hardware simulation/emulation and the operating environment. If data analysis indicates proper software functioning, it is time to advance to a more complex and realistic test environment.

• Hot Bench Testing: Integration of the software released from the SIL for full-up testing with actual system hardware in a Hardware-in-the-Loop (HWIL) facility marks a significant advance in the development process. Close approximation of the actual operating environment should provide test sequences and stress needed to evaluate the effectiveness of the software system(s). Problems stimulated by the "noisy environment," interface problems, Electromagnetic Interference (EMI), and different electrical transients should surface. Good hardware and software test programs leading up to HWIL testing should aid in isolating problems to the hardware or software side of the system. Caution should be taken to avoid any outside stimuli that might trigger unrealistic responses.

• Field Testing: Development Test and Evaluation (DT&E) and Operational Test and Evaluation (OA (Operational Assessment), IOT&E) events must be designed to provide for data collection processes and instrumentation that will measure system responses and allow data analysts to identify the appropriate causes of malfunctions. Field testing should be rigorous, providing environmental stresses and mission profiles likely to be encountered in operational scenarios. Government software support facilities personnel tasked for future maintenance of the software system should be brought on board to familiarize them with the system operating characteristics and documentation. Their resultant expertise should be included in the software T&E process to assist in the selection of stimuli likely to expose software problems.

It is critical that adequate software T&E information be contained in documents such as TEMPs and test plans. The TEMP must define characteristics of critical software components that effectively address objectives and thresholds for mission critical functions. The Measures of Effectiveness (MOEs) must support the critical software issues. The test plan should specify the test methodologies that will be applied. Test methodologies consist of two components: the first is the test strategy that guides the overall testing effort, and the second is the testing technique that is applied within the framework of a test strategy.

Effective test methodologies require realistic software test environments and scenarios. The test scenarios must be appropriate for the test objectives; i.e., the test results must be interpretable in terms of software test objectives. The test scenarios and analysis should actually verify and validate accomplishment of requirements. Information assurance testing must be conducted on any system that collects, stores, transmits, or processes classified or unclassified information. In the case of Information Technology (IT) systems, including NSS, DT&E should support the DoD Information Technology Security Certification and Accreditation Process (DITSCAP) and Joint Interoperability Certification (JIC) processes.5 The test environments must be chosen based on a careful analysis of characteristics to be demonstrated and their relationship to the development, operational, and support environments. In addition, environments must be representative of those in which the software will be maintained. At Milestone C, for MAIS, the MDA shall approve, in coordination with the DOT&E, the quantity and location of sites for a limited deployment for IOT&E.6

17.5.2 Independent Verification and Validation

IV&V are risk-reducing techniques that are applied to major software development efforts. The primary purpose of IV&V is to ensure that software meets requirements and is reliable and maintainable. The IV&V is effective only if implemented early in the software development schedule. Requirements analysis and risk assessment are the most critical activities performed by IV&V organizations; their effectiveness is limited if brought on board a project after the fact. Often, there is a reluctance to implement IV&V because of the costs involved, but early implementation of IV&V will result in lower overall costs of error correction and software maintenance. As development efforts progress, IV&V involvement typically decreases. This is due more to the expense of continued involvement than to a lack of need. For an IV&V program to be effective, it must be the responsibility of an individual or organization external to the software development Program Manager (PM).

The application of the IV&V process to software development maximizes the maintainability of the fielded software system, while minimizing the cost of developing and fielding it. Maintenance of a software system falls into several major categories: corrective maintenance, modifying software to correct errors in operation; adaptive maintenance, modifying the software to meet changing requirements; and perfective maintenance, modifying the software to incorporate new features or improvements.

The IV&V process maximizes the reliability of the software product, which eases the performance of and minimizes the need for corrective maintenance. It attempts to maximize the flexibility of the software product, which eases the performance of adaptive and perfective maintenance. These goals are achieved primarily by determining at each step of the software development process that the software product completely and correctly meets the specific requirements determined at the previous step of development. This step-by-step, iterative process continues from the initial definition of system performance requirements through final acceptance testing.

The review of software documentation at each stage of development is a major portion of the verification process. The current documentation is a description of the software product at the present stage of development and will define the requirements laid on the software product at the following stage. Careful examination and analysis of the development documentation ensure that each step in the software design process is consistent with the previous step. Omissions, inconsistencies, or design errors can then be identified and corrected early in the development process.

Continuing participation in formal and informal design reviews by the IV&V organization maintains the communication flow between software system "customers" and developers, ensuring that software design and production proceed with minimal delays and misunderstandings. Frequent informal reviews, design and code walk-throughs, and audits ensure that the programming standards, software engineering standards, Software Quality Assurance (SQA), and configuration management procedures designed to produce a reliable, maintainable operational software system are followed throughout the process. Continuous monitoring of computer hardware resource allocation throughout the software development process also ensures that the fielded system has adequate capacity to meet operation and maintainability requirements.

The entire testing process, from the planning stage through final acceptance test, is also approached in a step-by-step manner by the IV&V process. At each stage of development, the functional requirements determine test criteria as well as design criteria for the next stage. An important function of the IV&V process is to ensure that the test requirements are derived directly from the performance requirements and are independent of design implementation. Monitoring of, participation in, and performance of the various testing and inspection activities by the IV&V contractor ensure that the developed software meets requirements at each stage of development.

Throughout the software development process, the IV&V contractor reviews any proposals for software enhancement or change, proposed changes in development baselines, and proposed solutions to design or implementation problems to ensure that the original performance requirements are not forgotten. An important facet of the IV&V contractor's role is to act as the objective third party, continuously maintaining the "audit trail" from the initial performance requirements to the final operational system.

17.6 SUMMARY

There is a useful body of software testing technologies that can be applied to testing of AIS and weapon system software. As a technical discipline, though, software testing is still maturing. There is a growing foundation of guidance documents to guide the PM in choosing one testing technique over another. One example is the U.S. Air Force Software Technology Support Center's (STSC's) Guidelines for Successful Acquisition and Management of Software-intensive Systems. The Air Force Operational Test and Evaluation Center (AFOTEC) has also developed a course on software OT&E. It is apparent that systematic T&E techniques are far superior to ad-hoc testing techniques. Implementation of an effective software T&E plan requires a set of strong technical and management controls. Given the increasing amount of AIS and weapon system software being acquired, there will be an increased emphasis on tools and techniques for software T&E. Another resource is the Software Program Manager's Network, which publishes guides on Best Practices in Software T&E, Vol. I & II (http://www.spmn.com).


ENDNOTES

1. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

2. DoD 5000.2-R, Mandatory Procedures for Major Defense Acquisition Programs (MDAPs) and Major Automated Information System (MAIS) Acquisition Programs, June 2001.

3. There is some indication that DoD Directive (DoDD) 8000.1, Management of DoD Information Resources and Information Technology, February 27, 2002, Defense Information Management Program, which provides guidance on AIS development, will be incorporated in a future change to the 5000 Series on acquisition management.

4. MIL-STD-498, Software Development and Documentation, December 4, 1994, rescinded; EIA 640 or J-Std-016 (Software Life Cycle Processes, Software Development, September 1995).

5. DoDD 4630.5, Interoperability and Supportability of Information Technology (IT) and National Security Systems (NSS), May 5, 2004, and Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 6212.01C, Interoperability and Supportability of Information Technology and National Security Systems, November 20, 2003.

6. DoDI 5000.2.


18
TESTING FOR VULNERABILITY AND LETHALITY


18.1 INTRODUCTION

This chapter addresses the need to explore the vulnerability and lethality aspects of a system through Test and Evaluation (T&E) practices and procedures. In particular, this chapter describes the legislatively mandated Live Fire Test Program, which has been established to evaluate the survivability and lethality of developing systems (Table 18-1). It also discusses the role of T&E in assessing a system's ability to perform in a nuclear combat environment. The discussion of testing for nuclear survivability is based primarily on information contained in the Nuclear Survivability Handbook for OT&E.1

18.2 LIVE FIRE TESTING

18.2.1 Background

In March 1984, the Office of the Secretary of Defense (OSD) chartered a joint T&E program designated "The Joint Live Fire Program." This program was to assess the vulnerabilities and lethality of selected U.S. and threat systems already fielded. The controversy over joint Live Fire Testing (LFT) of the Army's Bradley Fighting Vehicle System, subsequent congressional hearings, and media exposure resulted in provisions being incorporated in the National Defense Authorization Act (NDAA) of Fiscal Year (FY) 87. This act required an OSD-managed LFT program for major acquisition programs fitting certain criteria. Subsequent amendments to legislative guidance have dictated the current program. The Department of Defense (DoD) implementation of congressional guidance in the Defense Acquisition Guidebook requires that "covered systems, major munitions programs, missile programs, or Product Improvements (PIs) to these" (i.e., Acquisition Category (ACAT) I and II programs) must execute survivability and lethality testing before Full Rate Production (FRP). The legislation dealing with the term "covered system" has been clarified as a system "that DOT&E, acting for the Secretary of Defense [SECDEF], has determined to be: a major system within the meaning of that term in Title 10, United States Code (U.S.C.) 2302(5) that is (1) user-occupied and designed to provide some degree of protection to its occupants in combat; or (2) a conventional munitions program or missile program; or (3) a conventional munitions program for which more than 1,000,000 rounds are planned to be acquired; or (4) a modification to a covered system that is likely to affect significantly the survivability or lethality of such a system."

Table 18-1. Relationships Between Key Concepts

TERMINOLOGY      PERSPECTIVE   MEANING
SURVIVABILITY    DEFENSIVE     PROBABILITY OF ENGAGEMENT
EFFECTIVENESS    OFFENSIVE
VULNERABILITY    DEFENSIVE     PROBABILITY OF KILL GIVEN A HIT
LETHALITY        OFFENSIVE
SUSCEPTIBILITY   DEFENSIVE     PROBABILITY OF ENGAGEMENT

Source: Adapted from Live Fire Testing: Evaluating DoD's Programs, U.S. General Accounting Office, GAO/PEMD-87-17, August 1987, page 15.

The Secretary of Defense (SECDEF) has delegated the authority to waive the requirement for full-up, system-level Live Fire Test and Evaluation (LFT&E) to the Under Secretary of Defense (Acquisition, Technology, and Logistics) [USD(AT&L)] for ACAT ID programs and to the Component Acquisition Executive (CAE) for ACAT II programs, when such testing would be unreasonably expensive and impractical; the waiver must be granted before the system passes the program initiation milestone. An alternative vulnerability and lethality T&E program must still be accomplished. Programs subject to LFT or designated for oversight are listed on the OSD annual T&E oversight list. The DoD agent for management of the Live Fire Test program is the Director, Operational Test and Evaluation (DOT&E). This type of Development Test and Evaluation (DT&E) must be planned to start early enough in the development process to impact design and to provide timely test data for the OSD Live Fire Test Report required for the Full Rate Production Decision Review (FRPDR) and congressional committees. The Service-detailed Live Fire Test Plan must be reviewed and approved by the DOT&E, and LFT must be addressed in Part IV of the program Test and Evaluation Master Plan (TEMP). The OSD had previously published guidelines, elements of which have subsequently been incorporated into the latest revision to the 5000 Series Defense Acquisition Guidebook.

18.2.2 Live Fire Tests

There are varying types and degrees of live fire tests. The matrix in Table 18-2 illustrates the various possible combinations. Full-scale, full-up testing is usually considered to be the most realistic.

The importance of full-scale testing has been well demonstrated by the Joint Live Fire (JLF) tests. In one case, these tests contradicted earlier conclusions concerning the flammability of a new hydraulic fluid used in F-15 and F-16 aircraft. Laboratory tests had demonstrated that the new fluid was less flammable than the standard fluid. However, during the JLF tests, 30 percent of the shots on the new fluid resulted in fires, contrasted with 15 percent of the shots on the standard fluid.2

While much insight and valuable wisdom are to be obtained through the testing of components or subsystems, some phenomena are only observable when full-up systems are tested. The interaction of such phenomena has been termed "cascading damage."

Table 18-2. Types of Live Fire Testing

FULL SCALE
  Full-up loading: complete system with combustibles (e.g., Bradley Phase II tests, aircraft "proof" tests).
  Inert loading (a): complete system with no combustibles (e.g., tests of new armor on actual tanks, aircraft flight control tests).

SUB SCALE
  Full-up loading: components and subcomponents with combustibles (e.g., fuel cell tests, behind armor, mock-up aircraft, engine fire tests).
  Inert loading (a): components and subcomponents, covering structures, terminal ballistics, munitions performance, behind-armor tests, and warhead characterization (e.g., armor/warhead interaction tests, aircraft component structural tests).

a. In some cases, targets are "semi-inert," meaning some combustibles are on board, but not all (example: tests of complete tanks with fuel and hydraulic fluid, but dummy ammunition).

Source: Live Fire Testing: Evaluating DoD's Program, General Accounting Office, GAO/PEMD-87-17, August 1987.



damage.” Such damage is a result of the syner-gistic damage mechanisms that are at work inthe “real world” and likely to be found duringactual combat. LFT provides a way of examiningthe damages inflicted not only on materiel butalso on personnel. The crew casualty problem isan important issue that the LFT program isaddressing. The program provides an opportunityto assess the effects of the complex environmentsthat crews are likely to encounter in combat (e.g.,fire, toxic fumes, blunt injury shock and acousticinjuries).3

18.2.3 Use of Modeling and Simulation(M&S)

Survivability and lethality assessments havetraditionally relied largely on the use of model-ing and simulation techniques. The LFT Programdoes not replace the need for such techniques; infact, the LFT Guidelines issued by OSD in May1987 (Figure 18-1) required that no shots beconducted until pre-shot model predictions weremade concerning the expected damage. Such pre-dictions are useful for several reasons. First, they

Figure 18-1. Live Fire Test and Evaluation Planning Guide

[Flowchart: identification as an LFT&E candidate; consideration of issues (what, when, where, why, who) to support development of the LFT&E strategy, including identification and acquisition of missing information; development of the LFT&E strategy (TEMP) with OSD/LFT&E review and comment and OSD approval; development of the detailed LFT&E plan with OSD/LFT&E review and comment and OSD approval; conduct of tests with LFT&E witness; test report with OSD/LFT&E review; OSD assessment; reporting to the SECDEF and congressional committees.]


First, they assist in the test planning process. If a model predicts that no damage will be inflicted, test designers and planners should reexamine the selection of the shot lines and/or reassess the accuracy of the threat representation. Second, pre-shot model predictions provide the Services with the opportunity to validate the accuracy of the models by comparing them with actual LFT results. At the same time, the LFT program reveals areas of damage that may be absent from existing models and simulations. Third, pre-shot model predictions can be used to help conserve scarce target resources. For example, models can be used to determine a sequence of shots that provides for the less damaging shots to be conducted first, followed by the more catastrophic shots resulting in maximum target damage.
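The shot-sequencing idea lends itself to a simple illustration. The sketch below is hypothetical (the shot lines and predicted damage scores are invented, not drawn from any LFT program); it merely orders planned shots so the least damaging predicted engagements are fired first, preserving the target for the most catastrophic shots.

```python
# Hypothetical illustration only: order planned live fire shots so that
# shots with the lowest model-predicted damage are conducted first,
# conserving the target for the most damaging (catastrophic) shots.

from dataclasses import dataclass

@dataclass
class PlannedShot:
    shot_line: str           # identifier for the threat/aspect/shot line
    predicted_damage: float  # pre-shot model prediction, 0.0 (none) to 1.0 (catastrophic)

def sequence_shots(shots):
    """Return shots sorted from least to most predicted damage."""
    return sorted(shots, key=lambda s: s.predicted_damage)

if __name__ == "__main__":
    plan = [
        PlannedShot("turret ring, 30 deg", 0.85),
        PlannedShot("road wheel, 90 deg", 0.10),
        PlannedShot("fuel cell, 45 deg", 0.55),
    ]
    for shot in sequence_shots(plan):
        print(f"{shot.shot_line}: predicted damage {shot.predicted_damage:.2f}")
```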

18.2.4 Live Fire Test Best Practices

The Defense Acquisition Guidebook states that plans for LFT must be included in the TEMP. Key points covered in the LFT guidelines include the following:

• The LFT&E Detailed T&E Plan is the basic planning document used by OSD and the Services to plan, review, and approve LFT&E. Services will submit the plan to the DOT&E for comment at least 30 days prior to test initiation.

• The LFT&E plan must contain general information on the system's required performance, operational and technical characteristics, critical test objectives, and the evaluation process.

• Each LFT&E plan must include testing of complete systems. A limited set of live fire tests may involve production components configured as a subsystem before full-up testing.

• A Service report must be submitted within 120 days of the completion of the live fire test. The report must include the firing results, test conditions, limitations, and conclusions, and be submitted in classified and unclassified form.

• Within 45 days of receipt of the Service report, a separate Live Fire Test Report4 will be produced by the DOT&E, approved by the Secretary of Defense, and transmitted to Congress. The conclusions of the report will be independent of the conclusions of the Service report. Reporting on LFT&E may be included in the weapon system's Beyond Low Rate Initial Production (BLRIP) Report completed by the DOT&E.

• The Congress shall have access to all live fire test data and all live fire test reports held by or produced by the Secretary of the concerned Service or by OSD.

• The costs of all live fire tests shall be paid from funding for the system being tested. In some instances, the Deputy DOT&E-Live Fire may elect to supplement such funds for the acquisition of targets or target simulators, although the ultimate responsibility rests with the concerned Service.

The Survivability, Vulnerability Information Analysis Center (SURVIAC) is the DoD focal point for nonnuclear survivability/vulnerability data, information, methodologies, models, and analysis relating to U.S. and foreign aeronautical and surface systems. Data on file for conventional missiles and guns includes information on acquisition, detection, tracking, launch, fly-out and fuzing characteristics, countermeasures and counter-countermeasures, and terminal effects. Other weapons systems information includes physical and functional characteristics; design, performance, and operational information; acoustics, infrared, optical, electro-optical, and radar signature; combat damage and repair; and system, subsystem, and component probability of kill given a hit (pk/h) functions. SURVIAC is sponsored by the Joint Logistics Commanders' Joint Aircraft Survivability Program Office (http://jas.jcs.mil) and the Joint Technical Coordinating Group on Munitions Effectiveness (http://jtcg.amsaa.army.mil/index.html).

18.3 TESTING FOR NUCLEAR HARDNESS AND SURVIVABILITY

18.3.1 Background

Nuclear survivability must be incorporated into the design, acquisition, and operation of all systems that must perform critical missions in a nuclear environment. Nuclear survivability is achieved through a combination of four methods: hardness, avoidance, proliferation, and reconstitution. Hardness allows a system to physically withstand a nuclear attack. Avoidance encompasses measures taken to avoid encountering a nuclear environment. Proliferation involves having sufficient systems to compensate for probable losses. Reconstitution includes the actions taken to repair or resupply damaged units in time to complete a mission satisfactorily.

There is a wide variety of possible effects from a nuclear detonation. They include: Electromagnetic Pulse (EMP), ionizing radiation, thermal radiation, blast, shock, dust, debris, blackout, and scintillation. Each weapon system is susceptible to some but not all of these effects. Program Managers (PMs) and their staff must identify the effects that may have an impact on the system under development and manage the design, development, and testing of the system in a manner that minimizes degradation. The variety of possible nuclear effects is described more fully in the Nuclear Survivability Handbook for Air Force OT&E.5

18.3.2 Assessing Nuclear Survivability Throughout the System Acquisition Cycle

The PM must ensure that nuclear survivability issues are addressed throughout the system acquisition cycle. During assessment of concepts, the survivability requirements stated in the Service requirements document should be verified, refined, or further defined. During the system's early design stages, tradeoffs between hardness levels and other system characteristics (such as weight, decontaminability, and compatibility) should be described quantitatively. Tradeoffs between hardness, avoidance, proliferation, and reconstitution as methods for achieving survivability should also be considered at this time. During advanced engineering development, the system must be adequately tested to confirm that hardness objectives, criteria, requirements, and specifications are met. Plans for Nuclear Hardness and Survivability (NH&S) testing should be addressed in the TEMP. The appropriate commands must make provision for test and hardness surveillance equipment and procedures so required hardness levels can be maintained once the system is operational.

During FRP, deployment, and operational support, system hardness is maintained through an active hardness assurance program. Such a program ensures that the end product conforms to hardness design specifications and that hardness aspects are reevaluated before any retrofit changes are made to existing systems.

Once a system is operational, a hardness surveillance program may be implemented to maintain system hardness and to identify any further evaluation, testing, or retrofit changes required to ensure survivability. A hardness surveillance program consists of a set of scheduled tests and inspections to ensure that a system's designed hardness is not degraded through operational use, logistics support, maintenance actions, or natural causes.

18.3.3 Test Planning

The Nuclear Survivability Handbook for Air Force OT&E describes the following challenges associated with NH&S testing:

(1) The magnitude and range of effects from a nuclear burst are much greater than those from conventional explosions that may be used to simulate nuclear bursts. Nuclear detonations have effects not found in conventional explosions. The intense nuclear radiation, blast, shock, thermal, and EMP fields are difficult to simulate. In addition, systems are often tested at stress levels that are either lower than those established by the criteria or lower than the level needed to cause damage to the system.

(2) The yields and configurations for underground testing are limited. It is generally not possible to test all relevant effects simultaneously or to observe possibly important synergism between effects.

(3) System-level testing for nuclear effects is normally expensive, takes years to plan and conduct, and requires specialized expertise. Often, classes of tests conducted early in the program are not repeated later. Therefore, operational requirements should be folded into these tests from the start, often early in the acquisition process. This mandates a more extensive, combined DT&E/OT&E (Operational Test and Evaluation) test program than normally found in other types of testing.

PMs and Test Managers (TMs) must remain sensitive to the ambiguities involved in testing for nuclear survivability. For example, there is no universal quantitative measure of survivability, and statements of survivability may lend themselves to a variety of interpretations. Moreover, it can be difficult to combine system vulnerability estimates for various nuclear effects into an assessment of overall survivability. As a result, PMs/TMs must exercise caution when developing test objectives and specifying measures of merit related to nuclear survivability.

18.3.4 Test Execution

For NH&S testing, Development Test (DT) and Operational Test (OT) efforts are often combined because it is not possible to test in an operational nuclear environment. The use of an integrated DT/OT program requires early and continuous dialogue between the two test communities so each understands the needs of the other and maximum cooperation in meeting objectives is obtained.

T&E techniques available to validate the nuclear survivability aspects of systems and subsystems include underground nuclear testing, environmental simulation (system level, subsystem level, and component level), and analytical simulation. Table 18-3 outlines the major activities relevant to the assessment of NH&S and the phases of the acquisition cycle in which they occur.

Table 18-3. Nuclear Hardness and Survivability Assessment Activities

CONCEPT ASSESSMENT
• PREPARATION OF TEST AND EVALUATION MASTER PLAN (TEMP) THAT INCLUDES INITIAL PLANS FOR NUCLEAR HARDNESS AND SURVIVABILITY (NH&S) TESTS
  – IDENTIFICATION OF NH&S REQUIREMENTS IN VERIFIABLE TERMS
  – IDENTIFICATION OF SPECIAL NH&S TEST FACILITY REQUIREMENTS WITH EMPHASIS ON LONG LEAD TIME ITEMS
• DEVELOPMENT OF NUCLEAR CRITERIA

PROTOTYPE TESTING
• INCREASE TEST PLANNING
• TEMP UPDATE
• CONDUCT OF NH&S TRADE STUDIES
  – NH&S REQUIREMENTS VERSUS OTHER SYSTEM REQUIREMENTS
  – ALTERNATE METHODS FOR ACHIEVING NH&S
• CONDUCT OF LIMITED TESTING
  – PIECE-PART HARDNESS TESTING
  – DESIGN CONCEPT TRADE-OFF TESTING
  – TECHNOLOGY DEMONSTRATION TESTING
• DEVELOPMENT OF PERFORMANCE SPECIFICATIONS THAT INCLUDE QUANTITATIVE HARDNESS LEVELS

ENGINEERING DEVELOPMENT MODEL (EDM)
• FIRST OPPORTUNITY TO TEST PROTOTYPE HARDWARE
• TEMP UPDATE
• DEVELOPMENT OF NUCLEAR HARDNESS DESIGN HANDBOOK
  – PRIOR TO PRELIMINARY DESIGN REVIEW
  – USUALLY PREPARED BY NUCLEAR EFFECTS SPECIALTY CONTRACTOR
• CONDUCT OF TESTING
  – PRE-CRITICAL DESIGN REVIEW (CDR) DEVELOPMENT AND QUALIFICATION TEST
  – DEVELOPMENTAL TESTING ON NUCLEAR-HARDENED PIECE PARTS, MATERIALS, CABLING, AND CIRCUITS
  – NH&S BOX AND SUBSYSTEM QUALIFICATION TESTS (POST-CDR)
  – ACCEPTANCE TESTS TO VERIFY HARDWARE MEETS SPECIFICATIONS (POST-CDR, PRIOR TO FIRST DELIVERY)
  – SYSTEM-LEVEL HARDNESS ANALYSIS (USING BOX AND SUBSYSTEM TEST RESULTS)
  – SYSTEM-LEVEL NH&S TEST

PRODUCTION (LRIP, FULL RATE), DEPLOYMENT AND OPERATIONAL SUPPORT
• IMPLEMENTATION OF PROGRAM TO ENSURE SYSTEM RETAINS ITS NH&S PROPERTIES
• SCREENING OF PRODUCTION HARDWARE FOR HARDNESS
• IMPLEMENTATION BY USER OF PROCEDURES TO ENSURE SYSTEM'S OPERATION, LOGISTICS SUPPORT AND MAINTENANCE DO NOT DEGRADE HARDNESS FEATURES
• REASSESSMENT OF SURVIVABILITY THROUGHOUT SYSTEM LIFE CYCLE

18.4 SUMMARY

The survivability and lethality aspects of certain systems must be evaluated through live fire tests. These tests are used to provide insights into the weapon system's ability to continue to operate and fight after being hit by enemy threat systems. LFT provides a way of examining the damage inflicted not only on materiel but also on personnel, and it provides an opportunity to assess the effects of complex environments that crews are likely to encounter in combat.

Nuclear survivability must be carefully evaluated during the system acquisition cycle. Tradeoffs between hardness levels and other system characteristics, such as weight, speed, range, and cost, must be evaluated. Nuclear survivability testing is difficult, and the evaluation of test results may result in a variety of interpretations. Therefore, PMs must exercise caution when developing test objectives related to nuclear survivability.


ENDNOTES

1. Air Force Operational Test and Evaluation Center (AFOTEC), Nuclear Survivability Handbook for Air Force OT&E.

2. DoD Instruction (DoDI) 5030.55, Joint AEC-DoD Nuclear Weapons Development Procedures, January 4, 1974.

3. Office of the Deputy Director, Test and Evaluation (Live Fire Test), Live Fire Test and Evaluation Planning Guide, June 1989.

4. Defense Acquisition Guidebook, Appendix 3. http://akss.dau.mil/DAG/.

5. AFOTEC.


19
LOGISTICS INFRASTRUCTURE T&E

19.1 INTRODUCTION

In all materiel acquisition programs, the logistics support effort begins in the capabilities assessments of mission needs, leads to the development of the Initial Capabilities Document (ICD) before program initiation, continues throughout the acquisition cycle, and extends past the deployment phase. Logistics support system testing must, therefore, extend over the entire acquisition cycle of the system and be carefully planned and executed to ensure the readiness and supportability of the system. Discussion of Performance Based Logistics (PBL) can be found at the Defense Acquisition University (DAU) Web site (http://www.dau.mil), Acquisition, Technology, and Logistics (AT&L) Knowledge Sharing Site (AKSS) for the Logistics Community of Practice (CoP). This chapter covers the development of logistics support test requirements and the conduct of supportability assessments to ensure that readiness and supportability objectives are identified and achieved. The importance of the logistics manager's participation in the Test and Evaluation Master Plan (TEMP) development process should be stressed. The logistics manager must ensure the logistics system Test and Evaluation (T&E) objectives are considered and that adequate resources are available for logistics support system T&E.

Logistics system support planning is a disciplined, unified, and iterative approach to the management and technical activities necessary to integrate support considerations into system and equipment design; develop support requirements that are related consistently to readiness objectives, design, and each other; acquire the required support; and provide the required support during deployment and operational support at affordable cost.

Logistics support systems are usually categorized into 10 specific components, or elements:

(1) Maintenance planning

(2) Manpower and personnel

(3) Supply support

(4) Support equipment

(5) Technical data

(6) Training and training support

(7) Computer resources support

(8) Facilities

(9) Packaging, handling, storage, and transportation

(10) Design interface.

19.2 PLANNING FOR LOGISTICS SUPPORT SYSTEM T&E

19.2.1 Objectives of Logistics System T&E

The main objective of logistics system T&E is to verify that the logistics support being developed for the materiel system is capable of meeting the required objectives for peacetime and wartime employment. This T&E consists of the usual Development Test and Evaluation (DT&E) and Operational Test and Evaluation (OT&E) but also includes post-deployment supportability assessments. The formal DT&E and OT&E begin with subcomponent assembly and prototype testing, and continue throughout advanced engineering development, production, deployment, and operational support. Figure 19-1 describes the specific Development Test (DT), Operational Test (OT), and supportability assessment objectives for each acquisition phase. "The PM shall apply Human Systems Integration (HSI) to optimize total system performance (hardware, software, and human), operational effectiveness, and suitability, survivability, safety, and affordability."1

Figure 19-1. Logistics Supportability Objectives in the T&E Program

[Matrix of developmental, operational, and supportability assessment objectives for logistics T&E in each acquisition phase: Concept Refinement and Technology Development, System Development and Demonstration, Production and Deployment, and Operations and Support.]

19.2.2 Planning for Logistics Support System T&E

19.2.2.1 Logistics Support Planning

“The PM shall have a comprehensive plan forHSI in place early in the acquisition process tooptimize total system performance, minimizeTotal Ownership Costs [TOCs], and ensure thatthe system is built to accommodate the charac-teristics of the user population that will operate,maintain, and support the system.” HSI includesdesign for: human factors engineering; person-nel; habitability; manpower; training; Environ-ment, Safety, and Occupational Health (ESOH);and survivability.2 The logistics support managerfor a materiel acquisition system is responsiblefor developing the logistics support planning,which documents planning for and implement-ing the support of the fielded system. It is ini-tially prepared during preparation for program ini-tiation, and progressively developed in more de-tail as the system moves through the acquisitionphases. Identification of the specific logisticssupport test issues related to the individual logis-tics elements and the overall system support andreadiness objectives are included.

The logistics support manager is assisted through-out the system’s development by a Logistics Sup-port Integrated Product Team (IPT), which isformed early in the acquisition cycle. The

Logistics Support IPT is a coordination/advisorygroup composed of personnel from the ProgramManagement Office (PMO), the using command,and other commands concerned with acquisitionactivities such as logistics, testing, and training.

19.2.2.2 Supportability Assessment Planning

Based upon suitability objectives, the logistics support manager, in conjunction with the system's Test Manager (TM), develops suitability assessment planning. This planning identifies the testing approach and the evaluation criteria that will be used to assess the supportability-related design requirements (e.g., Reliability and Maintainability (R&M)) and the adequacy of the planned logistics support resources for the materiel system. Development of the suitability T&E planning begins during concept refinement; the planning is then updated and refined in each successive acquisition phase. The logistics support manager may apply the best practices of logistics support analysis as described in Military Standard (MIL-STD)-1388-1A.3

The T&E strategy is formulated, T&E program objectives and criteria are established, and required test resources are identified. The logistics support manager ensures that the T&E strategy is based upon quantified supportability requirements and addresses supportability issues, including those with a high degree of associated risk. Also, the logistics support manager ensures that the necessary quantities and types of data will be collected during system development and after deployment of the system to validate the various T&E objectives. The T&E objectives and criteria must provide a basis that ensures critical supportability issues and requirements are resolved or achieved within acceptable confidence levels.
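The guide does not prescribe a statistical method for tying criteria to confidence levels, but one widely used relationship, offered here only as an illustration, is the zero-failure (success-run) formula n >= ln(1 - C) / ln(R), which gives the number of failure-free trials needed to demonstrate a reliability R at confidence C.

```python
# Illustrative only; the guide does not mandate this method. The classic
# success-run relationship gives the number of failure-free trials needed
# to demonstrate a reliability R at confidence C: n >= ln(1 - C) / ln(R).

import math

def trials_for_demonstration(reliability: float, confidence: float) -> int:
    """Trials required with zero failures to show `reliability` at `confidence`."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

if __name__ == "__main__":
    # e.g., demonstrating 0.90 mission reliability at 80 percent confidence
    print(trials_for_demonstration(0.90, 0.80))  # -> 16 trials
```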

19.2.2.3 Test and Evaluation Master Plan

The Program Manager (PM) must include suitability T&E information in the TEMP as specified in the Defense Acquisition Guidebook. The input, derived from logistics supportability planning with the assistance of the logistics support manager and the tester, includes descriptions of required operational suitability, specific plans for testing logistics supportability, and required testing resources. It is of critical importance that all key test resources required for ILS testing (DT, OT, and post-deployment supportability) be identified in the TEMP because the TEMP provides a long-range alert upon which test resources are budgeted and obtained for testing.

19.2.3 Planning Guidelines for Logistics Support System T&E

The following guidelines were selected from those listed in the Acquisition Logistics Guide:4

(1) Develop a test strategy for each logistics support-related objective. Ensure that OT&E planning encompasses all logistics support elements. The general objectives shown in Figure 19-1 must be translated into detailed quantitative and qualitative requirements for each acquisition phase and each T&E program.

(2) Incorporate logistics support testing requirements (where feasible) into the formal DT&E/OT&E plans.

(3) Identify logistics support T&E that will be performed outside of the normal DT&E and OT&E. Include subsystems that require off-system evaluation.

(4) Identify all required resources, including test articles and logistics support items for formal DT/OT and separate logistics support system testing (participate with the test planner).

(5) Ensure establishment of an operationally realistic test environment, to include personnel representative of those who will eventually operate and maintain the fielded system. These personnel should be trained for the test using prototypes of the actual training courses and devices. They should be supplied with drafts of all technical manuals and documentation that will be used with the fielded system.

(6) Ensure planned OT&E will provide sufficient data on high-cost and high-maintenance burden items (e.g., for high-cost critical spares, early test results can be used to reevaluate selection).

(7) Participate early and effectively in the TEMP development process to ensure the TEMP includes critical logistics T&E and the test funds designated for it in program and budget documents.

(8) Identify the planned utilization of all data collected during the assessments to avoid mismatching of data collection and information requirements.

Additional guidance may be found in the Logistics Test and Evaluation Handbook developed by the 412th Logistics Group, Edwards Air Force Base, California.

19.3 CONDUCTING LOGISTICS SUPPORT SYSTEM T&E

19.3.1 The Process

The purposes of logistics support system T&E are to measure the supportability of a developing system throughout the acquisition process, to identify supportability deficiencies and potential corrections/improvements as test data become available, and to assess the operational suitability of the planned support system. It also evaluates the system's ability to achieve planned readiness objectives for the system/equipment being developed. Specific logistics support system T&E tasks (guidance in MIL-STD-1388-1A) include:5


• Analysis of test results to verify achievement of specified supportability requirements;

• Determination of improvements in supportability and supportability-related design parameters needed for the system to meet established goals and thresholds;

• Identification of areas where established goals and thresholds have not been demonstrated within acceptable confidence levels;

• Development of corrections for identified supportability problems such as modifications to hardware, software, support plans, logistics support resources, or operational tactics;

• Projection of changes in costs, readiness, and logistics support resources due to implementation of corrections;

• Analysis of supportability data from the deployed system to verify achievement of the established goals and thresholds; and where operational results deviate from projections, determination of the causes and corrective actions.

Logistics support system T&E may consist of a series of logistics demonstrations and assessments that are usually conducted as part of system performance tests. Special end-item equipment tests are rarely conducted solely for logistics parameter evaluation.

19.3.2 Reliability and Maintainability

System availability is generally considered to be composed of two major system characteristics: Reliability and Maintainability (R&M). Reliability, Availability, and Maintainability (RAM) requirements should be based on operational requirements and Life Cycle Cost (LCC) considerations; stated in quantifiable, operational terms; measurable during developmental and operational test and evaluation; and defined for all elements of the system, including support and training equipment. Relevant definitions from the DAU Glossary:6

Reliability (R) is the ability of a system and its parts to perform its mission without failure, degradation, or demand on the support system. A common Measure of Effectiveness (MOE) has been Mean Time Between Failure (MTBF).

Maintainability (M) is the ability of an item to be retained in or restored to specified condition when maintenance is performed by personnel having specific skill levels, using prescribed procedures and resources, at each prescribed level of maintenance and repair. A common MOE has been Mean Time To Repair (MTTR).

Operational Reliability and Maintainability Value is any measure of reliability or maintainability that includes the combined effects of item design, quality, installation, environment, operation, maintenance, and repair.

The R&M program objectives are to be defined as system parameters early in the development process. They will be used as evaluation criteria throughout the design, development, and production processes. R&M objectives should be translated into quantifiable contractual terms and allocated through the system design hierarchy. An understanding of how this allocation affects testing operating characteristics below system level can be found in T&E of System Reliability, Availability and Maintainability.7 This is especially important to testing organizations expected to make early predictions of system performance. Guidance on testing reliability may also be found in MIL-STD-781.8
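One common way to reason about allocation through the design hierarchy, shown here only as a sketch under the assumption of independent subsystems in series with exponentially distributed failures, is to roll subsystem failure rates up to a system MTBF; the subsystem values below are invented.

```python
# Sketch of a series-system roll-up, assuming independent subsystems with
# exponentially distributed failures (an assumption, not a requirement of
# the guide): system failure rate = sum of subsystem failure rates.

def system_mtbf(subsystem_mtbfs):
    """System MTBF in hours for subsystems in series."""
    return 1.0 / sum(1.0 / m for m in subsystem_mtbfs)

if __name__ == "__main__":
    # hypothetical allocation: three subsystems with MTBFs of 500, 800, and 1,200 hours
    print(round(system_mtbf([500.0, 800.0, 1200.0]), 1))  # about 244.9 hours
```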

Reliability, Availability, and Maintainability (also known as RAM) are requirements imposed on acquisition systems to insure they are operationally ready for use when needed, will successfully perform assigned functions, and can be economically operated and maintained within the scope of logistics concepts and policies. RAM development programs are applicable to materiel systems, test measurement and diagnostic equipment, training devices, and facilities supporting weapon systems.

Availability is a measure of the degree to which an item is in an operable state and can be committed at the start of a mission when the mission is called for at an unknown (random) point in time.

Availability commonly uses different MOEs based on the maturity of the weapon system technology. Achieved Availability (Aa) is measured for the early prototype and EDM systems developed by the system contractor when the system is not operating in its normal environment. Inherent Availability (Ai) is measured with respect only to operating time and corrective maintenance, ignoring standby/delay time and mean logistics delay time. It may be measured during DT&E or when testing in the contractor's facility under controlled conditions. Operational Availability (Ao), measured for mature systems that have been deployed in realistic operational environments, is the degree to which a piece of equipment works properly when it is required for a mission. It is calculated by dividing uptime by uptime plus all downtime. The calculation is sensitive to low utilization rates for equipment. Ao is the quantitative link between readiness objectives and supportability.9
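A minimal sketch of the availability arithmetic described above, using assumed uptime, downtime, MTBF, and MTTR values rather than data from any real program:

```python
# Minimal availability arithmetic, using assumed numbers for illustration.
# Ao divides uptime by uptime plus all downtime; Ai considers only
# operating time and corrective maintenance (MTBF and MTTR).

def operational_availability(uptime_hours: float, downtime_hours: float) -> float:
    return uptime_hours / (uptime_hours + downtime_hours)

def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    # hypothetical field data: 1,800 hours up, 200 hours down (all causes)
    print(f"Ao = {operational_availability(1800, 200):.2f}")   # 0.90
    # hypothetical design values: MTBF 300 hours, MTTR 4 hours
    print(f"Ai = {inherent_availability(300, 4):.3f}")         # 0.987
```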

19.3.2.1 Reliability

Guidelines for reliability evaluation are to be published in a non-government standard to replace MIL-STD-785:

• Environmental Stress Screening (ESS) is a test, or series of tests, during engineering development specifically designed to identify weak parts or manufacturing defects. Test conditions should simulate failures typical of early field service rather than provide an operational life profile.

• Reliability Development/Growth Testing (RDT/RGT) is a systematic engineering process of Test-Analyze-Fix-Test (TAFT) where equipment is tested under actual, simulated, or accelerated environments. It is an iterative methodology intended to rapidly and steadily improve reliability. Test articles are usually subjected to ESS prior to beginning RDT/RGT to eliminate those with manufacturing defects.

• Reliability Qualification Test (RQT) is to verify that threshold reliability requirements have been met before items are committed to production. A statistical test plan is used to predefine criteria that will limit government risk (a probability-of-acceptance sketch follows this list). Test conditions must be operationally realistic.

• Production Reliability Acceptance Test (PRAT) is intended to simulate in-service use of the delivered item or production lot. "Because it must provide a basis for determining contractual compliance, and because it applies to the items actually delivered to operational forces, PRAT must be independent of the supplier if at all possible." PRAT may require expensive test facilities, so 100 percent sampling is not recommended.
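The sketch below illustrates the kind of predefined acceptance criteria mentioned in the RQT entry, assuming a hypothetical fixed-duration test plan with exponentially distributed failures; the plan parameters (2,000 hours, accept on two or fewer failures) and the MTBF values are invented for the example.

```python
# Illustrative fixed-duration reliability test plan, assuming exponentially
# distributed failures: accept if no more than `max_failures` occur in
# `test_hours`. The probability of acceptance at a given true MTBF follows
# the Poisson distribution; evaluating it at the lower test MTBF gives the
# consumer's (government) risk, and at the target MTBF the producer's risk.

import math

def prob_accept(test_hours: float, max_failures: int, true_mtbf: float) -> float:
    expected = test_hours / true_mtbf
    return sum(math.exp(-expected) * expected**k / math.factorial(k)
               for k in range(max_failures + 1))

if __name__ == "__main__":
    T, c = 2000.0, 2  # hypothetical plan: 2,000 hours, accept on two or fewer failures
    print(f"consumer risk ~ {prob_accept(T, c, 500.0):.2f}")       # true MTBF 500 h (well below target)
    print(f"producer risk ~ {1 - prob_accept(T, c, 2000.0):.2f}")  # true MTBF 2,000 h (at target)
```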

19.3.2.2 Maintainability

Maintainability design factors and test/demonstration requirements used to evaluate maintainability characteristics must be based on program objectives and thresholds. Areas for evaluation might include:10

• Accessibility: Assess how easily the item can be repaired or adjusted.

• Visibility: Assess the ability/need to see the item being repaired.

• Testability: Assess the ability to detect and isolate system faults to the faulty replaceable assembly level.

• Complexity: Assess the impact of the number, location, and characteristics (standard or special purpose) on system maintenance.

• Interchangeability: Assess the level of difficulty encountered when failed or malfunctioning parts are removed or replaced with an identical part not requiring recalibration.

A true assessment of system maintainability generally must be developed at the system level under operating conditions and using production configuration hardware. Therefore, a maintainability demonstration should be conducted prior to Full Rate Production (FRP).11
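A maintainability demonstration of the kind described above usually reduces to comparing observed corrective maintenance times against the MTTR threshold. The sketch below uses invented task times and an assumed 0.5-hour requirement; neither comes from the guide.

```python
# Hypothetical maintainability demonstration scoring: compare the observed
# mean corrective maintenance time against the MTTR requirement.

import statistics

def mttr_demo(task_times_hours, required_mttr_hours):
    observed = statistics.mean(task_times_hours)
    return observed, observed <= required_mttr_hours

if __name__ == "__main__":
    times = [0.3, 0.4, 0.7, 0.5, 0.6, 0.4]   # assumed demonstration task times (hours)
    observed, passed = mttr_demo(times, required_mttr_hours=0.5)
    print(f"observed MTTR = {observed:.2f} h, meets requirement: {passed}")
```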

19.3.3 T&E of System Support Package

The T&E of the support for a materiel system requires a system support package consisting of spares, support equipment, technical documents and publications, representative personnel, any peculiar support requirements, and the test article itself; in short, all of the items that would eventually be required when the system is operational. This complete support package must be at the test site before the test is scheduled to begin. Delays in the availability of certain support items could prevent the test from proceeding on schedule. This could be costly due to onsite support personnel on hold or tightly scheduled system ranges and expensive test resources not being properly utilized. Also, it could result in the test proceeding without conducting the complete evaluation of the support system. The logistics support test planner must ensure that the required personnel are trained and available, the test facility scheduling is flexible enough to permit normal delays, and the test support package is on site, on time.

19.3.4 Data Collection and Analysis

The logistics support manager must coordinate with the testers to ensure that the methods used for collection, storage, and extraction of logistics support system T&E data are compatible with those used in testing the materiel system. As with any testing, the test planning must ensure that all required data are identified, that the data are sufficient to evaluate a system's readiness and supportability, and that plans are made for a data management system capable of the data classification, storage, retrieval, and reduction necessary for statistical analysis. Large statistical sample sizes may require a common database that integrates contractor, DT&E, and OT&E data so required performance parameters can be demonstrated.
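As one illustration of why a common database matters, the sketch below pools invented contractor, DT&E, and OT&E failure records and computes a point estimate of MTBF along with a one-sided lower confidence bound using the standard chi-square method for a time-terminated test; the records and the 80 percent confidence level are assumptions, not values from the guide.

```python
# Illustration of pooling failure data from contractor, DT&E, and OT&E
# sources into one database and computing a point estimate of MTBF plus a
# one-sided lower confidence bound (time-terminated test, chi-square method).

from scipy.stats import chi2

records = [
    {"source": "contractor", "hours": 1200.0, "failures": 3},
    {"source": "DT&E",       "hours":  900.0, "failures": 2},
    {"source": "OT&E",       "hours":  600.0, "failures": 2},
]

total_hours = sum(r["hours"] for r in records)
total_failures = sum(r["failures"] for r in records)

point_mtbf = total_hours / total_failures
# lower bound = 2T / chi-square quantile at the chosen confidence, 2r + 2 degrees of freedom
lower_bound = 2.0 * total_hours / chi2.ppf(0.80, 2 * total_failures + 2)

print(f"pooled: {total_hours:.0f} hours, {total_failures} failures")
print(f"point MTBF = {point_mtbf:.0f} h, 80% lower bound = {lower_bound:.0f} h")
```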

19.3.5 Use of Logistics Support System Test Results

The emphasis on the use of the results of testing changes as the program moves from the concept assessments to post-deployment. During early phases of a program, the evaluation results are used primarily to verify analysis and develop future projections. As the program moves into advanced engineering development and hardware becomes available, the evaluation addresses design, particularly the R&M aspects; training programs; support equipment adequacy; personnel skills and availability; and technical publications.

The logistics support system manager must make the PM aware of the impact on the program of logistical shortcomings that are identified during the T&E process. The PM, in turn, must ensure that the solutions to any shortcomings are identified and reflected in the revised specifications and that the revised test requirements are included in the updated TEMP as the program proceeds through the various acquisition stages.


19.4 LIMITATIONS TO LOGISTICS SUPPORT SYSTEM T&E

Concurrent testing or tests that have accelerated schedules frequently do not have sufficient test articles, equipment, or hardware to achieve statistical confidence in the testing conducted. An acquisition strategy may stipulate that support resources such as operator and maintenance manuals, tools, support equipment, training devices, etc., for major weapon system components would not be procured before the weapon system/component hardware and software design stabilizes. This shortage of equipment is often the reason that shelf-life and service-life testing is incomplete, leaving the logistics support system evaluator with insufficient data to predict future performance of the test item. Some evaluations must measure performance against a point on the parameter's growth curve. The logistics support system testing will continue post-production to obtain required sample sizes for verifying performance criteria. Many aspects of the logistics support system may not be available for Initial Operational Test and Evaluation (IOT&E) and become testing limitations. The PMO must develop enough logistics support to ensure the user can maintain the system during IOT&E without requiring system contractor involvement (legislated constraints). Any logistics support system limitations upon IOT&E will likely be evaluated during Follow-on Operational Test and Evaluation (FOT&E).

19.5 SUMMARY

T&E are the logisticians' tools for measuring the ability of the planned support system to fulfill the materiel system's readiness and supportability objectives. The effectiveness of logistics support system T&E is based upon the completeness and timeliness of the planning effort.

The logistics support system T&E requirements must be an integral part of the TEMP to ensure budgeting and scheduling of required test resources. Data requirements must be completely identified, with adequate plans made for collection, storage, retrieval, and reduction of test data. At the Full Rate Production Decision Review (FRPDR), decision makers can expect that some logistics system performance parameters will not have finished testing because of the large sample sizes required for high confidence levels in statistical analysis.


ENDNOTES

1. DoD Directive (DoDD) 5000.1, The Defense Acquisition System, May 12, 2003, p. 11.

2. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003, p. 43.

3. MIL-STD-1388-1A, Logistics Support Analysis. Cancelled.

4. Defense Systems Management College, Acquisition Logistics Guide, December 1997.

5. MIL-STD-1388-1A.

6. Defense Acquisition University, Glossary of Defense Acquisition Acronyms & Terms, 11th edition, September 2003.

7. DoD 3235.1-H, Department of Defense T&E of System Reliability, Availability and Maintainability, March 1982.

8. MIL-STD-781, Reliability Testing for Engineering Development, Qualification, and Production. Cancelled.

9. DoD 3235.1-H.

10. Ibid.

11. Guidelines in MIL-HDBK-470, Designing and Developing Maintainable Products and Systems, 1998.


20
EC/C4ISR TEST AND EVALUATION

20.1 INTRODUCTION

Testing of Electronic Combat (EC) and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) systems poses unique problems for testers because of the difficulty in measuring their operational performance. Compatibility, Interoperability, and Integration (CII) are key performance areas for these systems. Special testing techniques and facilities are normally required in EC and C4ISR testing. This chapter discusses the problems associated with EC and C4ISR testing and presents methodologies the tester can consider using to overcome the problems.

20.2 TESTING EC SYSTEMS

20.2.1 Special Considerations When Testing EC Systems

Electronic combat systems operate across the electromagnetic spectrum, performing offensive and defensive support roles. Configurations vary from subsystem components to full-up independent systems. The EC systems are used to increase survivability, degrade enemy capability, and contribute to the overall success of the combat mission. Decision makers want to know the incremental contribution to total force effectiveness made by a new EC system when measured in a force-on-force engagement. However, the contractual specifications for EC systems are usually stated in terms of engineering parameters such as effective radiated power, reduction in communications intelligibility, and jamming-to-signal ratio. These measures are of little use by themselves in assessing contribution to mission success. The decision makers require that testing be conducted under realistic operational conditions; but the major field test ranges, such as the shoreline at Eglin Air Force Base (AFB), Florida, or the desert at Nellis AFB, Nevada, cannot provide the signal density or realism of threats that would be presented by regional conflicts in central Europe. In field testing, the tester can achieve one-on-one or, at best, few-on-few testing conditions. To go beyond these limited conditions, the tester needs a methodology that will permit extrapolation of engineering measurements and one-on-one test events to create more operationally meaningful measures of mission success in a force-on-force context, usually under simulated conditions.

20.2.2 Integrated Test Approach

An integrated approach to EC testing using a combination of large-scale models, computer simulations, hybrid man-in-the-loop simulators, and field test ranges is a solution for the EC tester. No tool by itself is adequate to provide a comprehensive evaluation. Simulation, both digital and hybrid, can provide a means for efficient test execution. Computer models can be used to simulate many different test cases to aid the tester in assessing the critical test issues (i.e., sensitivity analysis) and produce a comprehensive set of predicted results. As digital simulation models are validated with empirical data from testing, they can be used to evaluate the system under test in a more dense and complex threat environment and at expected wartime levels. In addition, the field test results are used to validate the model, and the model is used to validate the field tests, thus lending more credibility to both results. Hybrid man-in-the-loop simulators, such as the Real-Time Electromagnetic Digitally Controlled Analyzer and Processor (REDCAP) and the Air Force Electronic Warfare Evaluation Simulator (AFEWES), can provide a capability to test against new threats. Hybrid simulators cost less and are safer than field testing. The field test ranges are used when a wider range of actions and reactions by aircraft and ground threat system operations is required.
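A minimal sketch of the feedback/validation loop described above, using invented MOE names, values, and tolerance: model predictions are compared with field-test results, and any MOE whose disagreement exceeds the agreed tolerance is flagged for re-examination of the model or the test.

```python
# Hypothetical model-versus-field comparison supporting the feedback/validation
# loop: flag any measure of effectiveness (MOE) where the model prediction and
# the field-test result disagree by more than an agreed relative tolerance.

def validation_report(predicted, measured, tolerance=0.15):
    report = {}
    for moe, pred in predicted.items():
        meas = measured[moe]
        rel_error = abs(pred - meas) / abs(meas)
        report[moe] = {"predicted": pred, "measured": meas,
                       "agrees": rel_error <= tolerance}
    return report

if __name__ == "__main__":
    predicted = {"track-break rate": 0.60, "reduction in shots fired": 0.35}
    measured  = {"track-break rate": 0.52, "reduction in shots fired": 0.33}
    for moe, result in validation_report(predicted, measured).items():
        print(moe, result)
```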

Where one tool is weak, another may be strong. By using all the tools, an EC tester can do a complete job of testing. An example of an integrated methodology is shown in Figure 20-1. The EC integrated testing can be summarized as:

(1) Initial modeling phase for sensitivity analysis and test planning,

(2) Active test phases at hybrid laboratory simulator and field range facilities,

(3) Test data reduction and analysis,

(4) Post-test modeling phase repeating the first step using test data for extrapolation,

(5) Force effectiveness modeling and analysis phase to determine the incremental contribution of the new system to total force effectiveness.

Figure 20-1. Integrated EC Testing Approach

[Flow diagram: test design (system description, assumptions, critical test issues, expected results) feeds a many-on-many model; predicted results guide test conduct in the hybrid laboratory simulator and on the field test range; test data undergo data reduction and analysis; test results feed back for validation and are extrapolated with the same many-on-many model to estimate system effectiveness, which a force-on-force model then uses to estimate force effectiveness.]



Another alternative is the electronic combat test process.1 The six-step process described here is graphically represented by Figure 20-2:

(1) Deriving test requirements,

(2) Conducting pre-test analysis to predict EC system performance,

(3) Conducting test sequences under progressively more rigorous ground- and flight-test conditions,

(4) Processing test data,

(5) Conducting post-test analysis and evaluation of operational effectiveness and suitability,

(6) Feeding results back to the system development/employment process.

Figure 20-2. EC Test Process Concept

[Flow diagram: system development requirements, design, specification, production, and operations generate Technical Performance Parameters (TPPs) and Measures of Effectiveness (MOEs); pre-test analysis structures the test and predicts results; testing proceeds through computer-simulation-aided analysis, measurement facilities, integration laboratories, hardware-in-the-loop test facilities, installed test facilities, and flight test ranges; measured data are processed and analyzed; post-test analysis correlates data, updates models and test design, and extrapolates TPPs/MOEs for the evaluation, with results fed back to system development. Source: USAF Electronic Combat Development Test Process Guide.]

As can be seen from Figure 20-3, assuming a limited budget and field test being the most expensive per number of trials, the cost of test trials forces the developer and tester to make tradeoffs to obtain the necessary test data. Many more iterations of a computer simulation can be run for the cost of an open-air test.

Figure 20-3. EC Test Resource Categories

[Chart of number of trials versus time by type of simulator used: digital computer simulations and integration laboratories (digital/hybrid) support the most trials, followed by hybrid simulators, hardware-in-the-loop test facilities, installed test facilities, and free-space simulators, with open-air measurement facilities supporting the fewest. Source: USAF Electronic Combat Development Test Process Guide.]
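To make the Figure 20-3 tradeoff concrete, the sketch below spreads a fixed test budget across resource categories using per-trial costs and budget shares that are entirely assumed; the guide itself provides no cost figures.

```python
# Illustrative budget arithmetic for the Figure 20-3 tradeoff: with assumed
# per-trial costs, a fixed budget buys far more simulation runs than open-air
# field test trials. The budget, costs, and shares are invented for the example.

budget = 3_000_000.0  # assumed total T&E budget, dollars
cost_per_trial = {
    "digital simulation":       500.0,
    "hybrid/HITL facility":  25_000.0,
    "open-air field test":  250_000.0,
}
share = {"digital simulation": 0.2, "hybrid/HITL facility": 0.3, "open-air field test": 0.5}

for category, unit_cost in cost_per_trial.items():
    trials = int(budget * share[category] // unit_cost)
    print(f"{category}: {trials} trials")
```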

20.3 TESTING OF C4ISR SYSTEMS

20.3.1 Special Considerations When Testing C4ISR Systems

The purpose of a C4ISR system is to provide a commander with timely and relevant information to support sound warfighting decisionmaking. A variety of problems faces the C4ISR system tester. In evaluating command effectiveness, for example, it is difficult to separate the contribution made by the C4ISR system from the contribution made by the commander's innate cognitive processes. To assess a C4ISR system in its operational environment, it must be connected to the other systems with which it would normally operate, making traceability of test results difficult. Additionally, modern C4ISR systems are software-intensive and highly interactive, with complex man-machine interfaces. Measuring C4ISR system effectiveness thus requires the tester to use properly trained user troops during the test and to closely monitor software Test and Evaluation (T&E). The C4ISR systems of defense agencies and the Services (Army, Navy, Air Force, and Marine Corps) are expected to interoperate with each other and with those of the North Atlantic Treaty Organization (NATO) forces; hence, the tester must also ensure inter-Service and NATO CII.

20.3.2 C4I Test Facilities

Testing of Command, Control, Communications, Computers and Intelligence (C4I) systems will have to rely more on the use of computer simulations and C4I test-beds to assess their overall effectiveness. The Defense Information Systems Agency (DISA), which is responsible for ensuring interoperability among all U.S. tactical C4I systems that would be used in joint or combined operations, directs the Joint Interoperability Test Command (JITC) at Ft. Huachuca, Arizona. The JITC is a test-bed for C4I systems CII.



20.4 TRENDS IN TESTING C4I SYSTEMS

20.4.1 Evolutionary Acquisition of C4I Systems

Evolutionary Acquisition (EA) is a strategy designed to provide an early, useful capability even though detailed overall system requirements cannot be fully defined at the program's inception. The EA strategy contributes to a reduction in the risks involved in system acquisition, since the system is developed and tested in manageable increments. The C4I systems are likely candidates for EA because they are characterized by system requirements that are difficult to quantify or even articulate and that are expected to change as a function of scenario, mission, theater, threat, and emerging technology. Therefore, the risk associated with developing these systems can be very great.

Studies by the Defense Systems Management College2 and the International Test and Evaluation Association (ITEA) have addressed the issues involved in the EA and testing of Command, Control, and Communications (C3) systems. The ITEA study illustrated EA in Figure 20-4 and stated that:

With regard to the tester's role in EA, the study group concluded that iterative test and evaluation is essential for success in an evolutionary acquisition. The tester must become involved early in the acquisition process and contribute throughout the development and fielding of the core and the subsequent increments. The testers contribute to the requirements process through feedback of test results to the user [and] must judge the ability of the system to evolve.3

Figure 20-4. The Evolutionary Acquisition Process

[Diagram: a capability need statement leads to system definition and to architecture and core milestones (MS A, MS B); the R&D core and subsequent Increments I and II each undergo DT/OT and field test, with requirements and architecture feedback. Source: ITEA Study, "Test & Evaluation of C2 Systems Developed by Evolutionary Acquisition."]

The testing of EA systems presents the tester with a unique challenge, as the core system must be tested during fielding and the first increment before the core testing is completed. This could lead to a situation where the tester has three or four tests ongoing on various increments of the same system. The PM must insist that the testing for EA systems be carefully planned to ensure the test data are shared by all and there is a minimum of repetition or duplication in testing.

20.4.2 Radio Vulnerability

The Radio Vulnerability Analysis (RVAN) meth-odology is for assessing the anti-jam capabilityand limitations of Radio Frequency (RF) datalinks when operating in a hostile Electronic Coun-termeasures (ECM) environment. The RVANevolved from the test methodologies developedfor an Office of the Secretary of Defense (OSD)-chartered Joint Test on Data Link VulnerabilityAnalysis (DVAL). In 1983, OSD directed theServices to apply the DVAL methodology to allnew data links being developed.

The purpose of the DVAL methodology is toidentify and quantify the anti-jam capabilities andvulnerabilities of an RF data link operating in ahostile ECM environment. The methodology isapplied throughout the acquisition process andpermits early identification of needed designmodifications to reduce identified ECM vul-nerabilities. The following four components de-termine a data link’s Electronic Warfare (EW)vulnerability:

(1) The susceptibility of the data link; i.e., the receiver's performance when subjected to intentional threat ECM;

(2) The interceptibility of the data link; i.e., the degree to which the transmitter could be intercepted by enemy intercept equipment;

(3) The accessibility of the data link; i.e., the likelihood that a threat jammer could degrade the data link's performance;

(4) The feasibility that the enemy would intercept and jam the data link and successfully degrade its performance.
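These four components lend themselves to a structured checklist when organizing DVAL-related test data. The following sketch is illustrative only: the component names come from the list above, but the 0-to-1 rating scale and the simple roll-up rule are assumptions made for the example and are not part of the DVAL methodology.

from dataclasses import dataclass

# Illustrative only: qualitative 0.0-1.0 ratings are an assumption, not part of DVAL.
@dataclass
class DataLinkVulnerability:
    susceptibility: float    # receiver performance degradation under intentional threat ECM
    interceptibility: float  # degree to which the transmitter could be intercepted
    accessibility: float     # likelihood that a threat jammer could degrade link performance
    feasibility: float       # likelihood the enemy would intercept, jam, and degrade the link

    def summary(self) -> str:
        """List each component rating and flag the area of greatest EW concern."""
        components = {
            "susceptibility": self.susceptibility,
            "interceptibility": self.interceptibility,
            "accessibility": self.accessibility,
            "feasibility": self.feasibility,
        }
        worst = max(components, key=components.get)  # highest rating = greatest concern
        lines = [f"{name}: {value:.2f}" for name, value in components.items()]
        lines.append(f"greatest EW vulnerability concern: {worst}")
        return "\n".join(lines)

# Hypothetical data link rated from test results.
link = DataLinkVulnerability(susceptibility=0.6, interceptibility=0.4,
                             accessibility=0.7, feasibility=0.5)
print(link.summary())

In practice, each rating would be derived from the test data the TM provides to the analyst, and the roll-up would follow the DVAL methodology rather than a single worst-case flag.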


Figure 20-4. The Evolutionary Acquisition Process. [The figure traces a capability need statement through requirements, architecture, and system definition; architecture and core milestones (MS A, MS B); R&D of the core; and Increments I and II, each undergoing DT/OT and field test. *Brassboard, rapid prototyping, system simulation, etc. SOURCE: ITEA Study, "Test & Evaluation of C2 Systems Developed by Evolutionary Acquisition."]

The analyst applying the DVAL methodology will require test data, and the Test Manager (TM) of the C3I system, of which the data link is a component, will be required to provide this data. The DVAL joint test methodologies and test results are on file as part of the Joint Test Library maintained by the Air Force Operational Test and Evaluation Center (AFOTEC), Kirtland AFB, New Mexico.

20.5 T&E OF SURVEILLANCE AND RECONNAISSANCE SYSTEMS

Intelligence, Surveillance, and Reconnaissance (ISR) capabilities provide the requisite battlespace awareness tools for U.S. Forces to take and hold the initiative, increase operating tempo, and concentrate power at the time and place of their choosing. These vital capabilities are achieved through highly classified sensor systems ranging from satellites, aircraft, maritime vessels, electronic intercept, and the soldier in the field to the systems required to analyze that data, synthesize it into usable information, and disseminate that information in a timely fashion to the warfighter. As a general rule, ISR systems are considered to be Joint Systems.

Because of the multifaceted nature of ISR programs and the classified nature of how data are gathered in the operational element, test planning to verify effectiveness and suitability can be complex. Adding to that inherent complexity is the variable nature of organizational guidance directives upon the tester. While the broad management principles enunciated by Department of Defense Directive (DoDD) 5000.1, The Defense Acquisition System, May 12, 2003, will apply to highly sensitive classified systems and cryptographic and intelligence programs, the detailed guidance contained in the Defense Acquisition Guidebook applies to Major Defense Acquisition Programs (MDAPs) and Major Automated Information Systems (MAISs). Many ISR programs fall below this threshold, and the wise TM should anticipate that several agencies will have taken advantage of this opening to tailor organizational guidance.

Key issues to consider for the T&E of ISR systems include: verification of compliance with the CII requirements contained in DoDD 4630.5, Interoperability and Supportability of Information Technology (IT) and National Security Systems (NSS), Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 3170.01D, Joint Capabilities Integration and Development System, and CJCSI 6212.01C, Interoperability and Supportability of Information Technology and National Security Systems,4 as certified by the JITC; completion of the system security certification required prior to processing classified or sensitive material; verification and documentation of required support from interfaced C4ISR systems in the Information Support Plan (ISP) to ensure the availability and quality of required input data; characterization of the maturity of mission-critical software; finalization of the range of human factors analysis; and consideration of information operations vulnerabilities/capabilities. In addition to this partial listing, many of these systems will operate inside a matrix of ISR system architectures, which must be carefully considered for test planning purposes. As a final issue, Advanced Concept Technology Demonstration (ACTD) programs are being used to quickly deliver capability to the user because of the critical and time-sensitive nature of many ISR requirements. The TM must carefully consider structuring the T&E effort in light of the inherent nature of ACTD programs.

20.6 SUMMARY

The EC systems must be tested under conditions representative of the dense threat signal environments in which they will operate. The C4ISR systems must be tested in representative environments where their interaction and responsiveness can be demonstrated. The solution for the tester is an integrated approach using a combination of analytical models, computer simulations, hybrid laboratory simulators and test beds, and actual field testing. The tester must understand these test techniques and resources and apply them in EC and C4ISR T&E.


ENDNOTES

1. Proposed in the Air Force Electronic Combat Development Test Process Guide, May 1991, issued by what is now the Air Staff T&E Element, AF/TE.

2. Defense Systems Management College, Joint Logistics Commanders Guidance for the Use of an Evolutionary Acquisition (EA) Strategy in Acquiring Command and Control (C2) Systems, March 1987.

3. International Test and Evaluation Association (ITEA), Test and Evaluation of Command and Control Systems Developed by Evolutionary Acquisition, May 1987.

4. DoDD 4630.5, Interoperability and Supportability of Information Technology (IT) and National Security Systems (NSS), May 5, 2004; CJCSI 3170.01D, Joint Capabilities Integration and Development System, May 12, 2004; and CJCSI 6212.01C, Interoperability and Supportability of Information Technology and National Security Systems, November 20, 2003.


21
MULTI-SERVICE TESTS

21.1 INTRODUCTION

This chapter discusses the planning and management of a multi-Service test program. A multi-Service test program is conducted when a system is to be acquired for use by more than one Service or when a system must interface with equipment of another Service (Joint program). A multi-Service test program should not be confused with the Office of the Secretary of Defense (OSD)-sponsored, non-acquisition-oriented Joint Test and Evaluation (JT&E) program. A brief description of the JT&E program is provided in Chapter 6. The Defense Acquisition Guidebook provides guidance on management of joint acquisition programs.1

21.2 BACKGROUND

The Defense Acquisition Guidebook states: The Component Acquisition Executive (CAE) of a designated acquisition agent given acquisition responsibilities shall utilize the acquisition and test organizations and facilities of the Military Departments to the maximum extent practicable; a designated joint program shall have…one integrated test program and one set of documentation (TEMP [Test and Evaluation Master Plan]) and reports; the MDA [Milestone Decision Authority] shall designate a lead OTA [Operational Test Agency] to coordinate all OT&E [Operational Test and Evaluation], and the lead OTA shall produce a single operational effectiveness and suitability report for the program; unless statute, the MDA, or a MOA [Memorandum of Agreement] signed by all DoD [Department of Defense] components directs otherwise, the lead executive component shall budget for and manage the common RDT&E [Research, Development, Test and Evaluation] funds for assigned joint programs; and individual components shall budget for their unique requirements.

Formulation of a multi-Service Test and Evaluation (T&E) program designates the participants in the program and gives a Lead Service responsibility for preparing a single report concerning a system's operational effectiveness and suitability. (The Lead Service is the Service responsible for the overall management of a joint acquisition program. A "Supporting Service" is a Service designated as a participant in the system acquisition.)

A multi-Service T&E program may include either Development Test and Evaluation (DT&E) or Operational Test and Evaluation (OT&E) or both. The Services' OTAs have executed a formal MOA on multi-Service OT&E that provides a framework for the conduct of a multi-Service Operational Test (OT) program.2 The Defense Information Systems Agency (DISA) has signed the MOA in relation to joint interoperability test and certification for multi-Service T&E. The MOA includes a glossary of common and Service-peculiar terms used in multi-Service T&E.3

Generally the process includes:

(1) In a multi-Service acquisition program, T&E is planned and conducted according to Lead Service regulations. The designated Lead Service will have the overall responsibility for management of the multi-Service program and will ensure that supporting Service requirements are included. If another Service has certain unique T&E requirements, testing for these unique requirements may be planned, funded, and conducted according to that Service's regulations.

(2) Participating Services will prepare reports in accordance with their respective regulations. The Lead Service will prepare and coordinate a single DT&E report and a single OT&E report, which will summarize the conclusions and recommendations of each Service's reports. Rationale will be provided to explain any significant differences. The individual Service reports may be attached to this single report.

(3) Deviations from the Lead Service T&E regulations may be accommodated by mutual agreement among the Services involved.

21.3 TEST PROGRAM RESPONSIBILITIES

The Lead Service has overall management responsibility for the program. It must ensure that supporting Service requirements are included in the formulation of the basic resource and planning documents.

A multi-Service T&E Integrated Product Team (IPT) should be established for each multi-Service test program. Its membership consists of one senior representative from each participating Service or agency headquarters. The T&E IPT works closely with the Program Management Office (PMO) and is responsible for arbitrating all disagreements among Services that cannot be resolved at the working level.

Figure 21-1. Simple Multi-Service OT&E Test Team Composition. [A Test Management Council and the Lead OT&E Agency oversee an OT&E Test Director (Lead Service), supported by an Assistant for Plans and Test Operations (Lead Service). OT&E deputy test directors from each participating agency (AFOTEC for the Air Force, ATEC for the Army, OPTEVFOR for the Navy, MCOTEA for the Marine Corps, and others used for complex programs with many participants) direct their respective Service test team structures. SOURCE: Memorandum of Agreement on Multi-Service OT&E and Joint T&E, August 2003.]


Resource requirements are documented in the TEMP. Each participating Service is directed to budget for the testing necessary to accomplish its assigned test objectives and for the participation of its personnel and equipment in the entire test program. Separate annexes may be used to address each Service's test requirements. Funding will be in accordance with Public Law and Department of Defense (DoD) 7000.14-R.4

21.4 TEST TEAM STRUCTURE

A sample test team structure is shown in Figure 21-1. As shown in the figure, Service test teams work through a Service deputy test director or senior representative. The test director exercises test management authority, but not operational control, over the test teams; the test director's responsibilities include integration of test requirements and efficient scheduling of test events. The deputy test directors exercise operational control or test management authority over their Service test teams in accordance with their Service directives. Additionally, they act as advisers to the test director; represent their Service's interests; and are responsible, at least administratively, for resources and personnel provided by their Services.

21.5 TEST PLANNING

Test planning for multi-Service T&E is accomplished in the manner prescribed by Lead Service directions and in accordance with the following general procedures extracted from the "Memorandum of Agreement on Multi-Service OT&E [MOT&E] and Joint T&E"5 (a simplified sketch of this workflow follows the list):

(1) The Lead Service T&E agency begins the planning process by issuing a call to the supporting Service T&E agencies for critical issues and test objectives.

(2) The Lead Service T&E agency consolidates the objectives into a list and coordinates the list with the supporting Service T&E agencies.

(3) The Lead Service T&E agency accommodates supporting Service T&E requirements and input in the formal coordination action of the TEMP.

(4) Participating T&E agency project officers assign responsibility for the accomplishment of test objectives (from the consolidated list) to each T&E agency. These assignments are made in a mutually agreeable manner. Each agency is then responsible for resource identification and accomplishment of its assigned test objectives under the direction of the Lead Service T&E agency.

(5) Each participating agency prepares the portion of the overall test plan(s) for its assigned objectives, in the Lead Service test plan(s) format, and identifies its data needs.

(6) The Lead Service T&E agency prepares the multi-Service T&E test plan(s), consolidating the input from all participating agencies.
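Taken together, the six steps form a collect-consolidate-assign-integrate workflow. The following sketch is a simplified illustration of that flow; the function names, data structures, and round-robin assignment are assumptions made for the example and are not prescribed by the MOA.

# Illustrative sketch of the multi-Service test planning workflow described above.
# All names and structures are hypothetical; the MOA prescribes no data format.

def consolidate_objectives(responses: dict) -> list:
    """Steps 1-2: collect critical issues/objectives from each supporting Service
    and merge them into a single coordinated list (duplicates removed)."""
    consolidated = []
    for service, objectives in responses.items():
        for objective in objectives:
            if objective not in consolidated:
                consolidated.append(objective)
    return consolidated

def assign_objectives(consolidated: list, agencies: list) -> dict:
    """Step 4: assign each consolidated objective to a participating T&E agency.
    (Round-robin here for illustration; actual assignment is by mutual agreement.)"""
    assignments = {agency: [] for agency in agencies}
    for index, objective in enumerate(consolidated):
        assignments[agencies[index % len(agencies)]].append(objective)
    return assignments

def build_test_plan(assignments: dict) -> dict:
    """Steps 5-6: each agency drafts its portion in the Lead Service format;
    the Lead Service T&E agency consolidates them into the multi-Service plan."""
    return {"sections": dict(assignments)}

# Example: a call for objectives answered by two supporting Services.
responses = {"Army": ["interoperability with the fire-support net"],
             "Navy": ["shipboard integration", "interoperability with the fire-support net"]}
plan = build_test_plan(assign_objectives(consolidate_objectives(responses), ["ATEC", "OPTEVFOR"]))
print(plan)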

21.6 DEFICIENCY REPORTING

In a multi-Service T&E program, a deficiency report is a report of any condition that reflects adversely on the item being tested and that must be reported outside the test team for corrective action. The deficiency reporting system of the Lead Service is normally used. All members of the multi-Service test team will report deficiencies through their Service's system.

Items undergoing test will not necessarily be used by each of the Services for identical purposes. As a result, a deficiency considered disqualifying by one Service is not necessarily disqualifying for all Services. Deficiency reports of a disqualifying nature must include a statement by the concerned Service of why the deficiency has been so classified. The report also includes statements by the other Services as to whether or not the deficiency affects them significantly.


If one of the participating Services identifies a deficiency that it considers warrants termination of the test, the circumstances are reported immediately to the test director.
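Because a deficiency that is disqualifying for one Service may not be disqualifying for another, a multi-Service deficiency record has to carry each Service's position. The following sketch is a notional record layout, not a format prescribed by any Service's deficiency reporting system; the example deficiency is invented.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MultiServiceDeficiencyReport:
    """Notional multi-Service deficiency record (illustrative layout only)."""
    description: str
    reporting_service: str
    disqualifying_for: Dict[str, str] = field(default_factory=dict)   # Service -> rationale statement
    impact_statements: Dict[str, str] = field(default_factory=dict)   # other Services -> significance

    def is_disqualifying_for(self, service: str) -> bool:
        """A deficiency disqualifying for one Service is not necessarily disqualifying for all."""
        return service in self.disqualifying_for

# Invented example: disqualifying for the Army but not for the Navy.
report = MultiServiceDeficiencyReport(
    description="Antenna mast fails to lock at highway speeds",
    reporting_service="Army",
    disqualifying_for={"Army": "Prevents on-the-move communications, a core mission task"},
    impact_statements={"Navy": "Not significant; the shipboard installation does not use the mast"},
)
print(report.is_disqualifying_for("Army"), report.is_disqualifying_for("Navy"))  # True False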

21.7 TEST REPORTING

The following test-reporting policy applies to multi-Service IOT&E (a sketch of the timing rule in item (7) follows the list):

(1) The Lead OTA will prepare and coordinate the report reflecting the system's operational effectiveness and suitability for each Service. Interim reports will not normally be prepared.

(2) The report will synthesize the differing operational requirements and operational environments of the involved Services.

(3) It will state findings, put those findings into perspective, and present the rationale why there is or is not consensus on the utility of the system.

(4) All participating OTAs will sign the report.

(5) Each participating OTA may prepare an Independent Evaluation Report (IER) or final test report, as required, in its own format and process that report through normal Service channels.

(6) The Lead OTA will ensure that all separate participating Service IERs/test reports are appended to the overall final report prepared by the Lead OTA for submission to the MDA.

(7) A single integrated multi-Service report will be submitted 90 calendar days after the official end of test is declared, but not later than 45 days prior to a milestone decision.
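The timing rule in item (7) is easy to check with simple date arithmetic. The dates below are hypothetical and serve only to illustrate the two constraints (a report 90 calendar days after the end of test, and no later than 45 days before the milestone decision); they are not taken from any program.

from datetime import date, timedelta

# Hypothetical dates, for illustration only.
end_of_test = date(2005, 3, 1)           # official end of test declared
milestone_decision = date(2005, 8, 1)    # planned milestone decision date

due_by_end_of_test_rule = end_of_test + timedelta(days=90)       # 90 calendar days after end of test
due_by_milestone_rule = milestone_decision - timedelta(days=45)  # 45 days before the decision

# The report must satisfy both constraints, so the earlier date governs.
report_deadline = min(due_by_end_of_test_rule, due_by_milestone_rule)
print(f"Single integrated report due no later than {report_deadline}")  # 2005-05-30 for these dates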

21.8 SUMMARY

Multi-Service test programs are conducted by two or more Services when a system is to be acquired for employment by more than one Service or when a system must interface with equipment of another Service. Test procedures for multi-Service T&E follow those of the designated Lead Service, with mutual agreements resolving areas where deviations are necessary. The MOA on OT&E procedures provides guidance to the OTA. Care must be exercised when integrating test results and reporting discrepancies since items undergoing testing may be used for different purposes in different Services. Close coordination is required to ensure that an accurate summary of the developing system's capabilities is provided to Service and DoD decision authorities.


ENDNOTES

1. Defense Acquisition Guidebook, Chapter 7, "Acquiring Information Technology and National Security Systems," http://akss.dau.mil/DAG.

2. Memorandum of Agreement on Multi-Service OT&E, with changes, August 2003.

3. AFOTEC 99-103, Capabilities Test and Evaluation, October 22, 2001, and Department of the Army Pamphlet (DA PAM) 73-1, Test and Evaluation Policy, January 7, 2002, provide guidance for procedures followed in a multi-Service T&E program.

4. DoD Financial Management Regulation (FMR) 7000.14-R, Vol. 02B, Budget Formulation and Presentation, Chapter 5, "Research, Development, Test, and Evaluation Appropriations," June 2004.

5. Memorandum of Agreement on Multi-Service OT&E and Joint T&E, August 2003.


22
INTERNATIONAL TEST AND EVALUATION PROGRAMS

22.1 INTRODUCTION

This chapter discusses Test and Evaluation (T&E) from an international perspective. It describes the Office of the Secretary of Defense (OSD)-sponsored Foreign Comparative Test (FCT) Program1 and the International Test Operations Procedures (ITOPs). Factors that bear on the T&E of multinational acquisition programs are discussed also.

22.2 FOREIGN COMPARATIVE TEST PROGRAM

22.2.1 Program Objective

The FCT Program is designed to support the evaluation of a foreign nation's weapons system, equipment, or technology in terms of its potential to meet a valid requirement of one or more of the U.S. Armed Forces. Additional goals of the FCT program include avoiding unnecessary duplication in development, enhancing standardization and interoperability, and promoting international technology exchanges. The FCT program is not intended for use in exploiting threat systems or for intelligence gathering. The primary objective of the program is to support the U.S. national policy of encouraging international armaments cooperation and to reduce the costs of Research and Development (R&D). Policy and procedures for the execution of the program were originally documented in DoD 5000.3-M-2, the Foreign Comparative Testing Program Procedures Manual, but now can be found in the Foreign Comparative Test Handbook and DoD Instruction (DoDI) 5000.2.2

22.2.2 Program Administration

Foreign Weapons Evaluation (FWE) activities and responsibilities are assigned to the Director, Comparative Test Office (CTO), Director of Defense Research and Engineering (DDR&E), OSD. Each year, sponsoring military Services forward Candidate Nomination Proposals (CNPs) for systems to be evaluated under the FCT program to the Director, CTO. The Services are encouraged to prepare and submit a CNP whenever a promising candidate that appears to satisfy a current or potential Service requirement is found. A CNP must contain the information as required by the FCT Handbook.3

The fundamental criterion for FCT program selection is the candidate system's potential to satisfy operational or training requirements that exist or are projected. Its possible contribution to the U.S. technology base is considered also. Additional factors influencing candidate selection include: candidate maturity, available test data, multi-Service interest, existence of a statement of operational requirement need, potential for subsequent procurement, sponsorship by a U.S.-based licensee, realistic evaluation schedule and cost, a DoD component-OSD evaluation cost-sharing proposal, and preprogrammed procurement funds. For technology evaluation programs within the FCT program, the candidate nomination proposal must address the specific arrangements under which the United States and foreign participants (governments, armed forces, corporations) will operate. These may include government-to-government Memoranda of Agreement (MOA), private industry licensing agreements, data exchange agreements, and/or cooperative technology exchange programs.


FWE activities are funded by OSD and executed by the Service with the potential need for the system. Points of contact at the headquarters level in each Service monitor the conduct of the programs. Work is performed in laboratories and test centers throughout the country. Systems evaluated recently under the FCT program include millimeter wave communications equipment, chemical defense equipment, gunnery devices, maritime decoys, and navigational systems. The Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) shall notify Congress a minimum of 30 days prior to the commitment of funds for initiation of new FCT evaluations.4

22.3 NATO COMPARATIVE TEST PROGRAM

The North Atlantic Treaty Organization (NATO) Comparative Test Program has been integrated with the FCT program. It was created by an act of the Congress in the Fiscal Year (FY) 86 Defense Authorization Act. The program supported the evaluation of NATO nations' weapons systems, equipment, and technology, and assessed their suitability for use by U.S. forces. The selection criteria for the NATO Comparative Test Program were essentially the same as for the FCT program. The exception was that the equipment must be produced by a NATO member nation and be considered either as an alternative to a system in a late stage of development in the United States or as offering a cost, schedule, or performance advantage over U.S. equipment. In addition, the NATO Comparative Test Program required that notification be sent to the Armed Services and Appropriations Committees of the House of Representatives and Senate before funds were obligated. With this exception, the NATO Comparative Test Program followed the same nomination process and administrative procedures. Guidelines for the program were also contained in the FCT Handbook.

Examples of proposals funded under the NATO Comparative Test Program included T&E of a German mine reconnaissance and detection system for the Army, a United Kingdom-designed mine sweeper for the Navy, and the Norwegian Penguin missile system for the Air Force. According to the FY88 Report of the Secretary of Defense (SECDEF) to the Congress, the program generated considerable interest among NATO allied nations and became a primary way of promoting armaments cooperation within NATO.

Problems associated with testing foreign weapons normally stem from politics, national pride, and a lack of previous test data. When foreign companies introduce weapon systems for testing, they often will attempt to align the U.S. military/congressional organizations with their systems. For example, when a foreign nation introduced an antitank weapon to the Army, it did so by having a U.S. Senator write the Army stating a need for the system. Attached to the letter was a document containing doctrine to employ the system and a test concept to use when evaluating the system. Systems tested in the NATO Comparative Test Program often became entangled in national pride, and the test community must be careful not to allow national pride to be a driving force in the evaluation. At times, the 9mm pistol competition in NATO resembled an international soccer match, with each competing nation cheering for its pistol and many other nations selecting sides. Evaluating the 9mm pistol was difficult because of these forces. Thus, U.S. testers must make every effort to obtain all available test data on foreign systems. These data can be used to help validate the evolving test data and any additional test data gathered during the evaluation.

22.4 T&E MANAGEMENT IN MULTINATIONAL PROGRAMS

22.4.1 Compatibility With Allies

Rationalization, Standardization, and Interoperability (RSI) have become increasingly important elements in the materiel acquisition process. Public Law 94-361 requires that "equipment for use of personnel of the Armed Forces of the United States stationed in Europe under the terms of the North Atlantic Treaty should be standardized or at least interoperable with equipment of other members of the North Atlantic Treaty Organization."5 Program Managers (PMs) and Test Managers (TMs) must, therefore, be fully aware of any potential international applications of the systems for which they are responsible. Two guidebooks published by the Defense Acquisition University (DAU) are valuable compendiums of information for the PM of a developing system with potential multinational applications.6

Additionally, on file at the DAU David D. Acker Library is the research report "Operational Test and Evaluation in the IDEA [International Defense Education Association] Nations" (1999), comparing the Operational Test and Evaluation (OT&E) processes of the United States with those of Great Britain, Germany, and France. This report concluded that there were significant differences in what was considered OT&E by the four countries. The Defense Acquisition Guidebook states that Foreign Military Sales (FMS) of U.S. major systems (ACAT [Acquisition Category] I, II) prior to IOT&E must be approved by the USD(AT&L).7

Representatives of the United States, United Kingdom, France, and Germany have signed a MOA concerning the mutual acceptability of each country's T&E data. This agreement seeks to avoid redundant testing by documenting the extent of understanding among involved governments concerning mutual acceptability of respective T&E procedures for systems that are developed in one country and are candidates for procurement by one or more of the other countries. Focal points for development and operational testing in each of the countries are identified, and procedures governing generation and release of T&E data are described in the Memorandum of Understanding (MOU). The European concept of OT&E is significantly different from that used by the U.S. DoD.

Early and thorough planning is an important element of any successful T&E program but is even more critical in a multinational program. Agreement must be reached concerning T&E procedures, data requirements, and methodology. Differences in tactics, battlefield representations, and military organizations may make it difficult for one nation to accept another's test data. Therefore, agreement must be reached in advance concerning the Operational Test (OT) scenario and battlefield representation that will be used.

22.4.2 International Test Operations Procedures

The Defense Acquisition Guidebook8 states that "results of T&E of systems using approved International Test Operations Procedures (ITOPs) may be accepted without repeating the testing." The ITOPs are documents containing standardized state-of-the-art test procedures prepared by the cooperative efforts of France, Germany, the United Kingdom, and the United States. Their use assures high quality, efficient, accurate, and cost-effective testing. The Director, Operational Test and Evaluation (DOT&E) is the OSD sponsor for providing basic guidance and direction to the ITOPs' processes. The ITOPs program is intended to shorten and reduce the costs of the materiel development and acquisition cycle, minimize duplicate testing, improve interoperability of U.S. and allied equipment, promote the cooperative development and exchange of advanced test technology, and expand the customer base. Each Service has designated an ITOPs point of contact: the Army uses the Test and Evaluation Management Agency (TEMA); in the Navy it is the Director, Navy T&E Division (N-912); and the Air Force has the Chief, Policy and Program Division (AF/TEP). The Army, which initiated the program in 1979, is the lead Service. A total of 75 ITOPs have been completed and published in six technical areas under the Four-Nation Test and Evaluation MOU. Additional ITOPs are under development by active working committees.9 Completed documents are submitted to the Defense Technical Information Center (DTIC) for official distribution.

22.5 U.S. AND NATO ACQUISITION PROGRAMS

Some test programs involve combined development and test of new weapon systems for U.S. and other NATO countries. In these programs, some differences from the regular "way of doing things" occur. For example, the formulation of the Request for Proposal (RFP) must be coordinated with the North Atlantic Program Management Agency (NAPMA); and their input to the Statement of Work (SOW), data requirements, operational test planning, and test schedule formulation must be included. Also, the U.S. Army operational user, Forces Command (FORSCOM), must be involved in the OT program. Usually, a Multinational Memorandum of Understanding is created concerning test program and production funding, test resources, test team composition, use of national assets for testing, etc.

Nations are encouraged to use the data that another nation has gathered on similar test programs to avoid duplication of effort. For example, during the U.S. and NATO Airborne Warning and Control System (AWACS) Electronic Support Measures (ESM) Program, both U.S. and NATO E-3As will be used as test aircraft in combined Development Test and Evaluation (DT&E) and subsequent OT&E. Testing will be conducted in the United States and Europe by the Program Management Office (PMO), contractor, U.S. operational users, Air Force Operational Test and Evaluation Center (AFOTEC), Force Command (NATO users), and logistics personnel for this program. A Multinational Memorandum of Agreement for this program was created. The U.S. program is managed by the AWACS Program Office, and the NATO program is managed by the NAPMA.

22.6 SUMMARY

The procurement of weapon systems from foreign nations for use by U.S. Armed Forces can provide the following advantages: reduced Research and Development (R&D) costs, faster initial operational capability, improved interoperability with friendly nations, and lower procurement costs because of economies of scale. This is normally a preferred solution to user requirements before attempting to start a new development. Testing such systems presents specific challenges to accommodate the needs of all users. OT&E conducted by allied and friendly nations is not as rigorous as that required by U.S. public law and DoD acquisition regulations. Such testing requires careful advance planning and systematic execution. Expectations and understandings must be well documented at an early stage to ensure that the test results have utility for all concerned.


ENDNOTES

1. Title 10, United States Code (U.S.C.), Section 2350A, Cooperative research and development agreements: NATO organizations; allied and friendly foreign countries, March 18, 2004.

2. DoD 5000.3-M-2, Foreign Comparative Testing Program Procedures Manual, January 1994; DoDI 5000.2, Operation of the Defense Acquisition System, May 12, 2003, Enclosure 5.

3. Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), Director, Strategic & Tactical Systems, The Foreign Comparative Testing (FCT) Handbook, March 2001, http://www.acq.osd.mil/cto.

4. DoDI 5000.2.

5. Public Law 94-361, "DoD Authorization Act, 1977" (Title 10, U.S.C. 2457), July 14, 1976.

6. Kausal, Tony, editor. Comparison of the Defense Acquisition Systems of France, Great Britain, Germany, and the United States, Defense Acquisition University, September 1999; Kausal, Tony, editor, and Stefan Markowski. Comparison of the Defense Acquisition Systems of Australia, Japan, South Korea, Singapore, and the United States, Defense Acquisition University, July 2000.

7. Defense Acquisition Guidebook, http://akss.dau.mil/DAG.

8. Ibid., Chapter 2, C2.9.2, "International Cooperation."

9. Check the Developmental Test Command (DTC) Web site at http://www.dtc.army.mil/publications/topsindex.aspx for updates.


23
COMMERCIAL AND NON-DEVELOPMENT ITEMS

Figure 23-1. The Spectrum of Technology Maturity

23.1 INTRODUCTION

Many options are available when an acquisition strategy calls for a materiel solution, before electing to develop a new system. They range from the preferred solution of using commercially available products from domestic or international sources to the least preferred option of a traditional, single-Service new Research and Development (R&D) program. Between these two extremes are other acquisition strategies that call for using modified systems at different component levels and unmodified or ruggedized cooperative developments to various extents.1 Figure 23-1 shows the broad spectrum of approaches that can be taken in a system acquisition and provides examples of systems that have been developed using each approach. The PM [Program Manager] is required to consider this spectrum of options for all levels of the system design.2

[Figure 23-1 arrays acquisition approaches from off-the-shelf procurement through ruggedization, militarization, standard subsystem assembly, development with standard subsystems, and development with standard components, to full development, with development time and cost increasing across the spectrum. Examples range from the commercial utility cargo vehicle and the Beretta 9mm handgun to the counter obstacle vehicle and the Apache helicopter. Source: Army Materiel Command Pamphlet 70-2, AMC-TRADOC Materiel Acquisition Handbook, March 26, 1987.]


23.1.1 Definitions

A commercial item is generally defined as any item, other than real property, that is of a type customarily used for non-governmental purposes and that: (1) has been sold, leased, or licensed to the general public; (2) has been offered for sale, lease, or license to the general public; or any item that evolved through advances in technology or performance and that is not yet available in the commercial marketplace, but will be available in the commercial marketplace in time to satisfy the delivery requirements under a government solicitation. Also included in the definition are services in support of a commercial item, of a type offered and sold competitively in substantial quantities in the commercial marketplace based on established catalog or market prices for specific tasks performed under standard commercial terms and conditions; this does not include services that are sold based on hourly rates without an established catalog or market price for a specific service performed.

A Non-Developmental Item (NDI) is considered: (1) any previously developed item of supply used exclusively for governmental purposes by a federal agency, a state or local government, or a foreign government with which the United States has a mutual defense cooperation agreement; (2) any item described in (1) that requires only minor modification or modifications of the type customarily available in the commercial marketplace in order to meet the requirements of the procuring department or agency; or (3) any item of supply being produced that does not meet the requirements of (1) or (2) solely because the item is not yet in use.
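The two definitions above are, in effect, decision rules that can be applied during early market screening. The following sketch encodes them as a simple checklist; the predicate names paraphrase the definitions, and the function is an illustration, not an authoritative implementation of the formal definitions.

# Illustrative screening checklist based on the definitions above.
# Predicate names are paraphrases; the formal definitions govern in practice.

def classify_candidate(customarily_non_governmental: bool,
                       sold_or_offered_to_public: bool,
                       previously_developed_for_government_use: bool,
                       only_minor_modification_needed: bool) -> str:
    """Return a rough category for early market-investigation screening."""
    if customarily_non_governmental and sold_or_offered_to_public:
        return "commercial item"
    if previously_developed_for_government_use and only_minor_modification_needed:
        return "non-developmental item (minor modification)"
    if previously_developed_for_government_use:
        return "non-developmental item"
    return "new development candidate"

# Example: an allied government's fielded radio that needs only minor modification.
print(classify_candidate(False, False, True, True))  # non-developmental item (minor modification)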

All such systems are required to undergo technical and Operational Test and Evaluation (OT&E) before the procurement decision, unless the decision authority makes a definitive decision that previous testing or other data (such as user/market investigations) provide sufficient evidence of acceptability.3 See Federal Acquisition Regulation (FAR), Section 2.101 for more precise definitions of commercial items and NDIs; and SD-2, Buying Commercial and Nondevelopmental Items: A Handbook, April 1, 1996.

23.1.2 Advantages and Disadvantages of Commercial and NDI Approaches

The use of commercial items and NDIs offers the following advantages:

• The time to field a system can be greatly reduced, providing a quicker response to the user's needs;

• R&D costs or Total Ownership Costs (TOCs) may be reduced;

• State-of-the-art technology may be available sooner.

Commercial items and NDIs offer the following disadvantages:

• Acquisitions are difficult to standardize/integrate with the current fleet equipment;

• Acquisitions create logistics support difficulties;

• Acquisitions tend not to have comparable competition; therefore, there are fewer second sources available;

• With commercial and NDI acquisitions, engineering and test data often are not available.

23.1.3 Application of Commercial and NDIs

Commercial items or NDIs may be used in the same environment for which the items were designed. Such items normally do not require development testing prior to the Production Qualification Test (PQT) except in those cases where a contract may be awarded to a contractor who has not previously produced an acceptable finished product and the item is assessed as high risk. In that case, preproduction qualification testing would be required.4 An operational assessment or some more rigorous level of OT&E might be appropriate.

Commercial items or NDIs may be used in an environment other than that for which the items were designed. Such items may require modifications in hardware and/or software. These items require testing in an operational environment, preproduction qualification testing (if previous testing resulted in item redesign), and production qualification testing.

Integration of commercial items or NDIs into a new development system may require some regression testing. These efforts require more extensive research, development, and testing to achieve effective operation of the desired system configuration. Testing required includes: feasibility testing in a military environment, preproduction qualification testing, hardware/software integration testing, operational testing, and production qualification testing. Given the variety of approaches that may be employed, it is imperative that the acquisition strategy clearly specify, with the agreement of the testing authority, the level of testing that will be performed on commercial items and NDI systems and the environment in which those systems will be tested.

23.2 MARKET INVESTIGATION AND PROCUREMENT

A market investigation is the central activity leading to the program initiation decision regarding the use of a commercial item or NDI acquisition strategy. The purpose of the market investigation is to determine the nature of available products and the number of potential vendors. Market investigations may vary from informal telephone inquiries to comprehensive industry-wide reviews. During the market investigation, sufficient data must be gathered to support a definitive decision, to finalize the requirements, and to develop an acquisition strategy that is responsive to these requirements. Included in the search would be dual-use technologies that meet military needs, yet have sufficient commercial application to support a viable production base. An assessment of Technology Readiness Level will provide the program office with insights into the readiness of the commercial item technologies for military application.5

During the market investigation, a formal "request for information" process may be followed wherein a brief narrative description of the requirement is published and interested vendors are invited to respond. Test samples or test items may be leased or purchased at this time to support the conduct of operational suitability tests, to evaluate the ability of the equipment to satisfy the requirements, and to help build the functional purchase description or system specification. This type of preliminary testing should not be used to select or eliminate any particular vendor or product unless it is preceded by competitive contracting procedures.6

It is imperative that technical and operational evaluators become involved during this early stage of any commercial item or NDI procurement and that they perform an early assessment of the initial issues. The evaluators must also relate these issues to Test and Evaluation (T&E) criteria and provide their Independent Evaluation Plans (IEPs) and reports to the decision authorities before the Milestone B decision review.

23.3 COMMERCIAL ITEM AND NDI TESTING

23.3.1 General Considerations

Test planning for commercial items and NDIs shall recognize commercial testing and experience, but nonetheless determine the appropriate DT&E, OT&E, and LFT&E [Live Fire Test and Evaluation] needed to ensure effective performance in the intended operational environment.7 The Defense Acquisition Guidebook suggests that "the PM shall develop an appropriate T&E strategy for commercial items to include evaluating commercial items in a system test bed, when practical; focusing test beds on high-risk items; and testing commercial item upgrades for unanticipated side effects in areas such as security, safety, reliability, and performance." T&E must be considered throughout the acquisition of a system that involves commercial items and NDI.

Program Managers (PMs) and their staff must ensure that the testing community is fully involved in the acquisition from the start. The amount and level of testing required depends on the nature of the commercial item or NDI and its anticipated use; it should be planned to support the design and decision process. At a minimum, T&E will be conducted to verify integration and interoperability with other system elements. All commercial item and NDI modifications necessary to adapt them to the weapon system environment will also be subject to T&E. Available test results from all commercial and government sources will determine the actual extent of testing necessary. For example, a commercial item or NDI usually encompasses a mature design. The availability of this mature design contributes to the rapid development of the logistics support system that will be needed. In addition, there are more "production" items available for use in a test program. PMs and their staff must remember that these systems also require activity in areas associated with traditional development and acquisition programs. For example, training and maintenance programs and manuals must be developed, and sufficient time should be allowed for their preparation.

When the solicitation package for a commercial item or NDI acquisition is assembled, PMs must ensure that it includes the following T&E-related items:

(1) Approved T&E issues and criteria;

(2) A requirement that the contractor provide a description of the testing previously performed on the system, including test procedures followed, data, and results achieved;

(3) PQT and quality conformance requirements;

(4) Acceptance test plans for the system and its components.

23.3.2 Testing Before Program Initiation

Since an important advantage of using a commercial item or NDI acquisition strategy is reduced acquisition time, it is important that testing not be redundant and that it be limited to the minimum effort necessary to obtain the required data. Testing can be minimized by:

(1) Obtaining and assessing contractor test results;

(2) Obtaining usage/failure data from other customers;

(3) Observing contractor testing;

(4) Obtaining test results from independent test organizations (e.g., Underwriters Laboratories);

(5) Verifying selected contractor test data.

If it is determined that more information is needed after the initial data collection from the above sources, commercial items or NDI candidates may be bought or leased, and technical and operational tests may be conducted.

23.3.3 Testing After Program Initiation

All testing to be conducted after the program initiation milestone decision to proceed with the commercial item or NDI acquisition should be described in the Acquisition Strategy and the Test and Evaluation Master Plan (TEMP). Development testing is conducted only if specific information is needed that cannot be obtained from contractor or other test data sources. Operational testing is conducted as needed. The Independent Operational Test and Evaluation (IOT&E) agency should concur in any decisions to limit or eliminate operational testing.

T&E continues even after the system has been fielded. This testing takes the form of a follow-on evaluation to validate and refine: operating and support cost data; Reliability, Availability, and Maintainability (RAM) characteristics; logistics support plans; and training requirements, doctrine, and tactics.

23.4 RESOURCES AND FUNDING

Programming and budgeting for a commercial item or NDI acquisition present a special challenge. Because of the short duration of the acquisition process, the standard lead times required in the normal Planning, Programming, Budgeting and Execution (PPBE) system cycle may be unduly restrictive. This situation can be minimized through careful advance planning and, in the case of urgent requirements, reprogramming/supplemental funding techniques.

Research, Development, Test, and Evaluation (RDT&E) funds are normally used to support the conduct of the Market Investigation Phase and the purchase or lease of candidate systems/components required for T&E purposes. The RDT&E funds are also used to support T&E activities such as: modification of the test article; purchase of specifications, manufacturer's publications, repair parts, special tools and equipment; transportation of the test article to and from the test site; and training, salaries, and temporary duty costs of T&E personnel. Procurement, operations, and maintenance funds are usually used to support Production and Deployment (P&D) costs.
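The funding split described above can be summarized as a simple mapping from cost item to appropriation. The following sketch restates the paragraph as a lookup table; it is illustrative only and is not a statement of funding policy.

# Sketch of the appropriation mapping described above (illustrative, not funding policy).
appropriation_for = {
    "market investigation": "RDT&E",
    "purchase or lease of candidate test articles": "RDT&E",
    "test article modification": "RDT&E",
    "specifications, publications, repair parts, special tools and equipment": "RDT&E",
    "transportation of the test article to and from the test site": "RDT&E",
    "training, salaries, and temporary duty of T&E personnel": "RDT&E",
    "production and deployment": "Procurement / Operations and Maintenance",
}

def funding_source(cost_item: str) -> str:
    """Look up which appropriation normally supports a given cost item."""
    return appropriation_for.get(cost_item, "undetermined; consult the program comptroller")

print(funding_source("test article modification"))  # RDT&E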

One chief reason for using a commercial item or NDI acquisition strategy is reduced overall TOC. Additional cost savings can be achieved after a contract has been awarded if the PM ensures that incentives are provided to contractors to submit Value Engineering Change Proposals (VECPs) to the government when unnecessary costs are identified.

23.5 SUMMARY

The use of commercial items and NDIs in a system acquisition can provide considerable time and cost savings. The testing approach used must be carefully tailored to the type of system, levels of modifications, technology risk areas, and the amount of test data already available. The T&E community must get involved early in the process so that all test issues are adequately addressed and timely comprehensive evaluations are provided to decision authorities.


ENDNOTES

1. Department of Defense Directive (DoDD) 5000.1, The Defense Acquisition System, May 12, 2003.

2. DoD Instruction (DoDI) 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

3. Ibid.

4. Ibid.

5. Defense Acquisition Guidebook, Appendix 6, http://akss.dau.mil/DAG/.

6. Department of the Army Pamphlet (DA PAM) 73-1, Test and Evaluation in Support of Systems Acquisition, February 1997.

7. DoDI 5000.2.


24
TESTING THE SPECIAL CASES

24.1 INTRODUCTION

This chapter covers the special factors and alternative test strategies the tester must consider in testing dangerous or lethal weapons, systems that involve one-of-a-kind or limited production, Advanced Concept Technology Demonstrations (ACTDs), and systems with high-cost and/or special security considerations. Examples include: chemical and laser weapons; ships; space weapons; and missile systems.

24.2 TESTING WITH LIMITATIONS

Certain types of systems cannot be tested using relatively standard Test and Evaluation (T&E) approaches for reasons such as a nonstandard acquisition strategy, resource limitations, cost, safety, or security constraints. The Test and Evaluation Master Plan (TEMP) must contain a statement that identifies "those factors that will preclude a full and completely realistic operational test...(IOT&E [Initial Operational Test and Evaluation] and FOT&E [Follow-on Operational Test and Evaluation])," such as inability to realistically portray the entire threat, limited resources or locations, safety, and system maturity. The impact of these limitations on the test's Critical Operational Issues (COIs) must also be addressed in the TEMP.

Nonstandard acquisition strategies are often used for one-of-a-kind or limited production systems. Examples of these include space systems, missiles, and ships. For one-of-a-kind systems, the production decision is often made prior to system design; hence, testing does not support the traditional decision process. In limited production systems, there are often no prototypes available for test; consequently, the tester must develop innovative test strategies.

The tester of dangerous or lethal systems, like chemical and laser weapons, must consider various safety, health, and medical factors in developing test plans, such as:

(1) Provision of medical facilities for pre- and post-test checkups and emergency treatment;

(2) Need for protective gear for participating/observer personnel;

(3) Approval of the test plan by the Surgeon General;

(4) Restrictions in selection of test participants (e.g., medical criteria or use of only volunteer troops);

(5) Restricted test locations;

(6) Environmental Impact Statements (EISs).

Also, the tester must allow for additional planning time, test funds, and test resources to accommodate such factors.

24.2.1 Chemical Weapons Testing

The testing of chemical weapons poses unique problems, because the tester cannot perform actual open-air field testing with real nerve agents or other toxic chemicals. Since the United States signed and ratified the Geneva Protocol of 1925, U.S. policy has been that the United States will never be the first to use lethal chemical weapons; it may, however, retaliate with chemical weapons if so attacked. In addition to the health and safety factors discussed in the last paragraph, test issues the chemical weapons tester must address include:

(1) All possible chemical reactions due to variations such as moisture, temperature, pressure, and contamination;

(2) Physical behavior of the chemical; i.e., droplet size, dispersion density, and ground contamination pattern when used operationally;

(3) Toxicity of the chemical; i.e., lethality and duration of contamination when used operationally;

(4) Safety of the chemical weapon during storage, handling, and delivery;

(5) Decontamination process.

Addressing all of these issues requires a combination of laboratory toxic chamber tests and open-air field testing. The latter must be performed using "simulants," which are substances that replicate the physical and chemical properties of the agent but with no toxicity.

The development and use of simulants for testing will require increased attention as more chemical weapons are developed. Chemical agents can demonstrate a wide variety of effects depending on such factors as moisture, temperature, and contamination. Consequently, the simulants must be able to replicate all possible agent reactions; it is likely that several simulants would have to be used in a test to produce all predicted agent behaviors. In developing and selecting simulants, the tester must thoroughly understand all chemical and physical properties and possible reactions of the agent.
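Selecting a set of simulants that together reproduce all predicted agent behaviors is, in effect, a coverage problem. The following sketch uses a greedy heuristic purely for illustration; the behavior and simulant names are invented, and real simulant selection rests on the chemistry and physics of the agent, not on a coverage heuristic alone.

# Greedy selection of simulants to cover predicted agent behaviors (illustrative only).
# Behavior and simulant names are hypothetical.

predicted_behaviors = {"droplet size", "dispersion density",
                       "ground contamination pattern", "degradation rate"}

simulant_coverage = {
    "simulant A": {"droplet size", "dispersion density"},
    "simulant B": {"ground contamination pattern"},
    "simulant C": {"degradation rate", "dispersion density"},
}

def select_simulants(behaviors: set, coverage: dict) -> list:
    """Pick simulants one at a time, each covering the most behaviors still uncovered."""
    selected, uncovered = [], set(behaviors)
    while uncovered:
        best = max(coverage, key=lambda name: len(coverage[name] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining behaviors cannot be replicated by any available simulant
        selected.append(best)
        uncovered -= coverage[best]
    return selected

print(select_simulants(predicted_behaviors, simulant_coverage))
# ['simulant A', 'simulant B', 'simulant C'] for this example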

Studies of the anticipated reactions can be performed in toxic-chamber tests using the real agent. Here, factors such as changes in moisture, temperature, pressure, and levels of impurity can be controlled to assess the agent's behavior. The tester must think through all possible environmental conditions in which the weapon could operate so all cases can be tested in the laboratory chamber with the real agent. For example, during development testing of the BIGEYE chemical weapon, it was found that higher-than-expected temperatures due to aerodynamic heating caused pressure buildup in the bomb body that resulted in the bomb exploding. This caused the operational concept for the BIGEYE to be changed from on-board mixing of the two chemicals to mixing after release of the bomb.

Tests bearing on toxicity must also be conducted in the actual environment using simulants. Since the agent's toxicity depends on factors such as droplet size, dispersion density, ground contamination pattern, and degradation rate, a simulant that behaves as the agent does must be used in actual field testing; the agent's toxicity itself is determined in the laboratory.

The Services publish a variety of technical documents on specific chemical test procedures. Documents such as the Index of Test Operations Logistics Support: Developmental Supportability Test and Evaluation Guide, a bibliography that includes numerous reports on chemical testing issues and procedures, can be consulted for specific documentation on chemical testing.1

24.2.2 Laser Weapons Testing

Many new weapon systems are being designed with embedded laser range finders and laser designators. Because of the danger to the human eye posed by lasers, the tester must adhere to special safety requirements and utilize specially configured geographic locations during T&E. For instance, a program seeking to free-play non-eye-safe airborne or ground lasers during tests must take significant precautions: the airspace must be restricted from overflight during active testing, and guards must be posted to prevent anyone (hikers, bikers, off-road vehicles, equestrians) from accidentally venturing into the area. A potential solution to the safety issue is to develop and use an "eye-safe" laser for testing. The tester must ensure that eye-safe lasers produce the same laser energy as the real laser system.

Another concern of the laser/directed energy weapons tester is the accurate determination of energy levels and location on the target. Measurements of the energy on the target are usually conducted in the laboratory as part of Development Test (DT). In the field, video cameras are often used to verify that a laser designator did indeed illuminate the target. Such determinations are important when the tester is trying to attribute weapon performance to behavior of the directed energy weapon, behavior of the guidance system, or some other factor.

A bibliography of Army test procedures, Test and Evaluation Command (ATEC) Pamphlet 310-4, lists several documents that cover the special issues associated with laser testing.

24.3 SPACE SYSTEM TESTING

From a historical perspective, space system acquisition has posed several unique problems to the test process (especially the Operational Test (OT) process) that generally fall into four categories: limited quantities/high cost, the "incremental upgrade" approach to acquisition, the operating environment (peacetime and wartime), and the test environment. Expanded Air Force guidance can be found in Space Systems Test and Evaluation Process.2 More generic guidance is in the National Security Space (NSS) Acquisition Policy, which outlines a streamlined acquisition framework.3

(1) Limited quantities/high cost: Space systems have traditionally involved the acquisition of relatively few (historically, less than 20) systems at extremely "high per-unit costs" (in comparison with more traditional military systems). The high per-unit costs are driven by a combination of high transportation costs (launch to orbit); high life-cycle reliability requirements and associated costs because of the lack of an "on-orbit" maintenance capability; and the high costs associated with "leading edge" technologies that tend to be a part of spacecraft design. From a test perspective, this serves to drive space system acquisition strategy into the "nonstandard" approach addressed below. The problem is compounded by the "incremental upgrade" approach to acquisition.

(2) Incremental approach to acquisition: Due to the "limited buy" and "high per-unit cost" nature of spacecraft acquisition, these systems tend to be procured using an individual increment acquisition strategy. Under this concept, the "decision to deploy" is often made at the front end of the acquisition cycle, and the first prototype to be placed in orbit becomes the first operational asset. As early and follow-on systems undergo ground and on-orbit testing [either Development Test and Evaluation (DT&E) or Operational Test and Evaluation (OT&E)], discrepancies are corrected by "increment changes" to the next system in the pipeline. This approach to acquisition can perturb the test process as the tester may have no formal milestone decisions to test toward. The focus must change toward being able to influence the design of (and block changes to) systems further downstream in the pipeline. As the first "on-orbit" asset usually becomes the first operational asset, pressure is created from the operational community to expedite (and sometimes limit) testing so a limited operational capability can be declared and the system can begin fulfilling mission requirements. Once the asset "goes operational," any use of it for testing must compete with operational mission needs, a situation potentially placing the tester in a position of relatively low priority. Recognition of these realities and careful "early-on" test planning can overcome many of these problems, but the tester needs to be involved and ready much earlier in the cycle than with traditional systems.

(3) Operating environment (peacetime and wartime): Most currently deployed space systems and near-term future space systems operate in the military support arena, such as tactical warning/attack assessment, communications, navigation, weather, and intelligence; and their day-to-day peacetime operating environment is not much different from the wartime operating environment except for activity level (i.e., message throughput, more objects to track/see, etc.). Historically, space has been a relatively benign battlefield environment because of technology limitations in the capability of potential adversaries to reach into space with weapons. But this is no longer valid. This combination of support-type missions and a battlefield environment that is not much different from the peacetime environment has played a definite role in allowing systems to reach limited operational capability without as much dedicated prototype system-level testing as seen on other systems. This situation is changing with the advent of concepts like the Missile Defense Agency [MDA] system, where actual weapons systems (impact anti-satellite and laser) may be in operation, and day-to-day peacetime operations will not mirror the anticipated battlefield environment as closely. Likewise, the elevation of the battlefield into space and the advancing technologies that allow potential adversaries to reach into space are changing the thrust of how space systems need to be tested in space. The Department of Defense (DoD) should anticipate an increased need for dedicated on-orbit testing on a type of space range where the battlefield environment that will be replicated can be anticipated—a situation similar to the dedicated testing done today on test ranges with Army, Navy, and Air Force weapons.

(4) Test environment: The location of space assets in "remote" orbits also compounds the test problem. Space systems do not have the ready access (as with ground or aircraft systems) to correct deficiencies identified during testing. This situation has driven the main thrust of testing into the "prelaunch" ground simulation environment, where discrepancies can be corrected before the system becomes inaccessible. However, as mentioned previously, when space system missions change from a war-support focus to a warfighting focus, and the number of systems required to do the mission increases from the "high reliability/limited number" mode to a more traditional "fairly large number buy" mode, future space system testing could be expected to become more like the testing associated with current ground, sea, and air systems. From a test perspective, this could also create unique "test technology" requirements; i.e., with these systems we will have to bring the test range to the operating system as opposed to bringing the system to the range. Also, because the space environment tends to be "visible to the world" (others can observe our tests as readily as we can), unique test operations security methodologies may be required to allow us to achieve test realism without giving away system vulnerabilities.

In summary, current and near-term future space systems have unique test methodologies. However, in the future, space operations might entail development/deployment of weapon platforms on orbit with lower design-life reliability (because of cost), and day-to-day peacetime operations will not mirror the wartime environment. Thus, space system testing requirements may begin to more closely parallel those of traditional weapon systems.


24.4 OPERATIONS SECURITY AND T&E

Operations Security (OPSEC) issues must be considered in all test planning. Security regulations and contracting documents require the protection of "sensitive design information and test data" throughout the acquisition cycle by:

(1) Protecting sensitive technology;

(2) Eliminating nonsecure transmittal of data on and from test ranges;

(3) Providing secure communications linking DoD agencies to each other and to their contractors.

Such protection is obviously costly and will require additional planning time, test resources, and test constraints. The test planner must determine all possible ways in which the system could be susceptible to hostile exploitation during testing. For example, announcement of test schedule and location could allow monitoring by unauthorized persons. Knowledge of the locations of systems and instrumentation or test concepts could reveal classified system capabilities or military concepts. Compilations of unclassified data could, as a whole, reveal classified information, as could surveillance (electronic or photographic) of test activities or interception of unencrypted transmissions. The T&E regulations of each Service require an operational security plan for a test. A detailed list of questions the test planner can use to identify the potential threat of exploitation is provided in Management of Operational Test and Evaluation.4

24.5 ADVANCED CONCEPT TECHNOLOGY DEMONSTRATIONS

Systems with potential utility for the user and relatively mature technology may be evaluated by a user in an operational field environment. These programs are not acquisition programs and therefore are not subject to the normal acquisition T&E processes. A favorable evaluation may result in a decision to acquire additional systems for Service use, bypassing a number of the normal acquisition phases. The Services have been using their operational test agencies to assist field commanders in structuring an evaluation process that would provide the documented data necessary for an informed acquisition decision.

24.6 SUMMARY

All weapon systems tests are limited to some degree, but certain systems face major limitations that could preclude a comprehensive and realistic evaluation. The test planners of these special systems must allow additional planning time, budget for extra test resources, and devise alternative test strategies to work around testing limitations caused by such factors as security restrictions, resource availability, environmental safety factors, and nonstandard acquisition strategies.


ENDNOTES

1. U.S. Army Test and Evaluation Command (ATEC) Pamphlet 310-4, Index of Test Operations Logistics Support: Developmental Supportability Test and Evaluation Guide.

2. Air Force Manual (AFM) 99-113, Space Systems Test and Evaluation Process, May 1, 1996.

3. National Security Space (NSS) Acquisition Policy 03-01, October 3, 2003.

4. Air Force Regulation (AFR) 55-43, Management of Operational Test and Evaluation.


25
BEST PRACTICES IN T&E OF WEAPON SYSTEMS

25.1 INTRODUCTION

Numerous studies conducted by various agencies before 2000 highlighted different perspectives on best practices for Test and Evaluation (T&E). The Office of the Secretary of Defense (OSD) published its study, Best Practices Applicable to DoD Developmental Test and Evaluation.1 The Executive Summary stated, "While the study team found no 'Silver Bullets,' it did identify some 20 practices used by commercial enterprises that are relevant to ODTSE&E [Office of the Director, Test Systems Engineering and Evaluation] business practices. These practices have been grouped under the categories 'Policy,' 'Planning,' 'Test Conduct,' and 'Test Analysis'."

Shortly thereafter, in September 1999, the Defense Science Board (DSB) Task Force released its report on a broad review of the entire range of activities relating to T&E. Its summary recommendations were: start T&E early—very early; make T&E part of the acquisition process—not adversarial to it; consolidate Development Test (DT) and Operational Test (OT); provide joint test leadership; fund Modeling and Simulation (M&S) support of T&E in program budgets; maintain independence of the evaluation process while integrating all other activities; and establish a range ownership and operation structure separate from the Service DT/OT organizations. A follow-on DSB report addressed the value of testing, management of T&E resources, quality of testing, specific T&E investments, and the use of training facilities/exercises for T&E events.2

In the same time frame, A. Lee Battershell released a study for the National Defense University (NDU) comparing the acquisition practices of the Boeing 777 and the Air Force C-17. Her most interesting, yet not surprising, conclusion was that some commercial best practices do not transfer to government.

This was followed by the publication of the General Accounting Office (GAO) Report Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcomes.3 After comparing commercial and defense system development practices, the report's three main findings were: problems found late in development signal weakness in testing and evaluation; testing early to validate product knowledge is a best practice; and different incentives make testing a more constructive factor in commercial programs than in weapon system programs.

The following information offers guidance to Department of Defense (DoD) personnel who plan, monitor, and execute T&E. Checklists found in the remainder of the chapter were obtained from the Report of Task Force on Test and Evaluation.4

This excellent study is highly regarded in the T&E community but has become somewhat dated; consequently, the Defense Acquisition University (DAU) decided to update the study findings and include those findings and summary checklists in this guide. This discussion follows the system from early concept through the various stages of technology maturation demonstrated in different system/test article configurations.


25.2 SPECIFIC WEAPON SYSTEMS TESTING CHECKLIST

The DSB report is the result of a study of past major weapon systems acquisitions. It was hoped that this study would enhance the testing community's understanding of the role that T&E has had in identifying system problems during the acquisition process. In the foreword of the DSB study, the authors made this statement about including even obvious testing activities in their checklist:

The T&E expert in reading this volume will find many precepts which will strike him as of this type. These items are included because examples were found where even the obvious has been neglected, not because of incompetence or lack of personal dedication by the people in charge of the program, but because of financial and temporal pressures which forced competent managers to compromise on their principles. It is hoped that the inclusion of the obvious will prevent repetition of the serious errors which have been made in the past when such political, economical, and temporal pressures have forced project managers to depart from the rules of sound engineering practices.... In the long run, taking short cuts during T&E to save time and money will result in significant increases in the overall costs of the programs and in a delay of delivery of the corresponding weapon systems to combatant forces.

25.2.1 Aircraft Systems

25.2.1.1 Concept Assessment

• Test Program/Total Costs. Prior to program initiation, all phases of the aircraft test program should be considered so the total costs and the development schedules include consideration of all likely activities in the overall program.

• Test Facilities and Instrumentation. The test facilities and instrumentation requirements to conduct tests should be generally identified along with a tentative schedule of test activities.

• Test Resources and Failures. Ensure that there are adequate funds, reasonable amounts of time, and acceptable numbers of aircraft planned for the various test program phases, and that provisions are made for the occurrence of failures.

• System Interfaces. Consider all aircraft system interfaces, their test requirements, and probable costs at the outset of the concept assessment.

• Major Weapon Subsystems. If the aircraft system relies on the successful development of a specific and separately funded major weapon (such as a gun or missile) in order to accomplish its primary mission, this major subsystem should be developed and tested concurrently with, or prior to, the aircraft.

• Propulsion System. If the aircraft program is paced by the propulsion system development, an early advanced-development project for the propulsion may be appropriate for a new concept.

• Operational Scenario. A conceptual operational scenario for operation and use of the aircraft should be developed so that general test plans can be designed. This should include purpose, roles and missions, threats, operating environments, logistics and maintenance, and basing characteristics. The potential range of values on these aspects should be stated.

• Evaluation Criteria. Develop evaluation criteria to be used for selecting the final aircraft system design.

• Untried Elements. The aircraft development program should include conclusive testing to eliminate uncertainties of the untried elements.


• Brassboard Avionics Tests. The use of brassboard or modified existing hardware to "prove" the concept will work should be seriously scrutinized to ensure that the demonstrations and tests are applicable.

• Nuclear Weapons Effects. The subject of nuclear weapons effects should be addressed in the test concept for all aircraft weapons systems where operational suitability dictates that survivable exposure to nuclear weapons effects is a requirement.

25.2.1.2 Prototype Development

• T&E Strategy. T&E plans and test criteria should be established so there is no question on what constitutes a successful test and what performance is required.

• Milestones and Goals. Ensure an integrated system test plan that preestablishes milestones and goals for easy measurement of program progress at a later time.

• Operating Concept and Environment. The operational concept and the environments in which the aircraft will be expected to operate and be tested in Operational Test and Evaluation (OT&E) should be specified.

• Test Program Building Blocks. In testing the prototype, demonstrate that high-risk technology is in hand. In planning the system level test program, ensure that components and subsystems are adequately qualified for incorporation into the system tests.

• Technology Concepts. Each concept to be used in the aircraft system (e.g., aerodynamics, structures, propulsion) should be identified and coded according to prior application, before future research. Tests for each concept should be specified with the effect of failure identified.

• Development Test and Evaluation (DT&E)/OT&E Plan. The aircraft DT&E/OT&E test plan should be reviewed to ensure it includes ground and flight tests necessary to safely and effectively develop the system.

• Test Failures. The T&E plans should be made assuming there will be failures; they are inevitable.

• Multi-Service Testing. When a new aircraft development program requires multi-Service testing during OT&E and prior to Low Rate Initial Production (LRIP), the test plan should include the types of tests and resources required from other activities and Services.

• Traceability. The aircraft development and test program should be designed and scheduled so if trouble arises, its source can be traced back through the lab tests and the analytical studies.

• Competitive Prototype Tests. When a competitive prototype test program using test and operational crews is employed, the aircraft should be compared on the basis of the performance of critical missions.

• Prototype Similarity to Development and Production Aircraft. A firm determination should be made of the degree of similarity of the winning prototype (in a competitive prototype program) to the Engineering Development Model (EDM) and production aircraft. Thus, test results that are derived from the prototype in the interim period prior to availability of the EDM aircraft can be utilized effectively.

• Prototype Tests. The prototype aircraft test data should be used to determine where emphasis should be placed in the engineering development program.

• Inlet/Engine/Nozzle Match. The aircraft test program should provide for an early and adequate inlet/engine/nozzle match through a well-planned test program, and time should be programmed for corrections.

• Subsystem Tests. There should be a balanced program for the aircraft subsystem tests.

• Propulsion System. If the aircraft is paced by the propulsion systems development, an early advanced-development project for the propulsion may be appropriate for a new concept.

• Electromagnetic Interference (EMI) Testing. Full-scale aircraft systems tests in an anechoic chamber are desirable for some aircraft.

• Parts Interchange. Early plans should provide for tests where theoretically identical parts, particularly in avionics, are interchanged to ensure that the aircraft systems can be maintained in readiness.

• Human Factors Demonstration. Ensure adequate demonstration of human factors is considered in test planning.

• Logistics T&E. Adequate resources should be scheduled for the aircraft logistics system T&E, and a positive program should exist for the utilization of this information at the time of OT&E.

• User Participation. It is imperative that the operational command actively participate in the DT&E phase to ensure that user needs are represented in the development of the system.

25.2.1.3 Engineering Development Model

• Test Design. Test programs should be designed to have a high probability of early identification of major deficiencies during the DT&E and Initial Operational Test and Evaluation (IOT&E).

• Data for Alternate Scenarios. By careful attention to testing techniques, maximize the utility of the test data gathered; aircraft instrumentation; range instrumentation; and data collection, reduction, and storage.

• Test Milestones. Development programs should be built around testing milestones, not calendar dates.

• Production Engineering Influence on Research and Development (R&D) Hardware. Encourage that production philosophy and production techniques be brought, to the maximum practicable extent, into an early phase of the design process for R&D hardware.

• Running Evaluation of Tests. Ensure that running evaluations of tests are conducted. If it becomes clear that test objectives are unattainable or additional samples will not change the test outcome, ensure that procedures are established for terminating the test.

• Simulation. Analysis and simulation should be conducted, where practicable, before each phase of development flight testing.

• Avionics Mock-up. Encourage use of a complete avionics system installed in a mock-up of the appropriate section or sections of the aircraft.

• Escape Systems Testing. Ensure the aircrew escape system is thoroughly tested with particular attention to redundant features, such as pyrotechnic firing channels.

• Structural Testing. Ensure that fatigue testing is conducted on early production airframes. Airframe production should be held to a low rate until satisfactory progress is shown in these tests.

• Gun Firing Tests. All forms of ordnance, especially those that create gases, must be fired from the aircraft for external effects (blast and debris), internal effects (shock), and effects on the propulsion (inlet composition or distribution).

• Post-Stall Characteristics. Special attention is warranted on the post-stall test plans for DT&E and OT&E.

• Subsystem Performance History. During DT&E and IOT&E of aircraft, ensure that a performance history of each aircraft subsystem is kept.

• Flight Deficiency Reporting. Composition of flight deficiency reports by aircrews, particularly those pertaining to avionics, should be given special attention.

• Crew Limitations. Ensure aircrew limitations are included in the tests.

• Use of Operational Personnel. Recommend experienced operational personnel help in establishing Measures of Effectiveness (MOEs) and in other operational test planning. In conducting OT&E, use typical operational aircrews and support personnel.

• Role of the User. Ensure that users participate in the T&E phase so their needs are represented in the development of the system concept and hardware.

• Crew Fatigue and System Effectiveness. In attack aircraft operational testing, and particularly in attack helicopter tests where vibration is a fatiguing factor, ascertain that the tests include a measure of degradation over time.

• Time Constraints on Crews. Detailed OT plans should be evaluated to determine that the test-imposed conditions on the crew do not invalidate the applicability of the collected data.

• Maintenance and Training Publications. The aircraft development program should provide for concurrent training of crews and preparation of draft technical manuals to be used by IOT&E maintenance and operating crews.

• R&D Completion Prior to IOT&E. The testing plans should ensure that, before an aircraft system is subjected to IOT&E, the subsystems essential to the basic mission have completed R&D.

• Complete Basic DT&E before Starting OT&E. Before the weapon system is subjected to IOT&E, all critical subsystems should have completed basic DT&E and significant problems should be solved.

• Realism in Testing. Ascertain that final DT&E system tests and IOT&E flight tests are representative of operational conditions.

• Test All Profiles and Modes. Tests should be conducted to evaluate all planned operational flight profiles and all primary and backup, degraded operating modes.

• Update of OT Plans. Ensure that OT plans are reviewed and updated, as needed, to make them relevant to evolving concepts.

• Plan OT&E Early. Ensure that operational suitability tests are planned to attempt to identify operational deficiencies of new systems quickly so fixes can be developed and tested before large-scale production.

• Missile Launch Tests. Review the final position fix planned before launching inertial-guided air-to-surface missiles.

• Mission Completion Success Probability. Mission completion success probability factors should be used to measure progress in the aircraft test program (a notional calculation is sketched below).
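One common way to roll up such factors is to treat the mission completion success probability (MCSP) as the product of the demonstrated success probabilities of the mission-essential phases and then watch its trend across test periods. The short sketch below illustrates only that arithmetic; the phase names and values are hypothetical and are not taken from the guide.

```python
# Illustrative sketch only; phase names and reliability values are hypothetical.
from math import prod

def mcsp(phase_reliabilities):
    """Mission completion success probability as the product of phase reliabilities."""
    return prod(phase_reliabilities.values())

# Hypothetical demonstrated reliabilities from two successive test periods.
period_1 = {"takeoff/climb": 0.99, "navigation": 0.95, "weapon delivery": 0.90, "recovery": 0.98}
period_2 = {"takeoff/climb": 0.99, "navigation": 0.97, "weapon delivery": 0.94, "recovery": 0.99}

for label, period in (("Test period 1", period_1), ("Test period 2", period_2)):
    print(f"{label}: MCSP = {mcsp(period):.3f}")  # an upward trend indicates test program progress
```

This sketch assumes phase outcomes are independent; an actual evaluation would use the program's approved measures and definitions.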


25.2.1.4 Production (LRIP and Full Rate), Deployment and Operational Support

• OT Realism. Assure IOT&E and Follow-on Test and Evaluation (FOT&E) are conducted under realistic conditions.

• Design FOT&E for Less-Than-Optimal Conditions. Structure the FOT&E logistical support for simulated combat conditions.

• New Threat. Be alert to the need to extend the IOT&E if a new threat emerges. Address IOT&E limitations in FOT&E.

• Certification of Ordnance. Ensure that ordnance to be delivered by an aircraft is certified for the aircraft.

• Inadvertent Influence of Test. The IOT&E/FOT&E plans should provide measures for ensuring that actions by observers and umpires do not unwittingly influence trial outcomes.

• Deficiencies Discovered In-Service. Be aware that in-Service operations of an aircraft system will surface deficiencies that extensive IOT&E/FOT&E probably would not uncover.

• Lead the Fleet. Accelerated Service testing of a small quantity of early production aircraft is advisable, both initially and periodically during FOT&E thereafter.

25.2.2 Missile Systems

25.2.2.1 Concept Assessment

• Weapon System Interfaces. Consider significant weapon system interfaces, their test requirements, and probable costs at the outset of the concept assessment. Ensure that the program plan assembled before program start includes an understanding of the basic test criteria and broad test plans for the whole program.

• Number of Test Missiles. Ensure that there is sufficient time and a sufficient number of test articles to support the program through its various phases. Compare the program requirements with past missile programs of generic similarity. If there is substantial difference, then adequate justification should be provided. The DT&E period on many programs has had to be extended as much as 50 percent.

• T&E Gap. A T&E gap has been experienced in some missile programs between the time when testing with R&D hardware was completed and the time when follow-on operational suitability testing was initiated with production hardware.

• Feasibility Tests. Ensure experimental test evidence is available to indicate the feasibility of the concept and the availability of the technology for the system development.

• Evaluation of Component Tests. Results of tests conducted during the concept assessment and the prototype testing, which most likely have been conducted as avionics brassboard, breadboard, or modified existing hardware, should be evaluated with special attention.

• Multi-Service Testing Plans. When a new missile development program requires multi-Service testing during OT&E, the early Test and Evaluation Master Plan (TEMP) should include the type of tests and resources required from other activities and Services.

• Test Facilities and Instrumentation Requirements. The test facilities and instrumentation requirements to conduct tests should be generally identified early along with a tentative schedule of test activities.


25.2.2.2 Prototype Testing

• Establish Test Criteria. By the end of prototype testing, test criteria should be established so there is no question on what constitutes a successful test and what performance is expected.

• Human Factors. Ensure that the TEMP includes adequate demonstration of human factors considerations.

• Instrumentation Diagnostic Capability and Compatibility. Instrumentation design, with adequate diagnostic capability and compatibility in DT&E and IOT&E phases, is essential.

• Provisions for Test Failures. The DT&E and OT&E plans should include provisions for the occurrence of failures.

• Integrated Test Plan (ITP). Ensure development of an integrated system test plan that pre-establishes milestones and goals for easy measurement of program progress at a later time.

• T&E Requirements. Ensure that the T&E program requirements are firm before approving an R&D test program. Many missile programs have suffered severe cost impacts as a result of this deficiency. The test plan must include provisions to adequately test those portions of the operational envelope that stress the system, including backup and degraded operational modes.

• Personnel Training Plans. Ensure that adequate training and certification plans for test personnel have been developed.

• Test and Engineering Reporting Format. Include a T&E reporting format in the program plan. Attention must be given to the reporting format in order to provide a consistent basis for T&E throughout the program life cycle.

• Program-to-Program Cross Talk. Encourage program-to-program T&E cross talk. T&E problems and their solutions on one program provide a valuable index of lessons learned and techniques for problem resolution on other programs.

• Status of T&E Offices. Ensure that T&E offices reporting to the Program Manager (PM) or director have the same stature as other major elements. It is important that the T&E component of the system program office has organizational status and authority equal to configuration management, program control, system engineering, etc.

• Measurement of Actual Environments. Thorough measurements should be made to define and understand the actual environment in which the system components must live during the captive, launch, and in-flight phases.

• Thoroughness of Laboratory Testing. Significant time and money will be saved if each component, each subsystem, and the full system are all tested as thoroughly as possible in the laboratory.

• Contract Form. The contract form can be extremely important to the T&E aspects. In one program, the contract gave the contractor full authority to determine the number of test missiles; and in another, the contract incentive resulted in the contractor concentrating tests on one optimum profile to satisfy the incentive instead of developing the performance throughout important areas of the envelope.

• Participation of Operational Command. It is imperative that the operational command actively participate in the DT&E phase to ensure that user needs are represented in the development of the system.


25.2.2.3 Engineering Development Model

• Production Philosophy and Techniques. Encourage that production philosophy and production techniques be brought, to the maximum practicable extent, into an early phase of the design process for R&D hardware. There are many missile programs in which the components were not qualified until the missile was well into production.

• Operational Flight Profiles. Tests should be conducted to evaluate all planned operational flight profiles and all primary and backup degraded operating modes.

• Failure Isolation and Responsive Action. Does the system test plan provide for adequate instrumentation so missile failures can be isolated and fixed before the next flight?

• Responsive Actions for Test Failures. Encourage a closed-loop reporting and resolution process, which ensures that each test failure at every level is closed out by appropriate action; i.e., redesign, procurement, retest, etc.

• Plan Tests of Whole System. Plan tests of the whole system, including proper phasing of the platform and supporting gear, the launcher, the missile, and user participation.

• Determination of Component Configuration. Conditions and component configuration during Development Tests (DTs) should be determined by the primary objectives of that test. Whenever a non-operational configuration is dictated by early test requirements, tests should not be challenged by the fact that the configuration is not operational.

• Testing of Software. T&E should ensure that software products are tested appropriately during each phase. Software often has been developed more as an add-on than as an integral part of the overall system. Software requirements need the same consideration as hardware requirements in the prototype development.

• Range Safety Dry Runs. Ensure the test plan includes adequate test program/range safety dry runs. The government test ranges have to provide facilities to safely test many different projects:
— Assemblies/Subsystems special requirements,
— Seekers and tracking devices,
— Propulsion subsystems,
— Connectors and their related hardware,
— Lanyard assemblies,
— Safing, arming, fuzing, and other ordnance devices.

• Review of Air-to-Surface Missile (ASM) Test Position Fixes. Review the final position fix planned before launching ASMs. There are instances in which the OT of air-launched missiles utilized artificial position fixes just prior to missile launch.

• Operator Limitations. Ensure operator limitations are included in the tests. Most tactical missiles, especially those used in close support, require visual acquisition of the target by the missile operator and/or an air/ground controller.

• Test Simulations and Dry Runs. Plan and use test simulations and dry runs. Dry runs should be conducted for each new phase of testing. Simulation and other laboratory or ground testing should be conducted to predict the specific test outcome. The "wet run" test should finally be run to verify the test objectives. Evaluation of the simulation versus the actual test results will help to refine the understanding of the system.

• Component Performance Records. Keep performance records on components. There are many examples in missile programs that have required parts stock sweeps associated with flight failures and component aging test programs.

• Tracking Test Data. Ensure the test program tracks data in a readily usable manner. Reliability and performance evaluations of a missile system should break down the missile's activity into at least the following phases (a notional phase-level tally is sketched after this checklist):
— Prelaunch, including captive carry reliability,
— Launch,
— In-flight,
— Accuracy/fuzing.

• Updating IOT&E Planning. Periodically update Production Qualification Testing (PQT) and IOT&E planning during the early R&D phase. Few missile system programs have had adequate user participation with the desired continuity of personnel to minimize the problems of transition from DT&E to OT&E to deployment/utilization.

• Instrumentation Provisions in Production Missiles. Encourage built-in instrumentation provisions in production missiles.

• Constraints on Missile Operator. Detailed test plans should be evaluated to determine that the test-imposed constraints on the missile operator do not invalidate the applicability of the data so collected.

• Problem Fixes Before Production. Ensure that operational suitability tests identify operational deficiencies of new systems quickly so fixes can be developed and tested before production.

• Flight Tests Representative of Operations. Ascertain that final DT&E system tests and planned IOT&E flight tests are representative of operational flights. Some ballistic missile R&D programs have shown high success rates in R&D flight tests; however, when the early production systems were deployed, they exhibited a number of unsatisfactory characteristics such as poor alert reliability and poor operational test-flight reliability.

• System Interfaces in OT. Ensure the primary objective of OT planning is to obtain measurements on the overall performance of the weapon system when it is interfaced with those systems required to operationally use the weapon system.
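A minimal sketch of the phase-level tracking suggested under "Tracking Test Data" above: each test event is logged against a phase so that per-phase reliability can be tallied at any time. The events shown are hypothetical and purely illustrative.

```python
# Illustrative sketch only; the logged test events below are hypothetical.
from collections import defaultdict

PHASES = ("prelaunch", "launch", "in-flight", "accuracy/fuzing")

# Each test event is recorded as (phase, success?) so data stay usable by phase.
events = [
    ("prelaunch", True), ("launch", True), ("in-flight", False),
    ("prelaunch", True), ("launch", True), ("in-flight", True),
    ("accuracy/fuzing", True),
]

tally = defaultdict(lambda: [0, 0])  # phase -> [successes, trials]
for phase, success in events:
    tally[phase][0] += int(success)
    tally[phase][1] += 1

for phase in PHASES:
    successes, trials = tally[phase]
    if trials:
        print(f"{phase}: {successes}/{trials} observed reliability = {successes / trials:.2f}")
    else:
        print(f"{phase}: no trials recorded yet")
```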

25.2.2.4 Production (LRIP, Full Rate), Deployment and Operational Support

• Realistic Conditions for Operational Testing. Ascertain operational testing is conducted under realistic combat conditions. This means that the offense/defense battle needs to be simulated in some way before the weapon system evaluation can be considered completed. Whether this exercise is conducted within a single Service (as in the test of a surface-to-surface antitank missile against tanks) or among Services (as in the test of an ASM against tanks with antiaircraft protection), the plans for such testing should be formulated as part of the system development plan.

• Testing All Operational Modes. Ensure the FOT&E plan includes tests of any operational modes not previously tested in IOT&E. All launch modes, including degraded and backup modes, should be tested in the FOT&E because the software interface with the production hardware system should be evaluated thoroughly. Otherwise, small, easy-to-fix problems might preclude launch.

• Extension of the OT&E for New Threats. Be alert to the need to extend the IOT&E/FOT&E if a new threat arises. Few missile programs perform any kind of testing relatable to evaluating system performance against current or new threats.


• “Lead-the-Fleet” Production Scheduling.Lead-the-Fleet missile scheduling and testsshould be considered.

• Test Fixes. Test fixes result from earlier op-erational testing. After the IOT&E that identi-fied problem areas in missiles, FOT&E shouldevaluate these areas primarily to determine theadequacy of the incorporated fixes, particu-larly if the IOT&E did not run long enough totest the fixes.

• FOT&E Feedback to Acceptance Testing.Ensure that FOT&E results are quickly fedback to influence early production acceptancetesting. Production acceptance testing is prob-ably the final means the government normallywill have to ensure the product meets specifi-cations. Early acceptance testing could beinfluenced favorably by a quick feedback fromFOT&E to acceptance testing. This is exem-plified by a current ASM program where pro-duction has reached peak rates, and the IOT&Ehas not been completed.

25.2.3 Command and Control (C2) Systems

25.2.3.1 Concept Assessment

• Concept Test Philosophy. The T&E planners must understand the nature of Command and Control (C2) systems early in the concept assessment. In a complex C2 system, a total systems concept must be developed initially. The total systems life cycle must be analyzed so the necessary requirements for the design can be established.

• The Importance of Software Testing. Testers should recognize that software is a pacing item in C2 systems development.

• Software Test Scheduling – Contractors' Facilities. Provision should be made for including software T&E during each phase of C2 systems' acquisition. Availability of contractors' facilities should be considered.

• Evaluation of Exploratory Development Tests. Care should be exercised in evaluating results of tests conducted during exploratory development of C2 systems. These tests, which most likely have been conducted on brassboard, breadboard, or modified existing hardware, should be evaluated with special attention.

• Feasibility Testing for Field Compilers. Early test planning should allow for simulating the computer system to test for field use of compilers, where applicable.

• Evaluation of Test Plan Scheduling. Milestones should be event-oriented, not calendar-oriented.

• Type Personnel Needs – Effects on T&E. A mix of personnel with different backgrounds affecting T&E is required.

• Planning for Joint-Service OT&E Before Program Start. A Joint-Service OT&E (multi-Service) T&E strategy should be considered for C2 systems.

25.2.3.2 Prototype Testing

• Test Prototypes. In C2 systems, prototypes must reasonably resemble final hardware configuration from a functional-use standpoint. When high technical risk is present, development should be structured around the use of one or more test prototypes designed to prove the system concept under realistic operational conditions before proceeding to engineering development.

• Test Objectives: Critical Issues. In addition to addressing critical technical issues, T&E objectives during prototype testing should address the functional issues of a C2 system.


• Real-Time Software: Demonstration of "Application Patches." Tests of real-time C2 systems should include demonstrations of interfaces whereby locally generated application patches are brought into being.

• Independent Software Test-User Group. An independent test-user software group is needed during early software qualification testing.

• System Interfaces. Critical attention should be devoted to testing interfaces with other C2 systems and to interfaces between subsystems. Particular attention should be devoted to interfaces with other C2 systems and to the interfaces between sensors (e.g., radar units), communications systems (e.g., modems), and the specific processors (e.g., Central Processing Units (CPUs)). Interface with information processing C2 systems must also address data-element and code-standardization problems if data is to be processed online.

• Human Factors. In a C2 system, human factors must be considered from the earliest prototype designs and testing provided. Testing should be conducted to determine the most efficient arrangement of equipment from the human factor standpoint; e.g., displays should be arranged for viewing from an optimum angle whenever possible; adequate maneuvering room within installation constraints should be allowed considering the number of personnel normally manning the facility; and console-mounted controls should be designed and located to facilitate operation, minimize fatigue, and avoid confusion.

• Degraded Operations Testing. When the expected operational environment of a C2 system suggests that the system may be operated under less-than-finely-tuned conditions, tests should be designed to allow for performance measurements under degraded conditions.

• Test-Bed. The use of a test-bed for study and experimentation with new C2 systems is needed early in the prototype development.

• Software-Hardware Interfaces. The software-hardware interfaces, with all operational backup modes to a new C2 system, should be tested early in the program.

• Reproducible Tests. Test plans should contain a method for allowing full-load message input while maintaining reproducible test conditions.

• Cost-Effectiveness. Field-test data are needed during the prototype development for input to cost-effectiveness analyses of C2 systems.

25.2.3.3 Engineering Development Model

• Acquisition Strategy. The acquisition strategy for the system should:
— Allow sufficient time between the planned end of demonstration testing and major procurement (as opposed to limited procurement) decisions. This provides flexibility for modifying plans, which may be required during the test phases of the program. For instance, because insufficient time was allowed for testing one recent C2 system, the program and the contract had to be modified and renegotiated;
— Be evaluated relative to constraints imposed;
— Ensure that sufficient dollars are available, not only to conduct the planned T&E but to allow for the additional T&E that is always required due to failures, design changes, etc.

• Problem Indications. It is important to establish an early detection scheme so management can determine when a program is becoming "ill."

• Impact of Software Failures. Prior to any production release, the impact of software failures on overall system performance parameters must be considered.

• Displays. The display subsystems of a C2 system should provide an essential function to the user. Displays are key subsystems of a C2 system. They provide the link that couples the operator to the rest of the system and are, therefore, often critical to its success.

• Pilot Test. A pilot test should be conducted before IOT&E so sufficient time is available for necessary changes.

• Publications and Manuals. It is imperative that all system publications and manuals be completed, reviewed, and selectively tested under operational conditions before beginning overall system suitability testing.

• Power Sources. Mobile, prime power sources are usually provided as Government-Furnished Equipment (GFE) and can be a problem area in testing C2 systems.

• Subsystem Tests. Every major subsystem of a C2 system should have a successful DT&E before beginning overall system operational testing.

• Communications. A C2 system must be tested in the appropriate electromagnetic environment to determine the performance of its communications system.

• Demonstration of Procedures. Test plans should include a procedural demonstration whereby the tested C2 system works in conjunction with other systems.

• GFE and Facilities. T&E should be concerned about the availability of GFE as specified in the proposed contract.

• User Participation in T&E. The varying needs of the user for a C2 system make participation in all phases of T&E mandatory.

25.2.3.4 Production (LRIP, Full Rate), Deployment and Operational Support

• Critical Issues. IOT&E should be designed during early planning to provide the answers to some critical issues peculiar to C2 systems. Some of these critical issues that IOT&E of C2 systems should be planned to answer are:
— Is system mission reaction time a significant improvement over present systems?
— Is a backup mode provided for use when either the airborne or ground system exhibits a failure?
— Can the system be transported as operationally required by organic transport? (Consider ground, air, and amphibious requirements.)
— Is there a special requirement for site preparation? (For example, survey and antenna location.)
— Can the system be erected and dismantled in the times specified? Are these times realistic?
— Does relocation affect system alignment?
— Does the system provide for operation during maintenance?
— Can on-site maintenance be performed on shelterless subsystems (e.g., radar units) during adverse weather conditions?

• IOT&E Reliability Data. The IOT&E can provide valuable data on the operational reliability of a C2 system; this data cannot be obtained through DT&E.

• Maintenance. In IOT&E, maintenance should include: a measurement of the adequacy of the maintenance levels and the maintenance practices; an assessment of the impact that the maintenance plan has on the operational reliability; the accessibility of the major components of the system for field maintenance (e.g., cables and connectors are installed to facilitate access); and verification that the software design for maintenance and diagnostic routines and procedures is adequate and the software can be modified to accommodate functional changes.

• Continuity of Operations. The IOT&E should provide for an impact assessment of the failure of any subsystem element of a C2 system on overall mission effectiveness.

• Imitative Deception. The IOT&E should provide for tests to assess the susceptibility of the data links of a C2 system to imitative deception.

• First Article Testing (FAT). The pre-production FAT and evaluation should be designed and conducted to: (1) confirm the adequacy of the equipment to meet specified performance requirements; (2) confirm the adequacy of the software not only to meet current user needs but to accommodate changing needs; and (3) determine failure modes and rates of the total integrated system. This activity should be followed by FOT&E.

• Test Planners and Evaluators. Use the IOT&E personnel in the FOT&E program. The planners and evaluators for the FOT&E of the production system can do a better job if they are involved initially in planning and conducting the IOT&E.

25.2.4 Ship Systems

25.2.4.1 Concept Assessment

• TEMP. Prior to program initiation, sufficient materiel should be generated to allow for evaluating the overall T&E program.

• Test Objectives and Critical Issues. In evaluating the initial test concept, it is important that the test objectives during prototype test and evaluation address the major critical issues, especially technological issues.

• Test Facilities and Instrumentation Required. The test facilities and instrumentation requirements to conduct developmental and operational tests and a tentative schedule of test activities should be identified.

• Multiple Approach To Weapon System Development. Whenever possible, the weapon system concept should not be predicated on the successful development of a single hardware or software approach in the various critical subsystems (unless it has been previously demonstrated adequately).

• Comparison of New versus Old System. The procedure for examining the relative performance of new or modified systems versus old should be indicated in the T&E plan.

• Test Support Facilities. The phasing of test support facilities must be planned carefully, with some schedule flexibility to cover late delivery and other unforeseen problems.

• Fleet Operating Force Requirements. The requirement for fleet operating forces for DT&E or OT&E should be assessed early in the program and a specific commitment made as to the types of units to be employed.

• Mission-Related MOEs. During the concept assessment of the acquisition of a new class of ship, a study effort should be commenced jointly by the Chief of Naval Operations (CNO) and the Commander, Operational Test and Evaluation Force (COMOPTEVFOR). This effort is to establish mission-related MOEs, which may be expressed in numerical fashion and may later be made the subject of OT&E to determine how closely the new ship system meets the operational need for which it was conceived.

• Ship T&E Management. The management of ship T&E should ensure that test requirements are necessary and consistent relative to systems/subsystem aspects and that the necessary testing is coordinated so that test redundancy does not become a problem.

• T&E of Large, Integrally-Constructed Systems. Major subsystems should be proven feasible before firm commitment to a detailed hull design.

25.2.4.2 Prototype Testing

• Authentication of Human Factors Concepts. T&E should authenticate the human factors concepts embodied in the proposed systems design, examining questions of safety, comfort, and appropriateness of man-machine interfaces, as well as the numbers and skill levels of the personnel required.

• Acquisition Strategy. The acquisition strategy for a ship and its subsystems should allow sufficient time between the planned end of demonstration testing and major procurement decisions of GFE for flexibility to modify plans (which may be required during the test phases of the program).

• Evaluation of Results of Exploratory Testing. Results of tests conducted during exploratory development, most likely conducted on brassboards, breadboards, or modified existing hardware, should be evaluated carefully.

• Software Testing. In view of increased dependence upon computers in ship management and tactical operation, software testing must be exceptionally thorough, and integrated software testing must begin as early as possible.

• New Hull Forms. When a new type of ship involves a radical departure from the conventional hull form, extensive prototype testing should be required prior to further commitment to the new hull form.

• Effects of Hull and Propulsion on Mission Capability. The predicted effects of the proven hull and propulsion system design on the performance of the ship's critical systems should be determined.

• Advances in Propulsion. Demonstration of the use of new propulsion systems should be conducted prior to making the decision to commit the propulsion systems to the ship in question.

• Propulsion Systems in Other Classes. When an engine to be used in the propulsion system of a new ship is already performing satisfactorily in another ship, this is not to be taken as an indication that shortcuts can be taken in propulsion system DT&E, or that no problems will be encountered.

• Waivers to T&E of Ship Systems. Waivers to T&E of pre-production models of a system in order to speed up production and delivery should be made only after considering all costs and benefits of the waiver, including those not associated with the contract.

• Environmental Effects on Sonar Domes. Environmental effects on sonar domes and their self-noise should be tested and evaluated before the domes are accepted as part of the sonar system.

• Hull/Machinery Testing by Computer Simulation. In the DT&E of ships, there will be cases where the best means to conduct evaluations of particular hull and machinery capabilities is through dynamic analysis using computer simulation, with later validation of the simulation by actual test.

25.2.4.3 Engineering Development Model

• Initial or Pilot Phase of IOT&E. Before any OTs to demonstrate operational suitability and effectiveness are conducted, an initial or pilot test should be conducted (Technical Evaluation (TECHEVAL)).

• Identify Critical Subsystems. In planning for the IOT&E of a ship system, the critical subsystems, with respect to mission performance, should be identified.

• Reliability of Critical Systems. T&E should determine the expected reliability at sea of systems critical to the ship's mobility and to the primary and major secondary tasks.

• Consistency in Test Objectives. There are various phases in testing a ship system. One should ensure the objectives of one phase are not inconsistent with the objectives of the other phases.

• Single Screw Ships. T&E of the propulsion systems of ships with a single screw should be especially rigorous to determine failure rates, maintenance, and repair alternatives.

• Problems Associated With New Hulls. Whenever a new hull is incorporated into ship design, a T&E of this hull should be conducted prior to the Full-Rate Production (FRP) and incorporation of the major weapons subsystems.

25.2.4.4 Production (LRIP, Full Rate), Deployment and Operational Support

• The OT&E of Shipboard Gun Systems. The OTs of shipboard gun systems should simulate the stress, exposure time, and other conditions of battle so that the suitability of the weapon can be evaluated in total.

• Operational Reliability. The OT&E should provide valuable data on the operational reliability of ship weapon systems that cannot be obtained through DT&E.

• Targets for Antiaircraft Warfare (AAW) OT&E. The OT of shipboard AAW weapons demands the use of targets that realistically simulate the present-day threat.

• Design of Ship FOT&E. In the testing program of a ship system, it should be recognized that, although it may be designated as a special-purpose ship, in most cases it will be used in a general-purpose role as well. This will drive post-deployment FOT&E.

• Operational Testing During Shakedown Periods. The time period for FOT&E of a ship can be used more efficiently if full advantage is taken of the periods immediately after the ship is delivered to the Navy.

• Fleet Operations in FOT&E. A great deal of information on the operational effectiveness of a ship can be obtained from standard fleet operations through well-designed information collection, processing, and analysis procedures.

• Ship Antisubmarine Warfare (ASW) FOT&E Planning. In planning FOT&E of shipboard systems, it is important to recognize the difficulty of achieving realism, perhaps more so than in other areas of naval warfare.

• Variable Depth Sonar FOT&E. The behavior of towed bodies of variable depth sonar systems and towed arrays should be tested and evaluated under all ship maneuvers and speeds likely to be encountered in combat.

• Ship Self-Noise Tests. The magnetic and acoustic signatures of a ship can be tested accurately only after the ship is completed.

• Effect of Major Electronic Countermeasures (ECM) on Ship Capability. The FOT&E of a ship should include tests of the effectiveness of the ship when subjected to major ECM.


• Ship System Survivability. FOT&E of modern ships should provide for the assessment of their ability to survive and continue to fight when subjected to battle damage.

• Interlocks. Shipboard electronic systems are designed with interlock switches that open electrical circuits for safety reasons when the equipment cabinets are opened. The FOT&E should be able to detect over-design as well as minimum design adequacy of the interlock systems.

• Intra-ship Communication. In conducting lead ship trials and evaluations, particular attention should be given to the operational impact resulting from the absence, by design, of intra-ship communications circuits and stations from important operating locations.

25.2.5 Surface Vehicle Systems

25.2.5.1 Concept Assessment

• Preparing Test Plans. It is necessary that detailed evaluation criteria be established that include all items to be tested.

• Test Plans. Prior to program initiation, a plan should be prepared for evaluating the overall T&E program. As part of this, a detailed T&E plan should be developed for those tests to be conducted before advanced engineering development to validate the concept and hardware approach to the vehicle system. The objective of the validation test plan is to fully evaluate the performance characteristics of the new concept vehicle. This test plan cannot be developed, of course, until the performance characteristics are defined.

• Performance Characteristics Range. Stated performance characteristics derived from studies should be measured early in the program. Unrealistic performance requirements can lead to false starts and costly delays.

• Operating Degradation. System performance degrades under field conditions. Anticipated degradation must be considered during T&E. When a system must operate at peak performance during DT/OT to meet the specified requirements, it will likely perform at a lesser level when operated in the field (a notional margin calculation is sketched after this list).

• Test Personnel. The test director and/or key members of the test planning group within the project office should have significant T&E experience.

• Design Reviews. T&E factors and experience must influence the system design. The application of knowledge derived from past experience can be a major asset in arriving at a sound system design.

• Surrogate Vehicles. When high technical risk is present, development should be structured around the use of one or more surrogate vehicles designed to prove the system concept under realistic operational conditions before proceeding with further development.

• Test Facilities and Scheduling. Test range and resource requirements to conduct validation tests and a tentative schedule of test activities should be identified.
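A minimal sketch of the margin reasoning implied by the "Operating Degradation" item above, assuming field performance can be approximated as the DT/OT demonstrated value reduced by a constant degradation fraction; the requirement value and degradation factor shown are hypothetical.

```python
# Illustrative sketch only; the requirement and degradation values are hypothetical.

def required_dt_ot_value(field_requirement: float, degradation_fraction: float) -> float:
    """DT/OT value needed so that the degraded field value still meets the requirement."""
    if not 0.0 <= degradation_fraction < 1.0:
        raise ValueError("degradation_fraction must be in [0, 1)")
    return field_requirement / (1.0 - degradation_fraction)

# Example: 400 mean miles between failures required in the field, with 20 percent
# degradation anticipated between DT/OT conditions and field operation.
print(required_dt_ot_value(400.0, 0.20))  # -> 500.0 miles to be demonstrated in DT/OT
```

In practice, the degradation factor would come from the program's own historical or engineering estimates rather than a fixed constant.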

25.2.5.2 Prototype Testing

• Vulnerability. The vulnerability of vehicles should be estimated on the basis of testing.

• Gun and Ammunition Performance. Gun and ammunition development should be considered a part of overall tank system development. When a new gun tube, or one which has not been mounted previously on a tank chassis, is being evaluated, all ammunition types (including missiles) planned for use in that system should be test fired under simulated operational conditions.


• Increased Complexity. The addition of new capabilities to an existing system or system type will generally increase the complexity of the system and, therefore, increase the types and amount of testing required and the time to perform these tests.

• Component Interfaces. Prior to assembly in a prototype system, component subsystems should be assembled in a mock-up and verified for physical fit, human factors considerations, interface compatibility, and electrical and mechanical compatibility.

• Determining Test Conditions. During validation, test conditions should be determined by the primary objectives of that test rather than by more general considerations of realism.

• Test Plan Development. The test plan developed by this point should be in nearly final form and include, as a minimum:
— A description of requirements,
— The facilities needed to make evaluations,
— The schedule of evaluations and facilities,
— The reporting procedure, the objective being to communicate test results in an understandable format to all program echelons,
— The T&E guidelines,
— A further refinement of the cost estimates that were initiated during the Concept Evaluation Phase.

• Prototype Tests. Prototype tests should demonstrate that success criteria meaningful in terms of operational usage are met. In designing contractually required tests, upon whose outcome large incentive payments or even program continuation may depend, it is essential to specify broader success criteria than simply hit or miss in a single given scenario.

• Reliability Testing. Reliability testing should be performed on component and subsystem assemblies before testing of the complete vehicle system. Prior to full system testing, viable component and subsystem tests should be conducted.

• Human Factors. In evaluating ground vehicles, human factors should be considered at all stages, starting with the design of the prototype.

• Test Plan Scheduling. Test plan scheduling should be tied to event milestones rather than to the calendar. In evaluating the adequacy of the scheduling as given by test plans, it is important that milestones be tied to the major events of the weapon system (meeting stated requirements) and not the calendar.

• Test Failures. The T&E schedule should be sufficiently flexible to accommodate failures and correction of identified problems.

25.2.5.3 Engineering Development Model

• Pilot and Dry-Run Tests. A scheduled series of tests should be preceded by a dry run, which verifies that the desired data will be obtained.

• Comparison Testing. The test program should include a detailed comparison of the characteristics of a new vehicle system with those of existing systems, alternate vehicle system concepts (if applicable), and those of any system(s) being replaced.

• Simulation. Simulation techniques and equipment should be utilized to enhance data collection. Creation of histograms for each test course provides a record of conditions experienced by the vehicle during testing. Use of a chassis dynamometer can provide additional driveline endurance testing with more complete instrumentation coverage.

• Environmental Testing. Ground vehicles should be tested in environmental conditions and situations comparable to those in which they will be expected to perform.


• System Vulnerability. For combat vehicles, some estimate of vulnerability to battle damage should be made.

• Design Criteria Verification. Subsystem design criteria should be compared with actual characteristics.

• Electromagnetic Testing. Vehicle testing should include electromagnetic testing.

• System Strength Testing. In evaluating ground vehicles, early testing should verify intrinsic strength. This implies operation with maximum anticipated loading, including trailed loads at maximum speeds and over worst-case grades, secondary roads, and cross-country conditions for which the vehicle was developed or procured. This test is intended to identify deficient areas of design, not to break the machinery.

• Component Compatibility. Component compatibility should be checked throughout the duration of the test sequence.

• Human Interface. Critiques of good and bad features of the vehicle should be made early in the prototype stage while adequate time remains to make any indicated changes.

• Serviceability Testing. Ground vehicles should be tested and evaluated to determine the relative ease of serviceability, particularly for high-frequency operations.

• Experienced User Critique. Ground vehicle user opinions should be obtained early in the development program.

• Troubleshooting During Tests. Provisions should be made to identify subsystem failure causes. Subsystems may exhibit failures during testing, and adequate provisions should be made to permit troubleshooting and identification of defective components and inadequate design.

25.2.5.4 Production (LRIP, Full Rate), Deployment and Operational Support

• Planning the IOT&E. The IOT&E should be planned to be cost-effective and provide meaningful results.

• Performance and Reliability Testing. The production FAT should verify the performance of the vehicle system and determine the degradation, failure modes, and failure rates.

• Lead-the-Fleet Testing. At least one production prototype or initial production model vehicle should be allocated to intensive testing to accumulate high operating time in a short period.

• User Evaluation. User-reported shortcomings should be followed up to determine problem areas requiring correction. Fixes should be evaluated during an FOT&E.


ENDNOTES

1. Office of the Secretary of Defense (OSD), Best Practices Applicable to DoD [Department of Defense] Developmental Test and Evaluation. Conducted by the Science Applications International Corporation (SAIC), June 1999.

2. Defense Science Board Report, Test and Evaluation Capabilities, December 2000.

3. GAO/NSIAD-00-199, Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcomes, July 2000.

4. Defense Science Board Study, Report of Task Force on Test and Evaluation, April 2, 1974.


APPENDICES


APPENDIX A
ACRONYMS AND THEIR MEANINGS

Aa Achieved availability

AAE Army Acquisition Executive

AAH Advanced Attack Helicopter

AAW Antiaircraft Warfare

ACAT Acquisition Category

ACE Army Corps of Engineers

ACTD Advanced Concept Technology Demonstration

ADM Acquisition Decision Memorandum

ADP Automated Data Processing

ADPE Automated Data Processing Equipment

AEC Army Evaluation Center

AF Air Force

AFCEA Armed Forces Communications and Electronics Association

AFEWES Air Force Electronic Warfare Evaluation Simulator

AFFTC Air Force Flight Test Center

AFM Air Force Manual

AFMC Air Force Materiel Command

AFOTEC Air Force Operational Test and Evaluation Center

AFR Air Force Regulation

AFSC Air Force Space Command

AF/TE Chief of Staff, Test and Evaluation (Air Force)

AFV Armored Family of Vehicles

Ai Inherent Availability

AIS Automated Information System

ALCM Air-Launched Cruise Missile

AMC Army Materiel Command

AMSDL Acquisition Management Systems and Data Requirements Control List

Ao Operational Availability


AoA Analysis of Alternatives

AP Acquisition Plan

APB Acquisition Program Baseline

AR Army Regulation

ARL-HRED Army Research Laboratory-Human Research and Engineering Division/Directorate

ASA(ALT) Assistant Secretary of the Army for Acquisition, Logistics, and Technology

ASD(NII) Assistant Secretary of Defense for Networks and Information Integration

ASM Air-to-Surface Missile

ASN(RD&A) Assistant Secretary of the Navy for Research, Development, and Acquisition

ASN/RE&S Assistant Secretary of the Navy/Research, Engineering, and Science

ASR Alternative System Review

ASW Antisubmarine Warfare

ATD Advanced Technology Demonstration (or Development)

ATE Automatic Test Equipment

ATEC Army Test and Evaluation Command

ATIRS Army Test Incident Reporting System

ATP Automated Test Plan

ATPS Automated Test Planning System

ATR Acceptance Test Report

AWACS Airborne Warning and Control System

BA Budget Activity; Budget Authority

BIT Built-in Test

BITE Built-in Test Equipment

BLRIP Beyond Low Rate Initial Production

BoD Board of Directors

BoOD Board of Operating Directors

C2 Command and Control

C3 Command, Control, and Communications


C3I Command, Control, Communications, and Intelligence

C4 Command, Control, Communications, and Computers

C4I Command, Control, Communications, Computers, and Intelligence

C4ISR Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance

CAD Computer-Aided Design

CAE Component Acquisition Executive

CAIV Cost as an Independent Variable

CAM Computer-Aided Manufacturing

CARS Consolidated Acquisition Reporting System

CDD Capability Development Document

CDR Critical Design Review

CDRL Contract Data Requirements List

CE Continuous Evaluation; Concept Exploration

CEP Circle Error Probability; Concept Experimentation Program

CERDEC Communications-Electronics Research Development and Engineering Center

CFR Code of Federal Regulations

CG MCSC Commanding General, Marine Corps Systems Command

CI Configuration Item

CII Compatibility, Interoperability, and Integration

CIO Chief Information Officer

CJCSI Chairman of the Joint Chiefs of Staff Instruction

CJCSM Chairman of the Joint Chiefs of Staff Manual

CLIN Contract Line Item Number

CNO Chief of Naval Operations

CNP Candidate Nomination Proposal

COCOM Combatant Commander

COEA Cost and Operational Effectiveness Analysis

COI Critical Operational Issue

COIC Critical Operational Issues and Criteria


COMOPTEVFOR Commander, Operational Test and Evaluation Force (Navy)

COO Concept of Operations

CPD Capability Production Document

CPS Competitive Prototyping Strategy

CPU Central Processing Unit

CR Concept Refinement

CSCI Computer Software Configuration Item

CTEIP Central Test and Evaluation Investment Program

CTO Comparative Test Office

DA Department of the Army

DAB Defense Acquisition Board

DAE Defense Acquisition Executive

DAES Defense Acquisition Executive Summary

DAG Data Authentication Group

DAS Director of the Army Staff

DAU Defense Acquisition University

DBDD Data Base Design Document

DCMA Defense Contract Management Agency

DCS Deputy Chief of Staff

DCS/R&D Deputy Chief of Staff for Research and Development

DCSOPS Deputy Chief of Staff for Operations

DDR&E Director of Defense Research and Engineering

DIA Defense Intelligence Agency

DID Data Item Description

DISA Defense Information Systems Agency

DLT Design Limit Test

DMSO Defense Modeling and Simulation Office

DNA Defense Nuclear Agency

DoD Department of Defense

DoDD Department of Defense Directive

DoDI Department of Defense Instruction


DOE Department of Energy

DOT&E Director, Operational Test and Evaluation

DOTLPF Doctrine, Organization, Training, Leadership, Personnel, and Facilities

DPRO Defense Plant Representative Office

DRR Design Readiness Review

DS Defense Systems

DSARC Defense Systems Acquisition Review Council (now the Defense Acquisition Board (DAB))

DSB Defense Science Board

DSMC Defense Systems Management College

DT Development Test

DT&E Development Test and Evaluation

DT&E/DS Development Test and Evaluation/Defense Systems

DTC Developmental Test Command (Army)

DTIC Defense Technical Information Center

DTRMC Defense Test Resource Management Center

DTSE&E Director, Test Systems Engineering and Evaluation

DTTSG Defense Test and Training Steering Group

DUSA(OR) Deputy Under Secretary of the Army (Operations Research)

DUSD(A&T) Deputy Under Secretary of Defense for Acquisition and Technology

DVAL Data Link Vulnerability Analysis

EA Evolutionary Acquisition

EC Electronic Combat

ECCM Electronic Counter-Countermeasures

ECM Electronic Countermeasures

ECP Engineering Change Proposal

ECR Engineering Change Review

EDM Engineering Development Model

EDP Event Design Plan (Army)

EDT Engineering Design Test

EFA Engineering Flight Activity


EIS Environmental Impact Statement

EMCS Electromagnetic Control Study

EMI Electromagnetic Interference

EMP Electromagnetic Pulse

EOA Early Operational Assessment

ERAM Extended Range Anti-armor Munitions

ESM Electronic Support Measures

ESOH Environmental Safety and Occupational Health

ESS Environmental Stress Screening

EW Electronic Warfare

FA/A Functional Analysis/Allocation

FACA Federal Advisory Committee Act

FAR Federal Acquisition Regulation

FAT First Article Testing

FCA Functional Configuration Audit

FCT Foreign Comparative Testing

FDTE Force Development Tests and Experimentation

FFBD Functional Flow Block Diagram

FMECA Failure Mode, Effects, and Criticality Analysis

FOC Full Operational Capability

FORSCOM Forces Command (Army)

FOT&E Follow-on Operational Test and Evaluation

FQR Formal Qualification Review

FQT Formal Qualification Test/Testing

FRP Full Rate Production

FRPDR Full Rate Production Decision Review

FWE Foreign Weapons Evaluation

FY Fiscal Year

FYDP Future Years Defense Program

FYTP Future Years Test Program; Five Year Test Program (Army)

GAO General Accounting Office (now Government Accountability Office)


GFE Government Furnished Equipment

HELSTF High Energy Laser System Test Facility

HQDA Headquarters, Department of the Army

HSI Human Systems Integration

HW Hardware

HWCI Hardware Configuration Item

HWIL Hardware-in-the-Loop

ICBM Intercontinental Ballistic Missile

ICD Initial Capabilities Document

ICE Independent Cost Estimate

IDD Interface Decision Document

IEP Independent Evaluation Plan

IER Independent Evaluation Report

IFPP Information for Proposal Preparation

ILS Integrated Logistics Support

INSCOM Intelligence and Security Command

IOA Independent Operational Assessment

IOC Initial Operating Capability

IOT&E Initial Operational Test and Evaluation

IPPD Integrated Product and Process Development

IPT Integrated Product Team

IR&D Independent Research and Development

IRA Industrial Resource Analysis

IRS Interface Requirements Specification

IT Information Technology

ITEA International Test and Evaluation Association

ITP Integrated Test Plan

ITOP International Test Operating Procedure

ITSCAP Information Technology Security Certification and Accreditation Program

ITTS Instrumentation, Targets, and Threat Simulators


IV&V Independent Verification and Validation

JCIDS Joint Capabilities Integration and Development System

JITC Joint Interoperability Test Command

JLF Joint Live Fire

JROC Joint Requirements Oversight Council

JTCG Joint Technical Coordinating Group

JT&E Joint Test and Evaluation

JTTR Joint Training and Test Range Roadmap

KPP Key Performance Parameter

Kr Contractor

LCC Life Cycle Cost

LCCE Life Cycle Cost Estimate

LFT Live Fire Testing

LFT&E Live Fire Test and Evaluation

LRIP Low Rate Initial Production

LS Logistics Support

LSA Logistics Support Analysis

LSP Logistics Support Plan

LTBT Limited Test Ban Treaty

M&S Modeling and Simulation

MAIS Major Automated Information System

MAJCOM Major Commands

MANPRINT Manpower and Personnel Integration

MAV Minimum Acceptable Value

MCOTEA Marine Corps Operational Test and Evaluation Activity

MCP Military Construction Program

MCSC Marine Corps Systems Command

MDA Milestone Decision Authority; Missile Defense Agency

MDAP Major Defense Acquisition Program

MEDCOM Medical Command (Army)

MIL-HDBK Military Handbook


MIL-SPEC Military Specification

MIL-STD Military Standard

MOA Memorandum of Agreement

MOE Measure of Effectiveness

MOP Measure of Performance

MOS Military Occupational Specialty

MOT&E Multi-Service Operational Test and Evaluation

MOU Memorandum of Understanding

MPE Military Preliminary Evaluation

MRTFB Major Range and Test Facility Base

MS Milestone

MSIAC Modeling and Simulation Information Analysis Center

MTBF Mean Time Between Failure

MTTR Mean Time To Repair

NAPMA North Atlantic Program Management Agency

NATO North Atlantic Treaty Organization

NAVAIR Naval Air Systems Command

NAVSEA Naval Sea Systems Command

NAWC Naval Air Warfare Center

NBC Nuclear, Biological, and Chemical

NBCC Nuclear, Biological, and Chemical Contamination

NDAA National Defense Authorization Act

NDCP Navy Decision Coordinating Paper

NDI Nondevelopment(al) Item

NDU National Defense University

NEPA National Environmental Policy Act

NH&S Nuclear Hardness and Survivability

NSIAD National Security and International Affairs Division

NSS National Security Systems

O&M Operations and Maintenance

O&S Operations and Support


OA Operational Assessment

OCD Operational Concept Document

OIPT Overarching Integrated Product Team

OMB Office of Management and Budget

OPEVAL Operational Evaluation

OPNAV Operational Navy

OPNAVIST Operational Navy Instruction

OPSEC Operations Security

OPTEVFOR Operational Test and Evaluation Force (Navy)

ORMAS/TE Operational Resource Management Assessment Systems for Test and Evaluation

OSD Office of the Secretary of Defense

OT Operational Test

OT&E Operational Test and Evaluation

OTA Operational Test Agency

OTC Operational Test Command (Army)

OTD Operational Test Director

OTRR Operational Test Readiness Review

OTP Outline Test Plan

P3I Preplanned Product Improvements

P&D Production and Deployment

PAT&E Production Acceptance Test and Evaluation

PCA Physical Configuration Audit

PCO Primary Contracting Officer

PDR Preliminary Design Review

PDRR Program Definition and Risk Reduction

PDUSD(A&T) Principal Deputy Under Secretary of Defense for Acquisition and Technology

PE Program Element

PEO Program Executive Officer

PEP Production Engineering and Planning

PF/DOS Production, Fielding/Deployment, and Operational Support


PI Product Improvement

Pk Probability of kill

Pk/h Probability of kill given a hit

PM Program Manager; Product Manager

PMO Program Management Office

PO Program Office; Purchase Order

POM Program Objectives Memorandum

PPBE Planning, Programming, Budgeting and Execution

PQT Production Qualification Test

PRAT Production Reliability Acceptance Test

PRESINSURV President of the Board of Inspection and Survey

PRR Production Readiness Review

PSA Principal Staff Assistant

QA Quality Assurance

QOT&E Qualification Operational Test and Evaluation

R&D Research and Development

R&E Research and Engineering

R&M Reliability and Maintainability

RAM Reliability, Availability, and Maintainability

RAS Requirements Allocation Sheet

RCC Range Commanders Council

RCS Radar Cross Section

RD&A Research, Development, and Acquisition

RDECOM Research, Development and Engineering Command (Army)

RDT Reliability Development Testing

RDT&E Research, Development, Test and Evaluation

RFP Request for Proposal

RGT Reliability Growth Test

RM Resource Manager

RQT Reliability Qualification Test

RSI Rationalization, Standardization, and Interoperability


RSM Radar Spectrum Management

S&TS Strategic and Tactical Systems

SAF/AQ Secretary of the Air Force for Acquisition

SAF/USA Director of Space Acquisition

SAR Selected Acquisition Report

SAT Ship Acceptance Test

SDD Software Design Document; System Development and Demonstration

SECARMY Secretary of the Army

SECDEF Secretary of Defense

SECNAV Secretary of the Navy

SEP Systems Engineering Process; System Engineering Plan; System Evaluation Plan (Army)

SFR System Functional Review

SI Systems Integration

SIL Software Integration Laboratory

SMDC Space and Missile Defense Command

SOCOM Special Operations Command

SOO Statement of Objectives

SOW Statement of Work

SPAWAR Space and Naval Warfare Systems Command

SPEC Specification

SPO System Program Office

SRR Systems Requirements Review

SRS Software Requirements Specification

SSR Software Specification Review

STA System Threat Assessment

STEP Simulation, Test and Evaluation Process

STP Software Test Plan

STRI Simulation, Training, and Instrumentation (Army)

SQA Software Quality Assurance

SVR System Verification Review


SW Software

SWIL Software in the Loop

T&E Test and Evaluation

TAAF Test, Analyze, and Fix

TADS Target Acquisition Designation System; Theater Air Defense System

TAFT Test, Analyze, Fix, and Test

TD Technology Development

TDS Technology Development Strategy

TDY Temporary Duty

TEAM Test, Evaluation, Analysis, and Modeling

TECHEVAL Technical Evaluation (Navy Term)

TEMA Test and Evaluation Management Agency

TEMP Test and Evaluation Master Plan

TERC Test and Evaluation Resources Committee

TFRD Test Facility Requirements Document

TIRIC Training Instrumentation Resource Investment Committee

TLS Time Line Sheet

TM Technical Manual; Test Manager

TMC Test Management Council

TMP Technical Management Plan

TPD Test Program Documentation

TPO Test Program Outline

TPM Technical Performance Measurement

TPWG Test Planning Working Group

TRADOC Training and Doctrine Command

TRIMS Test Resource Information Management System

TRP Test Resources Plan

TRR Test Readiness Review

TRS Test Requirements Sheet

TSARC Test Schedule and Review Committee

UNK Unknown


U.S. United States

USA United States Army

USAF United States Air Force

USAFE/DOQ U.S. Air Forces in Europe/Directorate of Operations-Operations

USAKA United States Army Kwajalein Atoll

USD(AT&L) Under Secretary of Defense for Acquisition, Technology, and Logistics

USMC United States Marine Corps

USN United States Navy

U.S.C. United States Code

VCJCS Vice Chairman, Joint Chiefs of Staff

VCSA Vice Chief of Staff, Army

VECP Value Engineering Change Proposal

VV&A Verification, Validation and Accreditation

WBS Work Breakdown Structure

WIPT Working-level Integrated Product Team

WSMR White Sands Missile Range


APPENDIX B
DOD GLOSSARY OF TEST TERMINOLOGY

ACCEPTANCE TRIALS — Trials and material inspection conducted underway by the trial board for ships constructed in a private shipyard to determine suitability for acceptance of a ship.

ACQUISITION — The conceptualization, initiation, design, development, test, contracting, production, deployment, Logistics Support (LS), modification, and disposal of weapons and other systems, supplies, or services (including construction) to satisfy Department of Defense (DoD) needs, intended for use in or in support of military missions.

ACQUISITION CATEGORY (ACAT) — ACAT I programs are Major Defense Acquisition Programs (MDAPs). An MDAP is defined as a program that is not highly classified and is designated by the Under Secretary of Defense (Acquisition, Technology, and Logistics) (USD(AT&L)) as an MDAP or is estimated by USD(AT&L) to require eventual total expenditure for research, development, test and evaluation of more than $365 million in Fiscal Year (FY) 2000 constant dollars, or for procurement of more than $2.190 billion in FY2000 constant dollars.

1. ACAT ID, for which the Milestone Decision Authority (MDA) is USD(AT&L). The “D” refers to the Defense Acquisition Board (DAB), which advises the USD(AT&L) at major decision points.

2. ACAT IC, for which the MDA is the DoD Component Head or, if delegated, the DoD Component Acquisition Executive (CAE). The “C” refers to Component.

The USD(AT&L) designates programs as ACAT ID or ACAT IC.

ACAT IA programs are Major Automated Information Systems (MAISs). A MAIS is any program designated by the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD(C3I)) as an MAIS or is estimated to require program costs for any single year in excess of $32 million in Fiscal Year (FY) 2000 constant dollars, total program costs in excess of $126 million in FY2000 constant dollars, or total life cycle costs in excess of $378 million in FY2000 constant dollars.

MAISs do not include highly sensitive classified programs or tactical communications systems.

An ACAT II program is a major system: a combination of elements that shall function together to produce the capabilities required to fulfill a mission need, excluding construction or other improvements to real property. It is estimated by the DoD Component head to require eventual total expenditure for Research, Development, Test, and Evaluation (RDT&E) of more than $140 million in FY2000 constant dollars, or procurement of more than $660 million in FY2000 constant dollars, or is designated as major by the DoD Component head.


ACAT III programs are defined as those acquisition programs that do not meet the criteria for an ACAT I, an ACAT IA, or an ACAT II. The MDA is designated by the CAE and shall be at the lowest appropriate level. This category includes less-than-major AISs.

ACQUISITION DECISION MEMORANDUM (ADM) — A memorandum signed by the Milestone Decision Authority (MDA) that documents decisions made as the result of a milestone decision review or in-process review.

ACQUISITION LIFE CYCLE — The life of an acquisition program consists of phases, each preceded by a milestone or other decision point, during which a system goes through research, development, test and evaluation, and production. Currently, the four phases are: (1) Concept Exploration (CE) (Phase 0); (2) Program Definition and Risk Reduction (PDRR) (Phase I); (3) Engineering and Manufacturing Development (EMD) (Phase II); and (4) Production, Fielding/Deployment, and Operational Support (PF/DOS) (Phase III).

ACQUISITION PHASE — All the tasks and activities needed to bring a program to the next major milestone occur during an acquisition phase. Phases provide a logical means of progressively translating broadly stated mission needs into well-defined system-specific requirements and ultimately into operationally effective, suitable, and survivable systems.

ACQUISITION PROGRAM BASELINE (APB) — A document that contains the most important cost, schedule, and performance parameters (both objectives and thresholds) for the program. It is approved by the Milestone Decision Authority (MDA) and signed by the Program Manager (PM) and their direct chain of supervision; e.g., for Acquisition Category (ACAT) ID programs it is signed by the PM, Program Executive Officer (PEO), Component Acquisition Executive (CAE), and Defense Acquisition Executive (DAE).

ACQUISITION STRATEGY — A business and technical management approach designed to achieve program objectives within the resource constraints imposed. It is the framework for planning, directing, contracting for, and managing a program. It provides a master schedule for research, development, test, production, fielding, modification, postproduction management, and other activities essential for program success. The acquisition strategy is the basis for formulating functional plans and strategies (e.g., Test and Evaluation Master Plan (TEMP), Acquisition Plan (AP), competition, prototyping, etc.).

ACQUISITION RISK — See Risk.

ADVANCED CONCEPT TECHNOLOGY DEMONSTRATION (ACTD) — Shall be used to determine the military utility of proven technology and to develop the concept of operations that will optimize effectiveness. ACTDs themselves are not acquisition programs, although they are designed to provide a residual, usable capability upon completion. Funding is programmed to support 2 years in the field. ACTDs are funded with 6.3a (Advanced Technology Development (or Demonstration) (ATD)) funds.


ADVANCED TECHNOLOGY DEVELOPMENT (OR DEMONSTRATION) (ATD) (Budget Activity 6.3) — Projects within the 6.3a (ATD) program that are used to demonstrate the maturity and potential of advanced technologies for enhanced military operational capability or cost effectiveness. It is intended to reduce technical risks and uncertainties at the relatively low costs of informal processes.

AGENCY COMPONENT — A major organizational subdivision of an agency. For example: the Army, Navy, Air Force, and Defense Supply Agency are agency components of the Department of Defense. The Federal Aviation, Urban Mass Transportation, and Federal Highway Administrations are agency components of the Department of Transportation.

ANALYSIS OF ALTERNATIVES (AoA) — An analysis of the estimated costs and operational effectiveness of alternative materiel systems to meet a mission need and the associated program for acquiring each alternative. Formerly known as Cost and Operational Effectiveness Analysis (COEA).

AUTOMATED INFORMATION SYSTEM (AIS) — A combination of computer hardware and software, data, or telecommunications that performs functions such as collecting, processing, transmitting, and displaying information. Excluded are computer resources, both hardware and software, that are physically part of, dedicated to, or essential in real time to the mission performance of weapon systems.

AUTOMATIC TEST EQUIPMENT (ATE) — Equipment that is designed to automatically conduct analysis of functional or static parameters and to evaluate the degree of performance degradation and perform fault isolation of unit malfunctions.

BASELINE — Defined quantity or quality used as a starting point for subsequent efforts and progress measurement that can be a technical, cost, or schedule baseline.

BRASSBOARD CONFIGURATION — An experimental device (or group of devices) used to determine feasibility and to develop technical and operational data. It will normally be a model sufficiently hardened for use outside of laboratory environments to demonstrate the technical and operational principles of immediate interest. It may resemble the end item but is not intended for use as the end item.

BREADBOARD CONFIGURATION — An experimental device (or group of devices) used to determine feasibility and to develop technical data. It will normally be configured only for laboratory use to demonstrate the technical principles of immediate interest. It may not resemble the end item and is not intended for use as the projected end item.

CAPSTONE TEST AND EVALUATION MASTER PLAN (TEMP) — A TEMP which addresses the testing and evaluation of a program consisting of a collection of individual systems (family of systems, system of systems) which function collectively. Individual system-unique content requirements are addressed in an annex to the basic Capstone TEMP.


CERTIFICATION FOR INITIAL OPERATIONAL TEST AND EVALUATION (IOT&E) — A Service process undertaken in the advanced stages of system development, normally during Low Rate Initial Production (LRIP), resulting in the announcement of a system’s readiness to undergo IOT&E. The process varies with each Service.

COMPETITIVE PROTOTYPING STRATEGY (CPS) — Prototype competition between two or more contractors in a comparative side-by-side test.

CONCEPT EXPERIMENTATION PROGRAM (CEP) — The CEP is an annual program executed by the Training and Doctrine Command (TRADOC) for commanders who have requirements determination and force development missions for specific warfighting experiments. Experiments are discrete events investigating either a material concept or a warfighting idea. The CEP procedures are provided in TRADOC Pamphlet 71-9.

CONCURRENCY — Part of an acquisition strategy that would combine or overlap life cycle phases (such as Engineering and Manufacturing Development (EMD) and production), or activities (such as development and operational testing).

CONTINGENCY TESTING — Additional testing required to support a decision to commit added resources to a program when significant test objectives have not been met during planned tests.

CONTINUOUS EVALUATION (CE) — A continuous process, extending from concept definition through deployment, that evaluates the operational effectiveness and suitability of a system by analysis of all available data.

COMBAT SYSTEM — The equipment, computer programs, people, and documentation organic to the accomplishment of the mission of an aircraft, surface ship, or submarine; it excludes the structure, material, propulsion, power and auxiliary equipment, transmissions and propulsion, fuels and control systems, and silencing inherent in the construction and operation of aircraft, surface ships, and submarines.

CONFIGURATION MANAGEMENT — The technical and administrative direction and surveillance actions taken to identify and document the functional and physical characteristics of a Configuration Item (CI), to control changes to a CI and its characteristics, and to record and report change processing and implementation status. It provides a complete audit trail of decisions and design modifications.

CONTRACT — An agreement between two or more legally competent parties, in the proper form, on a legal subject matter or purpose, and for legal consideration.

CONTRACTOR LOGISTICS SUPPORT — The performance of maintenance and/or material management functions for a Department of Defense (DoD) system by a commercial activity. Historically done on an interim basis until systems support could be transitioned to a DoD organic capability. Current policy now allows for the provision of system support by contractors on a long-term basis. Also called Long-Term Contractor Logistics Support.


COOPERATIVE PROGRAMS — Programs that comprise one or more specific cooperative projects whose arrangements are defined in a written agreement between the parties and which are conducted in the following general areas:

1. Research, Development, Test, and Evaluation (RDT&E) of defense articles (including cooperative upgrade or other modification of a U.S.-developed system), joint production (including follow-on support) of a defense article that was developed by one or more of the participants, and procurement by the United States of a foreign defense article (including software), technology (including manufacturing rights), or service (including logistics support) that are implemented under Title 22 U.S.C. §2767, Reference (c), to promote the Rationalization, Standardization, and Interoperability (RSI) of North Atlantic Treaty Organization (NATO) armed forces or to enhance the ongoing efforts of non-NATO countries to improve their conventional defense capabilities.

2. Cooperative Research and Development (R&D) programs with NATO and major non-NATO allies implemented under Title 10 U.S.C. §2350a to improve the conventional defense capabilities of NATO and enhance RSI.

3. Data, information, and personnel exchange activities conducted under approved DoD programs.

4. Test and Evaluation (T&E) of conventional defense equipment, munitions, and technologies developed by allied and friendly nations to meet valid existing U.S. military requirements.

COST AS AN INDEPENDENT VARIABLE (CAIV) — Methodologies used to acquire and operate affordable DoD systems by setting aggressive, achievable Life Cycle Cost (LCC) objectives and managing achievement of these objectives by trading off performance and schedule, as necessary. Cost objectives balance mission needs with projected out-year resources, taking into account anticipated process improvements in both DoD and industry. CAIV has brought attention to the government’s responsibilities for setting/adjusting LCC objectives and for evaluating requirements in terms of overall cost consequences.

CRITICAL ISSUES — Those aspects of a system’s capability, either operational, technical, or other, that must be questioned before a system’s overall suitability can be known. Critical issues are of primary importance to the decision authority in reaching a decision to allow the system to advance into the next phase of development.

CRITICAL OPERATIONAL ISSUE (COI) — Operational effectiveness and operational suitability issues (not parameters, objectives, or thresholds) that must be examined in Operational Test and Evaluation (OT&E) to determine the system’s capability to perform its mission. A COI is normally phrased as a question that must be answered in order to properly evaluate operational effectiveness (e.g., “Will the system detect the threat in a combat environment at adequate range to allow successful engagement?”) or operational suitability (e.g., “Will the system be safe to operate in a combat environment?”).


DATA SYSTEM — Combinations of personnel efforts, forms, formats, instructions, procedures, data elements and related data codes, communications facilities, and Automated Data Processing Equipment (ADPE) that provide an organized and interconnected means—either automated, manual, or a mixture of these—for recording, collecting, processing, and communicating data.

DEFENSE ACQUISITION EXECUTIVE (DAE) — The individual responsible for all acquisition matters within the DoD. (See DoD Directive (DoDD) 5000.1.)

DEPARTMENT OF DEFENSE ACQUISITION SYSTEM — A single uniform system whereby all equipment, facilities, and services are planned, designed, developed, acquired, maintained, and disposed of within the DoD. The system encompasses establishing and enforcing policies and practices that govern acquisitions, to include documenting mission needs and establishing performance goals and baselines; determining and prioritizing resource requirements for acquisition programs; planning and executing acquisition programs; directing and controlling the acquisition review process; developing and assessing logistics implications; contracting; monitoring the execution status of approved programs; and reporting to the Congress.

DESIGNATED ACQUISITION PROGRAM — A program designated by the Director, Operational Test and Evaluation (DOT&E) or the Deputy Director, Developmental Test and Evaluation (S&TS) for OSD oversight of test and evaluation.

DEVELOPING AGENCY (DA) — The Systems Command or Chief of Naval Operations (CNO)-designated project manager assigned responsibility for the development, test and evaluation of a weapon system, subsystem, or item of equipment.

DEVELOPMENT TEST AND EVALUATION (DT&E) — T&E conducted throughout the life cycle to identify potential operational and technological capabilities and limitations of the alternative concepts and design options being pursued; support the identification of cost-performance tradeoffs by providing analyses of the capabilities and limitations of alternatives; support the identification and description of design technical risks; assess progress toward meeting Critical Operational Issues (COIs), mitigation of acquisition technical risk, achievement of manufacturing process requirements and system maturity; assess the validity of assumptions and conclusions from the Analysis of Alternatives (AoAs); provide data and analysis in support of the decision to certify the system ready for Operational Test and Evaluation (OT&E); and, in the case of Automated Information Systems (AISs), support an information systems security certification prior to processing classified or sensitive data and ensure a standards conformance certification.

DEVELOPMENTAL TESTER — The command or agency that plans, conducts, and reports the results of developmental testing. Associated contractors may perform developmental testing on behalf of the command or agency.

EARLY OPERATIONAL ASSESSMENT (EOA) — An operational assessment conducted prior to, or in support of, prototype testing.

EFFECTIVENESS — The extent to which the goals of the system are attained, or the degree to which a system can be expected to achieve a set of specific mission requirements.


ENGINEERING CHANGE — An alteration in the physical or functional characteristics of a system or item delivered, to be delivered, or under development, after establishment of such characteristics.

ENGINEERING CHANGE PROPOSAL (ECP) — A proposal to the responsible authority recommending that a change to an original item of equipment be considered, and that the design or engineering change be incorporated into the article to modify, add to, delete, or supersede original parts.

ENGINEERING DEVELOPMENT — The RDT&E funding category that includes development programs being engineered for Service use but not yet approved for procurement or operation. Budget Category 6.4 includes those projects in full-scale development for Service use that have not yet received approval for production or had production funds included in the DoD budget submission for the budget or subsequent fiscal year.

EVALUATION CRITERIA — Standards by which achievement of required technical and operational effectiveness/suitability characteristics, or resolution of technical or operational issues, may be evaluated. Evaluation criteria should include quantitative thresholds for the Initial Operating Capability (IOC) system. If parameter maturity grows beyond IOC, intermediate evaluation criteria, appropriately time-lined, must also be provided.

FIRST ARTICLE — First article includes pre-production models, initial production samples, test samples, first lots, pilot models, and pilot lots; approval involves testing and evaluating the first article for conformance with specified contract requirements before or in the initial stage of production under a contract.

FIRST ARTICLE TESTING (FAT) — Production testing that is planned, conducted, and monitored by the materiel developer. FAT includes pre-production and initial production testing conducted to ensure that the contractor can furnish a product that meets the established technical criteria.

FOLLOW-ON OPERATIONAL TEST AND EVALUATION (FOT&E) — That test and evaluation that may be necessary after system deployment to refine the estimates made during operational test and evaluation, to evaluate changes, and to re-evaluate the system to ensure that it continues to meet operational needs and retains its effectiveness in a new environment or against a new threat.

FOLLOW-ON PRODUCTION TEST — A technical test conducted subsequent to a full production decision on initial production and mass production models to determine production conformance for quality assurance purposes. Program funding category: Procurement.

FOREIGN COMPARATIVE TESTING (FCT) — A DoD T&E program that is prescribed in Title 10 U.S.C. §2350a(g) and is centrally managed by the Director, Foreign Comparative Test (Strategic and Tactical Systems (S&TS)). It provides funding for U.S. T&E of selected equipment items and technologies developed by allied countries when such items and technologies are identified as having good potential to satisfy valid DoD requirements.


FUTURE-YEAR DEFENSE PROGRAM (FYDP) — (Formerly the Five Year Defense Program.) The official DoD document that summarizes forces and resources associated with programs approved by the Secretary of Defense (SECDEF). Its three parts are the organizations affected, appropriations accounts (Research, Development, Test, and Evaluation (RDT&E), Operations and Maintenance (O&M), etc.), and the 11 major force programs (strategic forces, airlift/sealift, Research and Development (R&D), etc.). R&D is Program 06. Under the current Planning, Programming, Budgeting and Execution (PPBE) cycle, the FYDP is updated when the Services submit their Program Objective Memorandums (POMs) to the Office of the Secretary of Defense (OSD) (May/June), when the Services submit their budgets to OSD (September), and when the President submits the national budget to the Congress (February). The primary data element in the FYDP is the Program Element (PE).

HARMONIZATION — Refers to the process, or results, of adjusting differences or inconsistencies in the qualitative basic military requirements of the United States, its allies, and other friendly countries. It implies that significant features will be brought into line so as to make possible substantial gains in terms of the overall objectives of cooperation (e.g., enhanced utilization of resources, standardization, and compatibility of equipment). It implies especially that comparatively minor differences in “requirements” should not be permitted to serve as a basis for the support of slightly different duplicative programs and projects.

HUMAN SYSTEMS INTEGRATION (HSI) — A disciplined, unified, and interactive approach to integrate human considerations into system design to improve total system performance and reduce costs of ownership. The major categories of human considerations are manpower, personnel, training, human factors engineering, safety, and health.

INDEPENDENT EVALUATION REPORT (IER) — A report that provides an assessment of item or system operational effectiveness and operational suitability versus critical issues, as well as the adequacy of testing to that point in the development of the item or system.

INDEPENDENT OPERATIONAL TEST AGENCY — The Army Test and Evaluation Command (ATEC), the Navy Operational Test and Evaluation Force (OPTEVFOR), the Air Force Operational Test and Evaluation Center (AFOTEC), the Marine Corps Operational Test and Evaluation Activity (MCOTEA), and, for the Defense Information Systems Agency (DISA), the Joint Interoperability Test Command (JITC).

INDEPENDENT VERIFICATION AND VALIDATION (IV&V) — An independent review of the software product for functional effectiveness and technical sufficiency.

INITIAL OPERATIONAL CAPABILITY (IOC) — The first attainment of the capability to employ effectively a weapon, item of equipment, or system of approved specific characteristics with the appropriate number, type, and mix of trained and equipped personnel necessary to operate, maintain, and support the system. It is normally defined in the Operational Requirements Document (ORD).


INITIAL OPERATIONAL TEST AND EVALUATION (IOT&E) — Operational test and evaluation conducted on production or production representative articles to determine whether systems are operationally effective and suitable for intended use by representative users to support the decision to proceed beyond Low Rate Initial Production (LRIP). For the Navy, see Operational Evaluation (OPEVAL).

IN-PROCESS REVIEW — Review of a project or program at critical points to evaluate status and make recommendations to the decision authority.

INSPECTION — Visual examination of the item (hardware and software) and associated descriptive documentation, which compares appropriate characteristics with predetermined standards to determine conformance to requirements without the use of special laboratory equipment or procedures.

INTEGRATED PRODUCT AND PROCESS DEVELOPMENT (IPPD) — A management technique that simultaneously integrates all essential acquisition activities through the use of multidisciplinary teams to optimize the design, manufacturing, and supportability processes. IPPD facilitates meeting cost and performance objectives from product concept through production, including field support. One of the key IPPD tenets is multidisciplinary teamwork through Integrated Product Teams (IPTs).

INTEGRATED PRODUCT TEAM (IPT) — Team composed of representatives from all appropriate functional disciplines working together to build successful programs, identify and resolve issues, and make sound and timely recommendations to facilitate decisionmaking. There are three types of IPTs: Overarching IPTs (OIPTs) focus on strategic guidance, program assessment, and issue resolution; Working-level IPTs (WIPTs) identify and resolve program issues, determine program status, and seek opportunities for acquisition reform; and program-level IPTs focus on program execution and may include representatives from both government and, after contract award, industry.

INTEROPERABILITY — The ability of systems, units, or forces to provide services to or accept services from other systems, units, or forces and to use the services so exchanged to operate effectively together. The conditions achieved among communications-electronics systems or items of communications-electronics equipment when information or services can be exchanged directly and satisfactorily between them and/or their users. Designated an element of the Net-Ready Key Performance Parameter (KPP) by the Joint Chiefs of Staff (JCS).

ISSUES — Any aspect of the system’s capability, either operational, technical, or other, that must be questioned before the system’s overall military utility can be known. Operational issues are issues that must be evaluated considering the soldier and the machine as an entity to estimate the operational effectiveness and operational suitability of the system in its complete user environment.

KEY PERFORMANCE PARAMETERS (KPPs) — Those minimum attributes or characteristics considered most essential for an effective military capability. The Capability Development Document (CDD) and the Capability Production Document (CPD) KPPs are included verbatim in the Acquisition Program Baseline (APB).


LIVE FIRE TEST AND EVALUATION (LFT&E) — A test process, defined in Title 10 U.S.C. §2366, that must be conducted on a covered system, major munition program, missile program, or Product Improvement (PI) to a covered system, major munition program, or missile program before it can proceed beyond Low Rate Initial Production (LRIP). A covered system is a major system (Acquisition Category (ACAT) I, II) that is: 1) user-occupied and designed to provide some degree of protection to its occupants in combat; or 2) a conventional munitions program or missile program; or 3) a conventional munitions program for which more than 1,000,000 rounds are planned to be acquired; or 4) a modification to a covered system that is likely to affect significantly the survivability or lethality of such a system.

LIVE FIRE TEST AND EVALUATION (LFT&E) REPORT — Report prepared by the Director, Operational Test and Evaluation (DOT&E) on survivability and lethality testing. Submitted to the Congress for covered systems prior to the decision to proceed beyond Low Rate Initial Production (LRIP).

LETHALITY — The probability that weapon effects will destroy the target or render it neutral.

LIFE CYCLE COST (LCC) — The total cost to the government for the development, acquisition, operation, and logistics support of a system or set of forces over a defined life span.

LOGISTICS SUPPORTABILITY — The degree of ease to which system design characteristics and planned logistics resources (including the Logistics Support (LS) elements) allow for the meeting of system availability and wartime usage requirements.

LONG LEAD ITEMS — Those components of a system or piece of equipment that take the longest time to procure and, therefore, may require an early commitment of funds in order to meet acquisition program schedules.

LOT ACCEPTANCE — A test based on a sampling procedure to ensure that the product retains its quality. No acceptance or installation should be permitted until this test for the lot has been successfully completed.

LOW RATE INITIAL PRODUCTION (LRIP) — The minimum number of systems (other than ships and satellites) to provide production representative articles for Operational Test and Evaluation (OT&E), to establish an initial production base, and to permit an orderly increase in the production rate sufficient to lead to Full Rate Production (FRP) upon successful completion of operational testing. For Major Defense Acquisition Programs (MDAPs), LRIP quantities in excess of 10 percent of the acquisition objective must be reported in the Selected Acquisition Report (SAR). For ships and satellites, LRIP is the minimum quantity and rate that preserves mobilization.
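As a simple illustration of the 10 percent reporting threshold (the quantities here are hypothetical, not drawn from this guide): for an acquisition objective of 1,000 units,

\[
0.10 \times 1{,}000\ \text{units} = 100\ \text{units},
\]

so a planned LRIP quantity of, say, 150 units would exceed the threshold and would have to be reported in the SAR.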

MAINTAINABILITY — The ability of an item to be retained in, or restored to, a specified condition when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance and repair. (See Mean Time To Repair (MTTR).)

MAJOR DEFENSE ACQUISITION PROGRAM (MDAP) — See “Acquisition Category.”


MAJOR SYSTEM (DoD) — See “Acquisition Category.”

MAJOR RANGE AND TEST FACILITY BASE (MRTFB) — The complex of major DoD ranges and test facilities managed according to DoD 3200.11 by the Director, Test Resource Management Center, reporting to the USD(AT&L).

MEAN TIME BETWEEN FAILURE (MTBF) — For a particular interval, the total functional life of a population of an item divided by the total number of failures within the population. The definition holds for time, rounds, miles, events, or other measures of life unit. A basic technical measure of reliability.

MEAN TIME TO REPAIR (MTTR) — The total elapsed time (clock hours) for corrective maintenance divided by the total number of corrective maintenance actions during a given period of time. A basic technical measure of maintainability.
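As a worked illustration of the two preceding definitions (the figures are hypothetical and used only to show the arithmetic):

\[
\text{MTBF} = \frac{\text{total functional life}}{\text{total failures}} = \frac{1{,}200\ \text{operating hours}}{4\ \text{failures}} = 300\ \text{hours}
\]

\[
\text{MTTR} = \frac{\text{total corrective maintenance time}}{\text{total corrective maintenance actions}} = \frac{20\ \text{clock hours}}{4\ \text{actions}} = 5\ \text{hours}
\]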

MEASURE OF EFFECTIVENESS (MOE) — A measure of operational performance success that must be closely related to the objective of the mission or operation being evaluated; for example, kills per shot, probability of kill, effective range, etc. Linkage shall exist among the various MOEs used in the Analysis of Alternatives (AoAs), Operational Requirements Document (ORD), and Test and Evaluation (T&E); in particular, the MOEs, Measures of Performance (MOPs), and criteria in the ORD, the AoAs, the Test and Evaluation Master Plan (TEMP), and the Acquisition Program Baseline (APB) shall be consistent. A meaningful MOE must be quantifiable and a measure of the degree to which the real objective is achieved.

MEASURE OF PERFORMANCE (MOP) — Measure of a lower level of performance representing subsets of Measures of Effectiveness (MOEs). Examples are speed, payload, range, time on station, frequency, or other distinctly quantifiable performance features.

MILESTONE — The point when a recommendation is made and approval is sought regarding starting or continuing (proceeding to the next phase of) an acquisition program.

MILESTONE DECISION AUTHORITY (MDA) — The individual designated, in accordance with criteria established by the Under Secretary of Defense (Acquisition, Technology, and Logistics) (USD(AT&L)), or by the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) (ASD(C3I)) for Automated Information System (AIS) acquisition programs (DoD 5000.2-R (Reference C)), to approve entry of an acquisition program into the next acquisition phase.

MILITARY OPERATIONAL REQUIREMENT — The formal expression of a military need, responses to which result in development or acquisition of item(s), equipment, or systems.

MISSION RELIABILITY — The probability that a system will perform mission essential functions for a given period of time under conditions stated in the mission profile.

MODEL — A model is a representation of an actual or conceptual system that involves mathematics, logical expressions, or computer simulations that can be used to predict how the system might perform or survive under various conditions or in a range of hostile environments.


MULTI-SERVICE TEST AND EVALUATION — T&E conducted by two or more DoD Components for systems to be acquired by more than one DoD Component (Joint acquisition program), or for a DoD Component’s systems that have interfaces with equipment of another DoD Component. May be developmental testing or operational testing (MOT&E).

NON-DEVELOPMENTAL ITEM (NDI) — An NDI is any previously developed item of supply used exclusively for government purposes by a federal agency, a state or local government, or a foreign government with which the United States has a mutual defense cooperation agreement; or any item described above that requires only minor modifications or modifications of the type customarily available in the commercial marketplace in order to meet the requirements of the procuring department or agency.

NONMAJOR DEFENSE ACQUISITION PROGRAM — A program other than a Major Defense Acquisition Program (MDAP) (Acquisition Category (ACAT) I) or a highly sensitive classified program; i.e., ACAT II and ACAT III programs.

NUCLEAR HARDNESS — A quantitative description of the resistance of a system or component to malfunction (temporary and permanent) and/or degraded performance induced by a nuclear weapon environment. Measured by resistance to physical quantities such as overpressure, peak velocities, energy absorbed, and electrical stress. Hardness is achieved through adhering to appropriate design specifications and is verified by one or more test and analysis techniques.

OBJECTIVE — The performance value that is desired by the user and which the Program Manager (PM) is attempting to obtain. The objective value represents an operationally meaningful, time-critical, and cost-effective increment above the performance threshold for each program parameter.

OPEN SYSTEMS — Acquisition of Weapons Systems — An integrated technical and business strategy that defines key interfaces for a system (or a piece of equipment under development) in accordance with those adopted by formal consensus bodies (recognized industry standards bodies) as specifications and standards, or commonly accepted (de facto) standards (both company proprietary and non-proprietary) if they facilitate utilization of multiple suppliers.

OPERATIONAL ASSESSMENT (OA) — An evaluation of operational effectiveness and operational suitability made by an independent operational test activity, with user support as required, on other than production systems. The focus of an OA is on significant trends noted in development efforts, programmatic voids, areas of risk, adequacy of requirements, and the ability of the program to support adequate Operational Testing (OT). An OA may be made at any time using technology demonstrators, prototypes, mock-ups, Engineering Development Models (EDMs), or simulations but will not substitute for the independent Operational Test and Evaluation (OT&E) necessary to support full production decisions.

OPERATIONAL AVAILABILITY (Ao) — The degree (expressed in terms of 1.0 or 100 percent as the highest) to which one can expect equipment or weapon systems to work properly when required. The equation is uptime over uptime plus downtime, expressed as Ao. It is the quantitative link between readiness objectives and supportability.
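
Written as a formula, this is simply a restatement of the definition above (the symbols are illustrative):

```latex
% Operational availability: uptime divided by uptime plus downtime.
A_o = \frac{\text{Uptime}}{\text{Uptime} + \text{Downtime}}
```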

OPERATIONAL EFFECTIVENESS — The overall degree of mission accomplishment of a system when used by representative personnel in the environment planned or expected (e.g., natural, electronic, threat) for operational employment of the system considering organization, doctrine, tactics, survivability, vulnerability, and threat (including countermeasures; initial nuclear weapons effects; and Nuclear, Biological, and Chemical Contamination (NBCC) threats).

OPERATIONAL EVALUATION — Addresses the effectiveness and suitability of the weapons, equipment, or munitions for use in combat by typical military users and the system operational issues and criteria; provides information to estimate organizational structure, personnel requirements, doctrine, training, and tactics; identifies any operational deficiencies and the need for any modifications; and assesses MANPRINT (safety, health hazards, human factors, manpower, and personnel) aspects of the system in a realistic operational environment.

OPERATIONAL REQUIREMENTS — User- or user representative-generated validated needs developed to address mission area deficiencies, evolving threats, emerging technologies, or weapon system cost improvements. Operational requirements form the foundation for weapon system unique specifications and contract requirements.

OPERATIONAL REQUIREMENTS DOCUMENT (ORD) — Documents the user’s objectives and minimum acceptable requirements for operational performance of a proposed concept or system. Format is contained in the Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 3170.01B, Requirements Generation System (RGS), April 15, 2001.

OPERATIONAL SUITABILITY — The degree to which a system can be placed satisfactorily in field use with consideration being given to availability, compatibility, transportability, interoperability, reliability, wartime usage rates, maintainability, safety, human factors, manpower supportability, logistics supportability, natural environmental effects and impacts, documentation, and training requirements.

OPERATIONAL TEST AND EVALUATION (OT&E) — The field test, under realistic conditions, of any item (or key component) of weapons, equipment, or munitions for the purpose of determining the effectiveness and suitability of the weapons, equipment, or munitions for use in combat by typical military users; and the evaluation of the results of such tests. (Title 10 U.S.C. 2399)

OPERATIONAL TEST CRITERIA — Expressions of the operational level of performance required of the military system to demonstrate operational effectiveness for given functions during each operational test. The expression consists of the function addressed, the basis for comparison, the performance required, and the confidence level.

OPERATIONAL TEST READINESS REVIEW (OTRR) — A review to identify problems that may impact the conduct of an Operational Test and Evaluation (OT&E). The OTRRs are conducted to determine changes required in planning, resources, or testing necessary to proceed with the OT&E. Participants include the operational tester (chair), evaluator, materiel developer, user representative, logisticians, Headquarters, Department of the Army (HQDA) staff elements, and others as necessary.

PARAMETER — A determining factor or characteristic. Usually related to performance in developing a system.

PERFORMANCE — Those operational and support characteristics of the system that allow it to effectively and efficiently perform its assigned mission over time. The support characteristics of the system include both supportability aspects of the design and the support elements necessary for system operation.

PILOT PRODUCTION — Production line normally established during the first production phase to test new manufacturing methods and procedures. Normally funded by Research, Development, Test, and Evaluation (RDT&E) until the line is proven.

POSTPRODUCTION TESTING — Testing conducted to assure that materiel that is reworked, repaired, renovated, rebuilt, or overhauled after initial issue and deployment conforms to specified quality, reliability, safety, and operational performance standards. Included in postproduction tests are surveillance tests, stockpile reliability, and reconditioning tests.

PREPLANNED PRODUCT IMPROVEMENT (P3I) — Planned future evolutionary improvement of developmental systems for which design considerations are effected during development to enhance future application of projected technology. Includes improvements planned for ongoing systems that go beyond the current performance envelope to achieve a needed operational capability.

PREPRODUCTION PROTOTYPE — An article in final form employing standard parts, representative of articles to be produced subsequently in a production line.

PREPRODUCTION QUALIFICATION TEST — The formal contractual tests that ensure design integrity over the specified operational and environmental range. These tests usually use prototype or pre-production hardware fabricated to the proposed production design specifications and drawings. Such tests include contractual Reliability and Maintainability (R&M) demonstration tests required prior to production release.

PROBABILITY OF KILL (Pk) — The lethality of a weapon system. Generally refers to armaments (e.g., missiles, ordnance, etc.). Usually the statistical probability that the weapon will detonate close enough to the target with enough effectiveness to disable the target.

PRODUCT IMPROVEMENT (PI) — Effort to incorporate a configuration change involving engineering and testing effort on end items and depot repairable components, or changes on other than developmental items, to increase system or combat effectiveness or extend useful military life. Usually results from feedback from the users.

PRODUCTION ACCEPTANCE TEST AND EVALUATION (PAT&E) — T&E of production items to demonstrate that items procured fulfill requirements and specifications of the procuring contract or agreements.

PRODUCTION ARTICLE — The end item under initial or Full Rate Production (FRP).

PRODUCTION PROVE OUT — A technical test conducted prior to production testing with prototype hardware to determine the most appropriate design alternative. This testing may also provide data on safety, the achievability of critical system technical characteristics, refinement and ruggedization of hardware configurations, and determination of technical risks.

PRODUCTION QUALIFICATION TEST (PQT) — A technical test completed prior to the Full Rate Production (FRP) decision to ensure the effectiveness of the manufacturing process, equipment, and procedures. This testing also serves the purpose of providing data for the independent evaluation required for materiel release so that the evaluator can address the adequacy of the materiel with respect to the stated requirements. These tests are conducted on a number of samples taken at random from the first production lot, and are repeated if the process or design is changed significantly, and when a second or alternative source is brought online.

PROGRAM MANAGER (PM) — A military or civilian official who is responsible for managing, through Integrated Product Teams (IPTs), an acquisition program.

PROTOTYPE — An original or model on which a later system/item is formed or based. Early prototypes may be built during early design stages and tested prior to advancing to advanced engineering. Selected prototyping may evolve into an Engineering Development Model (EDM), as required to identify and resolve specific design and manufacturing risks in that phase or in support of Preplanned Product Improvement (P3I) or Evolutionary Acquisition (EA).

QUALIFICATIONS TESTING — Simulates defined operational environmental conditions with a predetermined safety factor, the results indicating whether a given design can perform its function within the simulated operational environment of a system.

QUALITY ASSURANCE (QA) — A planned and systematic pattern of all actions necessary to provide confidence that adequate technical requirements are established, that products and services conform to established technical requirements, and that satisfactory performance is achieved.

REALISTIC TEST ENVIRONMENT — The conditions under which the system is expected to be operated and maintained, including the natural weather and climatic conditions, terrain effects, battlefield disturbances, and enemy threat conditions.

RELIABILITY — The ability of a system and its parts to perform its mission without failure, degradation, or demand on the support system. (See Mean Time Between Failure (MTBF).)

REQUIRED OPERATIONAL CHARACTERISTICS — System parameters that are primary indicators of the system’s capability to be employed to perform the required mission functions, and to be supported.

REQUIRED TECHNICAL CHARACTERISTICS — System parameters selected as primary indicators of achievement of engineering goals. These need not be direct measures of, but should always relate to, the system’s capability to perform the required mission functions and to be supported.

RESEARCH — 1. Systematic inquiry into a subject in order to discover or revise facts, theories, etc.; to investigate. 2. A means of developing new technology for potential use in defense systems.

RISK — A measure of the inability to achieve program objectives within defined cost and schedule constraints. Risk is associated with all aspects of the program, e.g., threat, technology, design processes, Work Breakdown Structure (WBS) elements, etc. It has two components:

1. The probability of failing to achieve a particular outcome; and
2. The consequences of failing to achieve that outcome.
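
One common convention, shown here only as an illustration and not prescribed by this guide, combines the two components into a single risk-exposure figure of merit:

```latex
% Illustrative convention only: risk exposure as the product of the probability
% of failing to achieve an outcome and the consequence of that failure.
\text{Risk exposure} \;=\; P(\text{failure to achieve outcome}) \times C(\text{consequence of that failure})
```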

RISK ASSESSMENT — The process of identifying program risks within risk areas and critical technical processes, analyzing them for their consequences and probabilities of occurrence, and prioritizing them for handling.

RISK MONITORING — A process that systematically tracks and evaluates the performance of risk items against established metrics throughout the acquisition process and develops further risk reduction handling options as appropriate.

SAFETY — The application of engineering and management principles, criteria, and techniques to optimize safety within the constraints of operational effectiveness, time, and cost throughout all phases of the system life cycle.

SAFETY/HEALTH VERIFICATION — The development of data used to evaluate the safety and health features of a system to determine its acceptability. This is done primarily during Developmental Test (DT) and user or Operational Test (OT) and evaluation and supplemented by analysis and independent evaluations.

SAFETY RELEASE — A formal document issued to a user test organization before any hands-on use or maintenance by personnel. The safety release indicates the system is safe for use and maintenance by typical user personnel and describes the system safety analyses. Operational limits and precautions are included. The test agency uses the data to integrate safety into test controls and procedures and to determine if the test objectives can be met within these limits.

SELECTED ACQUISITION REPORT (SAR) — Standard, comprehensive, summary status reports on Major Defense Acquisition Programs (MDAPs) (Acquisition Category (ACAT) I) required for periodic submission to the Congress. They include key cost, schedule, and technical information.

SIMULATION — A simulation is a method for implementing a model. It is the process of conducting experiments with a model for the purpose of understanding the behavior of the system modeled under selected conditions or of evaluating various strategies for the operation of the system within the limits imposed by developmental or operational criteria. Simulation may include the use of analog or digital devices, laboratory models, or “test bed” sites. Simulations are usually programmed for solution on a computer; however, in the broadest sense, military exercises and war games are also simulations.

SIMULATOR — A generic term used to describe equipment used to represent weapon systems in development testing, operational testing, and training; e.g., a threat simulator has one or more characteristics which, when detected by human senses or man-made sensors, provide the appearance of an actual threat weapon system with a prescribed degree of fidelity.

SPECIFICATION — A document used in development and procurement that describes the technical requirements for items, materials, and services, including the procedures by which it will be determined that the requirements have been met. Specifications may be unique to a specific program (program-peculiar) or they may be common to several applications (general in nature).

SUBTEST — An element of a test program. A subtest is a test conducted for a specific purpose (e.g., rain, dust, transportability, missile firing, fording).

SURVIVABILITY — Survivability is the capability of a system and its crew to avoid or withstand a man-made hostile environment without suffering an abortive impairment of its ability to accomplish its designated mission.

SUSCEPTIBILITY — The degree to which a device, equipment, or weapon system is open to effective attack due to one or more inherent weaknesses. Susceptibility is a function of operational tactics, countermeasures, probability of enemy fielding a threat, etc. Susceptibility is considered a subset of survivability.

SYSTEM — 1. The organization of hardware, software, material, facilities, personnel, data, and services needed to perform a designated function with specified results, such as the gathering of specified data, its processing, and delivery to users. 2. A combination of two or more interrelated equipment (sets) arranged in a functional package to perform an operational function or to satisfy a requirement.

SYSTEM ENGINEERING, DEFENSE — A comprehensive, iterative technical management process that includes translating operational requirements into configured systems, integrating the technical inputs of the entire design team, managing interfaces, characterizing and managing technical risk, transitioning technology from the technology base into program-specific efforts, and verifying that designs meet operational needs. It is a life cycle activity that demands a concurrent approach to both product and process development.

SYSTEMS ENGINEERING PROCESS (SEP) — A logical sequence of activities and decisions transforming an operational need into a description of system performance parameters and a preferred system configuration.

SYSTEM SPECIFICATION — States all necessary functional requirements of a system in terms of technical performance and mission requirements, including test provisions to assure that all requirements are achieved. Essential physical constraints are included. System specifications state the technical and mission requirements of the system as an entity.

SYSTEM THREAT ASSESSMENT (STA) — Describes the threat to be countered and the projected threat environment. The threat information must be validated by the Defense Intelligence Agency (DIA) for programs reviewed by the Defense Acquisition Board (DAB).

TECHNICAL EVALUATION (TECHEVAL) — The study, investigations, or Test and Evaluation (T&E) by a developing agency to determine the technical suitability of materiel, equipment, or a system for use in the military Services. (See Development Test and Evaluation (DT&E), Navy.)

TECHNICAL FEASIBILITY TEST — A technical feasibility test is a developmental test typically conducted during Concept Refinement (CR) and Technology Development (TD) to provide data to assist in determining safety and health hazards, and establishing system performance specifications and feasibility.

TECHNICAL PERFORMANCE MEASUREMENT (TPM) — Describes all the activities undertaken by the government to obtain design status beyond those treating schedule and cost. TPM is the product design assessment, which estimates through tests the values of essential performance parameters of the current design of Work Breakdown Structure (WBS) product elements. It forecasts the values to be achieved through the planned technical program effort, measures differences between achieved values and those allocated to the product element by the System Engineering Process (SEP), and determines the impact of those differences on system effectiveness.
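
In illustrative notation (the symbols are introduced here for clarity and are not from this guide), the quantity tracked for each parameter is the difference between the value demonstrated or estimated for the current design and the value allocated to that WBS product element:

```latex
% Illustrative TPM variance for one parameter:
% V_achieved is the demonstrated or estimated value; V_allocated is the
% value allocated to the WBS product element by systems engineering.
\Delta_{\text{TPM}} = V_{\text{achieved}} - V_{\text{allocated}}
```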

TECHNICAL TESTER — The command or agency that plans, conducts, and reports the results of Army technical testing. Associated contractors may perform development testing on behalf of the command or agency.

TEST — Any program or procedure that is designed to obtain, verify, or provide data for the evaluation of Research and Development (R&D) (other than laboratory experiments); progress in accomplishing development objectives; or performance and operational capability of systems, subsystems, components, and equipment items.

TEST AND EVALUATION (T&E) — Process by which a system or components provide information regarding risk and risk mitigation and empirical data to validate models and simulations. T&E permits an assessment of the attainment of technical performance, specifications, and system maturity to determine whether systems are operationally effective, suitable, and survivable for intended use. There are two general types of T&E—Developmental (DT&E) and Operational (OT&E). (See Operational Test and Evaluation (OT&E), Initial Operational Test and Evaluation (IOT&E), and Developmental Test and Evaluation (DT&E).)

TEST AND EVALUATION MASTER PLAN (TEMP) — Documents the overall structure and objectives of the Test and Evaluation (T&E) program. It provides a framework within which to generate detailed T&E plans, and it documents schedule and resource implications associated with the T&E program. The TEMP identifies the necessary Developmental Test and Evaluation (DT&E), Operational Test and Evaluation (OT&E), and Live Fire Test and Evaluation (LFT&E) activities. It relates program schedule, test management strategy and structure, and required resources to: Critical Operational Issues (COIs); critical technical parameters; objectives and thresholds documented in the Operational Requirements Document (ORD); evaluation criteria; and milestone decision points. For multi-Service or joint programs, a single integrated TEMP is required. Component-unique content requirements, particularly evaluation criteria associated with COIs, can be addressed in a component-prepared annex to the basic TEMP. (See Capstone TEMP.)

TEST BED — A system representation consisting of actual hardware and/or software and computer models or prototype hardware and/or software.

TEST CRITERIA — Standards by which test results and outcome are judged.

TEST DESIGN PLAN — A statement of the conditions under which the test is to be conducted, the data required from the test, and the data handling required to relate the data results to the test conditions.

TEST INSTRUMENTATION — Test instrumentation is scientific, Automated Data Processing Equipment (ADPE), or technical equipment used to measure, sense, record, transmit, process, or display data during tests, evaluations, or examination of materiel, training concepts, or tactical doctrine. Audio-visual equipment is included as instrumentation when used to support Army testing.

TEST RESOURCES — A collective term that encompasses all elements necessary to plan, conduct, and collect/analyze data from a test event or program. Elements include test funding and support manpower (including Temporary Duty (TDY) costs), test assets (or units under test), test asset support equipment, technical data, simulation models, test beds, threat simulators, surrogates and replicas, special instrumentation peculiar to a given test asset or test event, targets, tracking and data acquisition, instrumentation, equipment for data reduction, communications, meteorology, utilities, photography, calibration, security, recovery, maintenance and repair, frequency management and control, and base/facility support services.

THREAT — The sum of the potential strengths, capabilities, and strategic objectives of any adversary that can limit or negate U.S. mission accomplishment or reduce force, system, or equipment effectiveness.

THRESHOLDS — The Minimum Acceptable Value (MAV) which, in the user’s judgment, is necessary to satisfy the need. If threshold values are not achieved, program performance is seriously degraded, the program may be too costly, or the program may no longer be viable.

TRANSPORTABILITY — The capability of materiel to be moved by towing, self-propulsion, or carrier through any means, such as railways, highways, waterways, pipelines, oceans, and airways. (Full consideration of available and projected transportation assets, mobility plans, and schedules, and the impact of system equipment and support items on the strategic mobility of operating military forces is required to achieve this capability.)

UNKNOWN-UNKNOWNS (UNK(s)) — A future situation that is impossible to plan for, predict, or even know what to look for.

USER — An operational command or agency that receives or will receive benefit from the acquired system. Combatant Commanders (COCOMs) and the Services are the users. There may be more than one user for a system. The Services are seen as users for systems required to organize, equip, and train forces for the COCOMs of the unified command.

USER FRIENDLY — Primarily a term used in Automated Data Processing (ADP), it connotes a machine (hardware) or program (software) that is compatible with a person’s ability to operate each successfully and easily.

USER REPRESENTATIVES — A command or agency that has been formally designated by proper authority to represent single or multiple users in the requirements and acquisition process. The Services and the Service components of the Combatant Commanders (COCOMs) are normally the user representatives. There should be only one user representative for a system.

VALIDATION — 1. The process by which the contractor (or as otherwise directed by the DoD component procuring activity) tests a publication/Technical Manual (TM) for technical accuracy and adequacy. 2. The procedure of comparing input and output against an edited file and evaluating the result of the comparison by means of a decision table established as a standard.

VARIANCE (Statistical) — A measure of the degree of spread among a set of values; a measure of the tendency of individual values to vary from the mean value. It is computed by subtracting the mean value from each value, squaring each of these differences, summing these results, and dividing this sum by the number of values in order to obtain the arithmetic mean of these squares.
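
Written out, this restates the computation above; note that it divides by the number of values n (the population form of the variance):

```latex
% Population variance of the values x_1, ..., x_n with mean \bar{x}:
\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2
```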

VULNERABILITY — The characteristics of a system that cause it to suffer a definite degradation (loss or reduction of capability to perform the designated mission) as a result of having been subjected to a certain (defined) level of effects in an unnatural (manmade) hostile environment. Vulnerability is considered a subset of survivability.

WORK BREAKDOWN STRUCTURE (WBS) — An organized method to break down a project into logical subdivisions or subprojects at lower and lower levels of detail. It is very useful in organizing a project.

WORKING-LEVEL INTEGRATED PRODUCT TEAM (WIPT) — Team of representatives from all appropriate functional disciplines working together to build successful and balanced programs, identify and resolve issues, and make sound and timely decisions. WIPTs may include members from both government and industry, including program contractors and sub-contractors. A committee, which includes non-government representatives to provide an industry view, would be an advisory committee covered by the Federal Advisory Committee Act (FACA) and must follow the procedures of that Act.

APPENDIX C
TEST-RELATED DATA ITEM DESCRIPTIONS

extracted from DoD 5010.12-L, Acquisition Management Systems and Data Requirements Control List (AMSDL)

Acceptance Test Plan DI-QCIC-80154, 80553
Airborne Sound Measurements Test Report DI-HFAC-80272
Airframe Rigidity Test Report DI-T-30734
Ammunition Test Expenditure Report DI-MISC-80060
Armor Material Test Reports DI-MISC-80073
Ballistic Acceptance Test Report DI-MISC-80246
C.P. Propeller Test Agenda UDI-T-23737
Coordinated Test Plan DI-MGMT-80937
Corrosion Testing Reports DI-MFFP-80108
Damage Tolerance Test Results Reports DI-T-30725
Demonstration Test
    Plan DI-QCIC-80775
    Report DI-QCIC-80774
Directed Energy Survivability Test Plan DI-R-1786
Durability Test Results Report DI-T-30726
Electromagnetic Compatibility Test Plan DI-T-3704B
Electromagnetic Interference Test
    Plan DI-EMCS-80201
    Report DI-EMCS-80200
Electrostatic Discharge Sensitivity Test Report DI-RELI-80670
Emission Control (EMCON) Test Report DI-R-2059
Endurance Test (EMCS) Failure Reports DI-ATTS-80366
Engineer Design Test Plan DI-MGMT-80688
Environmental Design Test Plan DI-ENVR-80861
Environmental Test Report DI-ENVR-80863
Equipment Test Plan (Nonsystem) DI-T-3709A
Factory Test
    Plan DI-QCIC-80153
    EMCS Plan DI-ATTS-80360
    EMCS Procedures DI-ATTS-80361
    EMCS Reports DI-ATTS-80362
First Article Qualification Test Plan DI-T-5315A
Flight Flutter Test Report DI-T-30733
Flutter Model Test Report DI-T-30732
Hardware Diagnostic Test System Development Plan DI-ATTS-80005
High-Impact Shock Test Procedures DI-ENVR-80709
Hull Test Results (Boats) Report UDI-T-23718

Human Engineering Test
    Plan DI-HFAC-80743
    Report DI-HFAC-80744
Inspection and Test Plan DI-QCIC-81110
Installation Test
    Plan DI-QCIC-80155
    Procedures DI-QCIC-80511
    Report DI-QCIC-80140, 80512
Integrated Circuit Test Documentation DI-ATTS-80888
Maintainability/Testability Demonstration Test
    Plan DI-MNTY-80831
    Report DI-MNTY-80832
Maintenance Training Equipment Test Outlines DI-H-6129A
Master Test Plan/Program Test Plan DI-T-30714
NBC Contamination Survivability Test Plan DI-R-1779
Nuclear Survivability Test
    Plan DI-NUOR-80928
    Report DI-NUOR-80929
Packaging Test
    Plan DI-PACK-80456
    Report DI-PACK-80457
Part, Component, or Subsystem Test Plan(s) DI-MISC-80759
Parts (Non-standard) Test Data Report DI-MISC-81058
Parts Qualification Test Plan DI-T-5477A
Performance Oriented Packaging Test Report DI-PACK-81059
Production Test
    Plan DI-MNTY-80173
    Report DI-NDTI-80492
Quality Conformance Test Procedures DI-RELI-80322
Radar Spectrum Management (RSM) Test Plan DI-MISC-81113
Randomizer Test Report DI-NDTI-80884
Reliability Test
    Plan DI-RELI-80250
    Procedures DI-RELI-80251
    Reports DI-RELI-80252
Research and Development Test and Acceptance Plan DI-T-30744
Rough Handling Test Report DI-T-5144C
Ship Acceptance Test (SAT)
    Schedule DI-T-23959B
    Report DI-T-23190A
Shipboard Industrial Test Procedures DI-QCIC-80206
Shock Test
    Extension Request DI-ENVR-80706
    Report DI-ENVR-80708
Software General Unit Test Plan DI-MCCR-80307

Software Test
    Description DI-MCCR-80015A
    Plan DI-MCCR-80014A
    Procedures DI-MCCR-80310
    Report DI-MCCR-80017A, 80311
Software System
    Development Test and Evaluation Plan DI-MCCR-80309
    Integration and Test Plan DI-MCCR-80308
Sound Test Failure Notification and Recommendation DI-HFAC-80271
Special Test Equipment Plan DI-T-30702
Spectrum Signature Test Plan DI-R-2068
Static Test
    Plan DI-T-21463A
    Reports DI-T-21464A
Structure-borne Vibration Acceleration Measurement Test DI-HFAC-80274
Superimposed Load Test Report DI-T-5463A
Tempest Test
    Request DI-EMCS-80218
    Plan DI-T-1912A
Test Change Proposal DI-T-26391B
Test Elements List DI-QCIC-80204
Test Facility Requirements Document (TFRD) DI-FACR-80810
Test Package DI-ILSS-81085
Test
    Plan DI-NDTI-80566
    Plans/Procedures DI-NDTI-80808
    Procedure DI-NDTI-80603
    Procedures UDI-T-23732B
Test Plan Documentation for AIS DI-IPSC-80697
Test Program
    Documentation (TPD) DI-ATTS-80284
    Integration Logbook DI-ATTS-80281
TPS and OTPS Acceptance Test
    Procedures (ATPS) DI-ATTS-80282A
    Report (ATR) DI-ATTS-80283A
Test Reports DI-NDTI-80809A, DI-MISC-80653
Test Requirements Document DI-T-2181, DI-ATTS-80002, 80041
Test Scheduling Report DI-MISC-80761
Testability
    Program Plan DI-T-7198
    Analysis Report DI-T-7199

Trainer Test Procedures and Results Report DI-T-25594C
Vibration and Noise Test Reports DI-T-30735
Vibration Testing
    Extension UDI-T-23752
    Report UDI-T-23762
Welding Procedure Qualification Test Report DI-MISC-80876

APPENDIX D
BIBLIOGRAPHY

Acquisition Desk Book, http://akss.dau.mil/DAG.

AFLC Pamphlet 800-34, ACQ Logistics Management.

AFM [Air Force Manual] 55-43, Management of Operational Test and Evaluation.

AFM 99-113, Space Systems Test and Evaluation Process, May 1, 1996.

AFOTEC Regulation 23-1, Organization and Functions Air Force Operational Test and Evaluation Center (AFOTEC).

AFOTEC, Electronic Warfare Operational Test and Evaluation.

AFOTEC, Nuclear Survivability Handbook for Air Force OT&E.

AFOTEC 99-103, Capabilities Test and Evaluation, October 22, 2001.

AFR [Air Force Regulation] 23-36, Air Force Operational Test and Evaluation Center (AFOTEC).

AFR 55-30, Operations Security.

AFR 55-43, Management of Operational Test and Evaluation, June 1979.

AFR 800-2, Acquisition Program Management, July 21, 1994.

AFR 800-8, Integrated Logistics Support (ILS) Program.

AFR 80-14, Test and Evaluation. (Cancelled)

AFR 80-38, Management of the Air Force Systems Survivability Program, May 27, 1997.

AFSCR [Air Force Systems Command Regulation] 550-13, Producibility.

AFSCR 550-15, Statistical Process Control.

AMC [Army Materiel Command] Plan 70-2, AMC-TRADOC Materiel Acquisition Handbook.

Anderson, J.E., “Towards the Goal of Improving MIL-STD Testing,” Defense Systems Management Journal, Vol. 12, No. 2, pp. 30-34, April 1976.

AR [Army Regulation] 10-11, U.S. Army Materiel Development and Readiness Command.

AR 10-4, U.S. Army Operational Test and Evaluation Agency. (Cancelled)

AR 10-41, U.S. Army Training and Doctrine Command.

AR 10-42, U.S. Army Forces Command.

AR 700-127, Integrated Logistics Support, November 10, 1999.

AR 70-24, Special Procedures Pertaining to Nuclear Weapons System Development.

AR 70-60, Nuclear Survivability of Army Materiel.

AR 70-71, Nuclear, Biological, and Chemical Contamination Survivability of Army Materiel.

AR 71-1, Systems Acquisition Policy and Procedures (replaced by AR 70-1), December 31, 2003.

AR 71-3, Force Development User Testing.

AR 73-1, Test and Evaluation Policy, January 7, 2002.

ATEC PAM [Army Test and Evaluation Command Pamphlet] 310-4, Index of Test Operations Procedures and International Test Operations Procedures, December 31, 1988.

BDM Corporation, Application of Simulations to the OT&E Process, May 1974.

BDM Corporation, Functional Description of the Acquisition Test and Evaluation Process, July 8, 1983.

Bolino, John V., Software T&E in the Department of Defense.

CJCSI [Chairman of the Joint Chiefs of Staff Instruction] 3170.01D, Joint Capabilities Integration and Development System (JCIDS), March 12, 2004.

CJCSM [Chairman of the Joint Chiefs of Staff Manual] 3170.01A, Operation of the Joint Capabilities Integration and Development System, March 12, 2004.

COMOPTEVFOR [Commander, Operational Test and Evaluation Force], AD Number AD124168, Operational Test Director Guide.

COMOPTEVFOR, Project Analysis Guide.

COMOPTEVFORINST 3960.1D, Operational Test Director’s Guide.

DA PAM [Department of the Army Pamphlet] 700-50, Integrated Logistics Support: Developmental Supportability Test and Evaluation Guide. (Cancelled)

DA PAM 73-1, Test and Evaluation in Support of Systems Acquisition, May 30, 2003.

Data Link Vulnerability Analysis Joint Test Force, DVAL Methodology – Methodology Overview/Executive Summary, November 1984.

Defense Acquisition University Press, The Glossary of Defense Acquisition Acronyms and Terms, Vol. 11, September 2003.

Defense Science Board Task Force Report, Solving the Risk Equation in Transitioning from Development to Production, May 25, 1983 (later published as DoD Manual 4245.7).

Defense Science Board, Report of Task Force on Test and Evaluation, April 1, 1974.

Defense Science Board Report, Test and Evaluation Capabilities, December 2000.

Defense Systems Management College, Acquisition Logistics Guide, December 1997.

Defense Systems Management College, Joint Logistics Commanders Guidance for the Use of an Evolutionary Acquisition (EA) Strategy in Acquiring Command and Control (C2) Systems, March 1987.

Defense Systems Management College, “Risk Management as a Means of Direction and Control,” Program Manager’s Notebook, Fact Sheet Number 4.5, June 1992.

Defense Systems Management College, Defense Manufacturing Management Course.

Defense Systems Management College, Integrated Logistics Support Guide, First Edition, May 1986.

Defense Systems Management College, Joint Logistics Commanders Guide for the Management of Joint Service Programs, 1987.

Defense Systems Management College, Joint Logistics Commanders Guide for the Management of Multinational Programs, 1987.

Defense Systems Management College, Systems Engineering Fundamentals Guide, January 2001.

Defense Systems Management College, Systems Engineering Management Guide, Second Edition, 1990.

Director, Defense Test and Evaluation, Memorandum of Response of Admiral Isham Linder, Director, Test and Evaluation, to Supplemental Questions posed by the Senate Governmental Affairs Committee, August 2, 1983.

DoD 3200.11-D, Major Range and Test Facility Base Summary of Capabilities, June 1, 1983.

DoD 3235.1-H, Test and Evaluation of System Reliability, Availability, and Maintainability: A Primer, March 1982.

DoD 4245.7-M, Transition from Development to Production, September 1985.

DoD 5010.12-L, Acquisition Management Systems and Data Requirements Control List (AMSDL), April 2001.

DoD 5000.2-R, Mandatory Procedures for Major Defense Acquisition Programs (MDAPs) and Major Automated Information System (MAIS) Acquisition Programs, June 2001.

DoD 5000.3-M-1, Test and Evaluation Master Plan (TEMP) Guidelines, October 1986. (Cancelled)

DoD 5000.3-M-2, Foreign Weapons Evaluation Program and NATO Comparative Test Program Procedures Manual, January 1994.

DoD 5000.3-M-3, Software Test and Evaluation Manual. (Cancelled)

DoDD [Department of Defense Directive] 3200.11, Major Range and Test Facility Base (MRTFB), May 1, 2002.

DoDD 4245.6, Defense Production Management. (Cancelled)

DoDD 4245.7, Transition from Development to Production, with Change 1, September 1985. (Cancelled)

DoDD 5000.1, The Defense Acquisition System, May 12, 2003.

DoDD 5000.3, Test and Evaluation, December 26, 1979. (Cancelled)

DoDD 5000.37, Acquisition and Distribution of Commercial Products (ADCOP), September 29, 1978. (Cancelled)

DoDD 5000.59, DoD Modeling and Simulation Management, January 20, 1998.

DoDD 5000.38, Production Readiness Reviews. (Cancelled)

DoDD 5000.39, Acquisition and Management of Integrated Logistics Support for Systems and Equipment, November 17, 1983. (Cancelled)

DoDD 5010.8, Value Engineering Program, 1968.

DoDD 5010.41, Joint Test and Evaluation (JT&E) Program, February 23, 1998.

DoDD 5105.31, Defense Nuclear Agency, June 14, 1985. (Cancelled)

DoDD 5134.1, Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)), April 21, 2000.

DoDD 5141.2, Director of Operational Test and Evaluation (DOT&E), May 25, 2000.

DoDD 5160.5, Responsibilities for Research, Development, and Acquisition of Chemical Weapons and Chemical and Biological Defense, May 1, 1985.

DoDI [Department of Defense Instruction] 4245.4, Acquisition of Nuclear-Survivable Systems, July 25, 1988. (Cancelled)

DoDI 5000.2, Operation of the Defense Acquisition System, May 12, 2003.

DoDI 5000.61, DoD Modeling and Simulation (M&S) Verification, Validation and Accreditation (VV&A), May 13, 2003.

DoDI 5030.55, Joint AEC-DoD Nuclear Weapons Development Procedures, January 4, 1974.

DoDI 7000.10, Contractor Cost Performance, Funds Status, and Cost/Schedule Reports. (Cancelled)

DoD-STD [Department of Defense Standard]-2167, Defense System Software Development, January 1, 1987.

DoD-STD-2168, Defense System Software Quality Program, May 21, 1996.

DoD FMR 7000.14-R, Vol. 02B, Budget Formulation and Presentation, Chapter 5, “Research, Development, Test and Evaluation Appropriations,” June 2004.

FY88 Report of the Secretary of Defense to the Congress, 1988.

GAO/NSIAD-00-199, Best Practices: A More Constructive Test Approach is Key to Better Weapon System Outcome, July 2000.

GAO/NSIAD-87-57, Operational Test and Evaluation Can Contribute More to Decision-Making, February 4, 1987.

GAO/PEMD-86-12BR, Bigeye Bomb: An Evaluation of DoD’s Chemical and Developmental Tests, June 10, 1986.

GAO/PEMD-87-17, Live Fire Testing: Evaluating DoD’s Programs, June 12, 1997.

GAO/PSAD-78-102, Operational Testing of Air Force Systems Requires Several Improvements.

GAO/PSAD-79-86, Review of Military Services Development Test and Evaluation of Six Weapon Systems Totaling an Estimated $12 Billion in Development and Procurement Costs, June 25, 1979.

Georgia Institute of Technology, The Software Test and Evaluation Project.

Hearing before the Senate Committee on Governmental Affairs, June 23, 1983.

Hoivik, Thomas H., The Navy Test and Evaluation Process in Major Systems Acquisition, DTIC Accession No. AD A035935, November 1976.

International Test and Evaluation Association (ITEA), Test and Evaluation of Command and Control Systems-Developed Evolutionary Acquisition, May 1987.

JCHEM Joint Test Force, Concept and Approach for a Joint Test and Evaluation of U.S. Chemical Warfare-Retaliatory Capabilities, AD #B088340, February 1984.

Johnson, Larry H., Contractor Testing and the Army Test and Evaluation Master Plan for Full Engineering Development, Defense Systems Management College, 1983.

Joint Test Director’s Handbook (Draft), military handbook.

Kausal, Tony, Comparison of the Defense Acquisition Systems of France, Great Britain, Germany, and the United States, Defense Acquisition University: Ft. Belvoir, VA, September 1999.

Kausal, Tony, and Stefan Markowski, Comparison of the Defense Acquisition Systems of Australia, Japan, South Korea, Singapore, and the United States, Defense Acquisition University: Ft. Belvoir, VA, July 2000.

Mann, Greg A., LTC, The Role of Simulation in Operational Test and Evaluation, Report AU-ARI-83-10, Air University Press, Maxwell Air Force Base, AL: August 1983.

MCO [Marine Corps Order] 3960.2, Marine Corps Operational Test and Evaluation Activity (MCOTEA); establishment of, with changes, October 14, 1994.

MCO 5000.11B, Testing and Evaluation of Systems and Equipment for the Marine Corps.

Memorandum of Agreement of Multi-Service OT&E, with changes, August 2003.

Memorandum of Understanding Among the Governments of the French Republic, the Federal Republic of Germany, the United Kingdom of Great Britain and Northern Ireland, and the United States of America Relating to the Mutual Acceptance of Test and Evaluation for the Reciprocal Procurement of Defense Equipment.

MIL-HDBK [Military Handbook]-881, Work Breakdown Structure, January 2, 1998.

MIL-STD [Military Standard]-1388-1A, Logistics Support Analysis, April 11, 1983.

MIL-STD-1512A, Technical Reviews and Audits for Systems, Equipment, and Computer Programs, September 30, 1997.

MIL-STD-1528, Manufacturing Master Plan.

MIL-STD-480, Configuration Control – Engineering Changes, Deviations, and Waivers, July 15, 1988.

MIL-STD-483 (USAF), Configuration Management Practices for Systems, Equipment, Munitions and Computer Software.

MIL-STD-490, Specification Practices, June 4, 1985, modified March 1997.

MIL-STD-499A (USAF), Engineering Management Practices, May 1, 1974.

MIL-STD-961D, DoD Standard Practice for Defense Specifications, March 22, 1995.

MITRE Corporation, Software Reporting Metrics, November 1985.

NAVMATINST [Naval Material Command Instruction] 3960, Test and Evaluation.

NAVSO-P-2457, RDT&E/Acquisition Management Guide, 10th Edition.

O’Bryon, James F., Assistant Deputy Under Secretary of Defense (Live Fire Test). Statement before the Acquisition Policy Panel of the House Armed Services Committee, September 10, 1987.

Office of the Deputy Director, Test and Evaluation (Live Fire Test), Live Fire Test and Evaluation Planning Guide, June 1989.

Office of the Deputy Under Secretary for Research and Engineering (Test and Evaluation), Joint Test and Evaluation Procedures Manual (Draft), October 1986.

Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), Director, Strategic & Tactical Systems, The Foreign Comparative Testing (FCT) Handbook, March 2001, http://www.acq.osd.mil/cto.

Office of the Under Secretary of Defense for Research and Engineering (Director, Defense Test and Evaluation), Test and Evaluation in the United States Department of Defense, August 1983.

OPNAVINST [Office of the Chief of Naval Operations Instruction] 3070.1A, Operations Security.

OPNAVINST 3960.10C, Test and Evaluation (Draft). (Cancelled)

OPNAVINST 5000.49, ILS in the Acquisition Process.

OPNAVINST 5440.47F, Mission and Functions of Operational Test and Evaluation Force (OPTEVFOR).

Public Law (P.L.) 92-156, National Defense Authorization Act for Fiscal Year 1972.

Public Law 98-94, National Defense Authorization Act for Fiscal Year 1984.

Public Law 99-661, National Defense Authorization Act for Fiscal Year 1987.

Range Commanders Council (RCC), Universal Documentation System Handbook, Document 501-84, Vol. 3, August 1989.

Rhoads, D.I., Systems Engineering Guide for Management of Software Acquisition, Defense Systems Management College, 1983, pp. IV-1 to IV-23.

Rome Air Development Center, Standard Procedures for Air Force Operational Test and Evaluation.

SECNAVINST [Secretary of the Navy Instruction] 5000.1B, Systems Acquisition.

SECNAVINST 5000.39, USN Acquisition and Management of ILS Systems and Equipment.

SECNAVINST 5000.42C, RDT&E Acquisition Procedures.

SECNAVINST 5430.67A, Assignment of Responsibilities for Research, Development, Test and Evaluation.

Stevens, Roger T., Operational Test and Evaluation Process: A Systems Engineering Process, John Wiley & Sons, New York: 1979, pp. 49-56.

Thomas, E.F., Reliability Testing Pitfalls, Reliability and Maintainability Symposium, Vol. 7, No. 2, 1974, pp. 78-85.

USAOTEA Operational Test and Evaluation Guide, Special Supplement, Continuous Comprehensive Evaluation Planning for Nuclear Hardness and Survivability Test and Evaluation (Draft). (Cancelled)

Watt, Charles K., The Role of Test Beds in C3I, AFCEA 35th Annual Convention, June 15-17, 1982.

Willoughby, Willis J., “Reliability by Design, Not Chance,” Defense Systems Management Journal, Vol. 12, No. 2, pp. 12-18.

APPENDIX E
INDEX

Page
Acquisition Decision Memorandum .......... 5-4
Acquisition Process .......... 2-1, 2-2
Air Force Acquisition Executive .......... 3-9
Air Force OT&E Organization .......... 3-10
Air Force DT&E Organization .......... 3-9
Aircraft Systems T&E .......... 25-2
Army Acquisition Executive .......... 3-5
Army T&E Organization .......... 3-5
Baseline Documentation .......... 5-4
Chemical Weapons Testing .......... 24-1
Combined and Concurrent Testing .......... 9-1
Combining DT and OT .......... 9-1
Command and Control System T&E .......... 25-10
Commercial Items T&E .......... 23-1
Concept Exploration Phase .......... 2-2
Concurrent Testing .......... 9-3
Configuration Change Control .......... 8-7
Congressional Policy .......... 3-1
Contractor Test Management .......... 4-4, 4-6, 4-7, 4-8, 7-5
Contribution at Major Milestones .......... 1-3, 12-1
Cost Associated with T&E .......... 4-5, 15-2
Critical Operational Test Issues .......... 11-4, 13-4
Data Sources .......... 4-2, 12-4, 13-7
Design Evaluation/Verification Testing .......... 4-6, 7-7
Design for Test .......... 7-9, 7-10
Design Limit Testing .......... 7-6
Design Reviews .......... 2-10, 4-5, 8-1
Development T&E .......... 6-1, 9-1
Development Testing .......... 7-1
    Contractor .......... 7-5
    Government .......... 7-6
    Limited Procurement Programs .......... 7-9
    Responsibilities .......... 7-5
    Support of Technical Reviews/Milestone Decisions .......... 8-1
    Test Program Integration .......... 4-4, 7-6
Director, Operational Test and Evaluation .......... 3-4
DoD Test Process .......... 2-9
DT&E and the Review Process .......... 8-1
Early Operational Assessments .......... 1-5, 6-4, 11-2, 12-1, 12-3
EC/C4ISR Testing .......... 20-1
End of Test Phase Report .......... 5-11

Evaluation Agencies .......... 3-4, 4-6, 11-2–11-4, 13-9
Evaluation .......... 13-1
    Criteria .......... 13-4
    Development/Operational Test and Evaluation .......... 13-7
    Differences between T&E .......... 13-1
    Issues .......... 13-4
    Plan .......... 5-7, 13-2
    Planning .......... 13-6
    Process .......... 13-1
    Operational .......... 13-9
    Technical .......... 13-9
Evolutionary Acquisition .......... 16-2, 20-6
First Article Test .......... 10-2
Follow-on Operational T&E .......... 1-7, 6-7, 11-3, 12-6
Foreign Comparative Testing Program .......... 3-2, 3-4, 22-1
Formal Reviews .......... 2-9, 2-10, 8-1
    Systems Requirements Review .......... 8-4
    Systems Functional Review .......... 8-4
    Software Specification Review .......... 8-4
    Preliminary Design Review .......... 8-4
    Critical Design Review .......... 8-6
    Test Readiness Review .......... 8-6
    Formal Qualification Review .......... 8-6
    Production Readiness Review .......... 8-7
Functional Configuration Audit .......... 8-6
Hazardous Weapons Testing .......... 24-1
Human Factors Evaluation .......... 11-2, 19-4
Independent Verification and Validation .......... 17-8
Initial OT&E .......... 1-7, 2-2, 6-4, 6-7, 11-3, 12-5
Integrated Test Plan .......... 7-6, 16-8, 20-1
Logistics Support Planning .......... 19-1
    Objectives .......... 19-1
Logistics Support T&E .......... 19-2
    Data Collection and Analyses .......... 13-6, 19-7
    Limitations .......... 19-7
    Use of Results .......... 19-7
    Support Package .......... 19-7
Integrated Test Approach .......... 20-1
Introduction to Test & Evaluation Process .......... 2-1
Issues, Critical .......... 11-4, 13-4
Joint T&E .......... 6-8
Laser Weapons, Testing .......... 24-3
Life Cycle T&E .......... 2-1, 2-4, 7-1, 12-1
Life Testing .......... 7-7
Live Fire Test Guidelines .......... 6-8, 6-9, 18-1, 18-3


Logistics T&E ................................................................................................................................ 19-1
   Planning ......................................................................................................................... 19-2

Beyond Low Rate Initial Production Report ........................................................ 3-2, 3-5, 5-12, 10-3
Low Rate Initial Production ........................................................................................................... 10-3
Major Range and Test Facility Base ...................................................................................... 3-4, 15-2
Marine Corps Acquisition Executive ............................................................................................. 3-11
Marine Corps DT&E Organization ................................................................................................ 3-11
Marine Corps Operational Test & Evaluation Agency .................................................................. 3-11
Missile System T&E ...................................................................................................................... 25-5
Major Milestone Reviews ........................................................................................................ 1-3, 5-3
Matrix Analysis Techniques ........................................................................................................... 13-7
NATO Comparative Testing Program ............................................................................................ 22-2
Navy DT&E Organizations .............................................................................................................. 3-7
Navy Operational Test & Evaluation Force ..................................................................................... 3-9
NDI Testing .................................................................................................................................... 23-1
Nondevelopment Items Definition ................................................................................................. 23-1
Nuclear Hardness T&E .................................................................................................................. 6-10
Nuclear, Biological & Chemical Weapons T&E .................................................................... 6-9, 24-1
Objectives and Thresholds .............................................................................................. 2-7, 5-3, 13-5
Operational T&E ............................................................................................................. 6-3, 7-3, 11-1
OPEVAL ......................................................................................................................................... 11-3
OT&E to Support Milestone Decisions ......................................................................................... 12-1
OT&E Contractor Responsibilities ........................................................................................... 4-7, 4-8
OSD Oversight .................................................................................................................................. 3-2
Physical Configuration Audit ........................................................................................................... 8-6
Policy, The Congress ........................................................................................................................ 3-1

   OSD .................................................................................................................................. 3-2
   Army ................................................................................................................................. 3-5
   Navy .................................................................................................................................. 3-7
   Air Force ........................................................................................................................... 3-9
   Marine Corps ................................................................................................................... 3-11

Planning Guidelines for Logistics T&E ......................................................................................... 19-1
Post Production T&E ............................................................................................. 1-7, 7-4, 10-3, 12-6
Production Qualification Tests ............................................................................................... 6-1, 10-2
Product Baseline and T&E ............................................................................................... 2-9, 5-5, 8-5
Production Acceptance Test & Evaluation ............................................................................. 7-4, 10-3
Production Readiness Reviews ............................................................................................ 10-2, 10-5
Program Definition and Risk Reduction Phase ...................................................... 1-5, 2-2, 7-1, 12-3
Program Office Responsibility for T&E .......................................................................................... 4-1
Qualification Operational T&E ..................................................................................................... 11-3
Qualification Testing ...................................................................................................................... 10-2
Quick-Look Report ................................................................................................................ 5-9, 11-6
Realism during Testing .............................................................................................................. 6-4, 1-5
Reliability, Availability & Maintainability Assessments ....................................................... 7-8, 19-5
Reliability Development Testing ............................................................................................ 7-8, 19-6


Reports ................................................................................ 3-1, 5-9, 5-11, 11-6, 13-9, 18-6
Requirements Documentation ......................................................................................................... 5-1
Resources .............................................................................................................................. 4-11, 15-1
Risk Management ............................................................................................................. 1-1, 1-8, 2-7
Schedules Formulation ........................................................................................................... 4-3, 16-4
Security and T&E ........................................................................................................................... 24-5
Service Resource ............................................................................................................................ 15-1

   Planning .......................................................................................................................... 15-4
Service Test Facilities .............................................................................................................. 6-1, 15-4

   Army ............................................................................................................................... 15-4
   Navy ................................................................................................................................ 15-5
   Air Force ......................................................................................................................... 15-7

Ship System T&E ......................................................................................................................... 25-13
Simulation, Test, and Evaluation Process (STEP) ................................................................. 6-1, 14-7
Software Development Process ..................................................................................................... 17-4
Software Life Cycle T&E .............................................................................................................. 17-4
Space Systems Testing ........................................................................................................... 7-9, 24-3
Specifications ........................................................................................................................... 4-6, 8-4
Surface Vehicle T&E .................................................................................................................... 25-16
System Life Cycle ............................................................................................................................ 2-1
Systems Engineering Process .......................................................................................................... 2-6
Technical Management Planning ............................................................................................. 2-7, 5-1
Technical Reviews and Audits ................................................................................................. 4-5, 8-1
Test Concepts, IOT&E ........................................................................................................... 3-5, 11-5
Test and Evaluation Master Plan .......................................................................... 3-4, 5-7, 15-1, 16-1
Test Design Plan ............................................................................................................................... 5-7
Test Funding .................................................................................................................... 3-5, 4-5, 15-8
Technical Performance Measurement .............................................................................................. 2-7
Test Plan ........................................................................................................................................... 5-9
Test Program Integration ............................................................................................... 7-6, 16-8, 20-1
Test Reports .................................................................................. 3-1, 5-9, 11-6, 13-9, 18-6
Testing with Limitations ................................................................................................................ 24-1
Thresholds ....................................................................................................................... 2-7, 5-3, 13-5
Transition to Production ................................................................................................................. 10-3

   Transition Planning ........................................................................................................ 10-3
   Testing during Transition ............................................................................................... 10-3

Test Resource Planning .................................................................................................................. 15-1
Testing for Nuclear Hardness and Survivability ................................................................... 6-10, 18-6
TEMP Requirements ...................................................................................................................... 16-1
Upgrades, Enhancements, Modification T&E ............................................... 1-7, 2-4, 4-10, 7-4, 12-6
Validity of Modeling and Simulation ............................................................................................ 14-4
Warranties, Impact on T&E .............................................................................................................. 7-9


APPENDIX F
POINTS OF CONTACT

SERVICE TEST AND EVALUATION COURSES

ARMY

Commander, U.S. Army Developmental Test Command
ATTN: CSTE-DTC

Aberdeen Proving Ground, MD 21005-5055

Commander, U.S. Army Logistics Management Group
ATTN: AMXMC-ACM-MA

Fort Lee, VA 23801-6048

Commander, U.S. Army Test Command
ATTN: CSTE-TC
4501 Ford Avenue

Alexandria, VA 22302

NAVY

Commander, Space & Naval Warfare Systems Command
ATTN: Management Operations

National Center, Building 1
Washington, DC 20361

Commander, Operational Test & Evaluation Force
ATTN: Dest 02B

Norfolk, VA 23511-6388

AIR FORCE

Commander, Air Force Operational Test & Evaluation Center
ATTN: Asst. Director, Training

Building 20130
Kirtland Air Force Base, NM 87117-7001

Commander, Air Force Institute of Technology
ATTN: Student Operations

Wright-Patterson Air Force Base, OH 45433


OSD

Defense Test & Evaluation Professional Institute
Naval Air Warfare Center, Weapons Division

Code 02PI
Point Mugu, CA 93042


CUSTOMER FEEDBACK

Dear Valued Customer:

The Defense Acquisition University Press publishes this reference tool for students of the Defense Acquisition University and other members of the defense acquisition workforce. We continually seek customer inputs to maintain the integrity and usefulness of this publication.

We ask that you spend a few minutes evaluating this publication. Although we have identified several topic areas to evaluate, please feel free to add any additional thoughts as they apply to your particular situation. This information will assist us in improving future editions.

To assist you in responding, this pre-addressed form requires no postage. Simply tear out this page, fold, and tape it closed as indicated, then place it in the mail. Responses may also be faxed to the DAU Press at (703) 805-2917 or DSN 655-2917.

TEST AND EVALUATION MANAGEMENT GUIDE (January 2005)

1. Do you find this publication a useful reference tool?
YES ____ NO ____
How can we improve the level of usefulness?
____________________________________________________________________________
____________________________________________________________________________

2. Is the material presented in this publication current?
YES ____ NO ____

3. Using the rating scale below, please indicate your satisfaction level for each topic area. Place your response on the line beside the topic area.

1 = Very satisfied 2 = Satisfied 3 = No opinion 4 = Dissatisfied 5 = Very dissatisfied

_____ Readability
_____ Scope of coverage
_____ Contribution to your knowledge of the subject
_____ Contribution to your job effectiveness
_____ Contribution to process improvement

4. Are there other subject areas you would like to have included in this publication?
YES ___ (Please identify) _________________________________________________________
NO ___

5. If you recommend additions, deletions, or changes to this publication, please identify them below.
____________________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________

6. Additional comments
____________________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________

(THIS SECTION IS OPTIONAL)

NAME/TITLE_________________________________________________________________________________

ADDRESS____________________________________________________________________________________

CITY/STATE/ZIP______________________________________________________________________________

Commercial (____) ______________ DSN ______________ E-Mail ________________________
