SYSTEMS ENGINEERING FUNDAMENTALS

January 2001

SUPPLEMENTARY TEXT PREPARED BY THE
DEFENSE ACQUISITION UNIVERSITY PRESS
FORT BELVOIR, VIRGINIA 22060-5565
TABLE OF CONTENTS

PREFACE

PART 1. INTRODUCTION
Chapter 1. Introduction to Systems Engineering Management
Chapter 2. Systems Engineering Management in DoD Acquisition

PART 2. THE SYSTEMS ENGINEERING PROCESS
Chapter 3. Systems Engineering Process Overview
Chapter 4. Requirements Analysis
Chapter 5. Functional Analysis and Allocation
Chapter 6. Design Synthesis
Chapter 7. Verification
Chapter 8. Systems Engineering Process Outputs

PART 3. SYSTEM ANALYSIS AND CONTROL
Chapter 9. Work Breakdown Structure
Chapter 10. Configuration Management
Chapter 11. Technical Reviews and Audits
Chapter 12. Trade Studies
Chapter 13. Modeling and Simulation
Chapter 14. Metrics
Chapter 15. Risk Management

PART 4. PLANNING, ORGANIZING, AND MANAGING
Chapter 16. Systems Engineering Planning
Chapter 17. Product Improvement Strategies
Chapter 18. Organizing and Integrating System Development
Chapter 19. Contractual Considerations
Chapter 20. Management Considerations and Summary

GLOSSARY
PREFACE
This book provides a basic, conceptual-level description of engineering management disciplines that relate to the development and life cycle management of a system. For the non-engineer it provides an overview of how a system is developed. For the engineer and project manager it provides a basic framework for planning and assessing system development.

Information in the book is from various sources, but a good portion is taken from lecture material developed for the two Systems Planning, Research, Development, and Engineering courses offered by the Defense Acquisition University.

The book is divided into four parts: Introduction; Systems Engineering Process; Systems Analysis and Control; and Planning, Organizing, and Managing. The first part introduces the basic concepts that govern the systems engineering process and how those concepts fit the Department of Defense acquisition process. Chapter 1 establishes the basic concept and introduces terms that will be used throughout the book. The second chapter goes through a typical acquisition life cycle, showing how systems engineering supports acquisition decision making.

The second part introduces the systems engineering problem-solving process and discusses, in basic terms, some traditional techniques used in the process. An overview is given, and then the process of requirements analysis, functional analysis and allocation, design synthesis, and verification is explained in some detail. This part ends with a discussion of the documentation developed as the finished output of the systems engineering process.

Part three discusses analysis and control tools that provide balance to the process. Key activities (such as risk management, configuration management, and trade studies) that support and run parallel to the systems engineering process are identified and explained.

Part four discusses issues integral to the conduct of a systems engineering effort, from planning to consideration of broader management issues.

In some chapters supplementary sections provide related material that shows common techniques or policy-driven processes. These expand the basic conceptual discussion and give the student a clearer picture of what systems engineering means in a real acquisition environment.
PART 1
INTRODUCTION
CHAPTER 1

INTRODUCTION TO SYSTEMS ENGINEERING MANAGEMENT
1.1 PURPOSE
The overall organization of this text is described in the Preface. This chapter establishes some of the basic premises that are expanded throughout the book. Basic terms explained in this chapter are the foundation for following definitions. Key systems engineering ideas and viewpoints are presented, starting with a definition of a system.
1.2 DEFINITIONS
A System Is …
Simply stated, a system is an integrated composite of people, products, and processes that provides a capability to satisfy a stated need or objective.
Systems Engineering Is…
Systems engineering consists of two significant disciplines: the technical knowledge domain in which the systems engineer operates, and systems engineering management. This book focuses on the process of systems engineering management.

Three commonly used definitions of systems engineering are provided by the best known technical standards that apply to this subject. They all have a common theme:

• A logical sequence of activities and decisions that transforms an operational need into a description of system performance parameters and a preferred system configuration. (MIL-STD-499A, Engineering Management, 1 May 1974. Now cancelled.)

• An interdisciplinary approach that encompasses the entire technical effort, and evolves into and verifies an integrated and life cycle balanced set of system people, products, and process solutions that satisfy customer needs. (EIA Standard IS-632, Systems Engineering, December 1994.)

• An interdisciplinary, collaborative approach that derives, evolves, and verifies a life-cycle balanced system solution which satisfies customer expectations and meets public acceptability. (IEEE P1220, Standard for Application and Management of the Systems Engineering Process, [Final Draft], 26 September 1994.)

In summary, systems engineering is an interdisciplinary engineering management process that evolves and verifies an integrated, life-cycle balanced set of system solutions that satisfy customer needs.
Systems Engineering Management Is…
As illustrated by Figure 1-1, systems engineering management is accomplished by integrating three major activities:

• Development phasing that controls the design process and provides baselines that coordinate design efforts,

• A systems engineering process that provides a structure for solving design problems and tracking requirements flow through the design effort, and

• Life cycle integration that involves customers in the design process and ensures that the system developed is viable throughout its life.

Figure 1-1. Three Activities of Systems Engineering Management: Development Phasing (baselines, life cycle planning), the Systems Engineering Process, and Life Cycle Integration (integrated teaming) combine to form Systems Engineering Management.
Each one of these activities is necessary to achieve proper management of a development effort. Phasing has two major purposes: it controls the design effort and is the major connection between the technical management effort and the overall acquisition effort. It controls the design effort by developing design baselines that govern each level of development. It interfaces with acquisition management by providing key events in the development process, where design viability can be assessed. The viability of the baselines developed is a major input for acquisition management Milestone (MS) decisions. As a result, the timing and coordination between technical development phasing and the acquisition schedule is critical to maintain a healthy acquisition program.

The systems engineering process is the heart of systems engineering management. Its purpose is to provide a structured but flexible process that transforms requirements into specifications, architectures, and configuration baselines. The discipline of this process provides the control and traceability to develop solutions that meet customer needs. The systems engineering process may be repeated one or more times during any phase of the development process.

Life cycle integration is necessary to ensure that the design solution is viable throughout the life of the system. It includes the planning associated with product and process development, as well as the integration of multiple functional concerns into the design and engineering process. In this manner, product cycle-times can be reduced, and the need for redesign and rework substantially reduced.
1.3 DEVELOPMENT PHASING
Development usually progresses through distinct levels or stages:

• Concept level, which produces a system concept description (usually described in a concept study);

• System level, which produces a system description in performance requirement terms; and

• Subsystem/Component level, which produces first a set of subsystem and component product performance descriptions, then a set of corresponding detailed descriptions of the products' characteristics, essential for their production.

Figure 1-2. Development Phasing: Concept Studies lead, through successive stages of design definition, to System Definition (Functional Baseline), then Preliminary Design (Allocated Baseline), then Detail Design (Product Baseline).
The systems engineering process is applied to each level of system development, one level at a time, to produce these descriptions, commonly called configuration baselines. This results in a series of configuration baselines, one at each development level. These baselines become more detailed with each level.

In the Department of Defense (DoD) the configuration baselines are called the functional baseline for the system-level description, the allocated baseline for the subsystem/component performance descriptions, and the product baseline for the subsystem/component detail descriptions. Figure 1-2 shows the basic relationships between the baselines. The triangles represent baseline control decision points, and are usually referred to as technical reviews or audits.
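The baseline hierarchy can be pictured as a small data model. The sketch below is illustrative only: the Python class, field, and function names are invented for this example, and a real baseline is a controlled set of documents placed under configuration management, not an object in memory.

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """One configuration baseline produced by a pass of the SE process."""
    name: str            # functional, allocated, or product baseline
    level: str           # system, subsystem/component performance, or detail
    descriptions: list[str] = field(default_factory=list)
    approved: bool = False  # set True by the controlling review or audit

def approve(baseline: Baseline) -> None:
    # A technical review or audit is the control point that locks a baseline.
    baseline.approved = True

def may_start_lower_level(upper: Baseline) -> bool:
    # Work at the next level down should wait until the baseline above it
    # is complete, stable, and controlled (i.e., approved at review).
    return upper.approved

# The three DoD baselines, each more detailed than the one above it.
functional = Baseline("functional", "system",
                      ["system description in performance terms"])
allocated = Baseline("allocated", "subsystem/component performance",
                     ["subsystem A performance description",
                      "component A1 performance description"])
product = Baseline("product", "subsystem/component detail",
                   ["component A1 detailed product description"])

approve(functional)
print(may_start_lower_level(functional))   # allocated-level work may begin
print(may_start_lower_level(allocated))    # product-level work must wait
```

The gating function encodes the rule discussed next: significant development at a lower level should not occur until the baseline above it is under control.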
Levels of Development Considerations
Significant development at any given level in the system hierarchy should not occur until the configuration baselines at the higher levels are considered complete, stable, and controlled. Reviews and audits are used to ensure that the baselines are ready for the next level of development. As will be shown in the next chapter, this review and audit process also provides the necessary assessment of system maturity, which supports the DoD Milestone decision process.
1.4 THE SYSTEMS ENGINEERING PROCESS

The systems engineering process is a top-down, comprehensive, iterative, and recursive problem-solving process, applied sequentially through all stages of development, that is used to:

• Transform needs and requirements into a set of system product and process descriptions (adding value and more detail with each level of development),

• Generate information for decision makers, and

• Provide input for the next level of development.

As illustrated by Figure 1-3, the fundamental systems engineering activities are Requirements Analysis, Functional Analysis and Allocation, and Design Synthesis, all balanced by techniques and tools collectively called System Analysis and Control. Systems engineering controls are used to track decisions and requirements, maintain technical baselines, manage interfaces, manage risks, track cost and schedule, track technical performance, verify requirements are met, and review/audit the progress.

Figure 1-3. The Systems Engineering Process: Process Input feeds Requirements Analysis, which is tied to Functional Analysis and Allocation by the Requirements Loop; Functional Analysis and Allocation is tied to Design Synthesis by the Design Loop; Verification and System Analysis and Control (Balance) span all three activities and lead to the Process Output.

During the systems engineering process, architectures are generated to better describe and understand the system. The word "architecture" is used in various contexts in the general field of engineering. It is used as a general description of how the subsystems join together to form the system. It can also be a detailed description of an aspect of a system: for example, the Operational, System, and Technical Architectures used in Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) and software-intensive developments. However, Systems Engineering Management as developed in DoD recognizes three universally usable architectures that describe important aspects of the system: functional, physical, and system architectures. This book will focus on these architectures as necessary components of the systems engineering process.

The Functional Architecture identifies and structures the allocated functional and performance requirements. The Physical Architecture depicts the system product by showing how it is broken down into subsystems and components. The System Architecture identifies all the products (including enabling products) that are necessary to support the system and, by implication, the processes necessary for development, production/construction, deployment, operations, support, disposal, training, and verification.
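One pass of the process in Figure 1-3 can be sketched in code. Everything below is a toy illustration: the function names, the dictionary representations, and the sample needs are invented for this example, and real traceability lives in specifications, architectures, and baselines rather than Python dictionaries.

```python
# One illustrative pass of the systems engineering process.

def requirements_analysis(needs):
    """Turn customer needs into verifiable 'shall' requirements."""
    return [f"The system shall {need}" for need in needs]

def functional_analysis_allocation(requirements):
    """Decompose requirements into functions allocated against them."""
    return {req: f"function-{i}" for i, req in enumerate(requirements)}

def design_synthesis(functions):
    """Propose physical elements that perform the allocated functions."""
    return {fn: f"component-for-{fn}" for fn in functions.values()}

def verify(requirements, functions, design):
    """System analysis and control: every requirement must trace to a
    function (requirements loop) and every function to a design element
    (design loop)."""
    reqs_covered = all(req in functions for req in requirements)
    fns_covered = all(fn in design for fn in functions.values())
    return reqs_covered and fns_covered

needs = ["detect targets at long range", "operate for 8 hours unrefueled"]
reqs = requirements_analysis(needs)
fns = functional_analysis_allocation(reqs)
design = design_synthesis(fns)
print(verify(reqs, fns, design))  # True: traceability holds in both loops
```

Repeating this pass at the next level of development, with the output of one pass as the input of the next, mirrors the one-level-at-a-time application described in Section 1.3.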
Life Cycle Integration
Life cycle integration is achieved through integrated development; that is, concurrent consideration of all life cycle needs during the development process. DoD policy requires integrated development, called Integrated Product and Process Development (IPPD) in DoD, to be practiced at all levels in the acquisition chain of command, as will be explained in the chapter on IPPD. Concurrent consideration of all life cycle needs can be greatly enhanced through the use of interdisciplinary teams. These teams are often referred to as Integrated Product Teams (IPTs).
The objective of an Integrated Product Team is to:

• Produce a design solution that satisfies initially defined requirements, and

• Communicate that design solution clearly, effectively, and in a timely manner.

Multi-functional, integrated teams:

• Place balanced emphasis on product and process development, and

• Require early involvement of all disciplines appropriate to the team task.

Design-level IPT members are chosen to meet the team objectives and generally have distinctive competence in:

• Technical management (systems engineering),

• Life cycle functional areas (eight primary functions),

• Technical specialty areas, such as safety, risk management, quality, etc., or

• When appropriate, business areas such as finance, cost/budget analysis, and contracting.
Life Cycle Functions
Life cycle functions are the characteristic actions associated with the system life cycle. As illustrated by Figure 1-4, they are development, production and construction, deployment (fielding), operation, support, disposal, training, and verification. These activities cover the "cradle to grave" life cycle process and are associated with major functional groups that provide essential support to the life cycle process. These key life cycle functions are commonly referred to as the eight primary functions of systems engineering.

The customers of the systems engineer perform the life-cycle functions. The system user's needs are emphasized because their needs generate the requirement for the system, but it must be remembered that all of the life-cycle functional areas generate requirements for the systems engineering process once the user has established the basic need. Those that perform the primary functions also provide life-cycle representation in design-level integrated teams.
Primary Function Definitions
Development includes the activities required to evolve the system from customer needs to product or process solutions.

Manufacturing/Production/Construction includes the fabrication of engineering test models and "brass boards," low rate initial production, full-rate production of systems and end items, or the construction of large or unique systems or subsystems.

Deployment (Fielding) includes the activities necessary to initially deliver, transport, receive, process, assemble, install, checkout, train, operate, house, store, or field the system to achieve full operational capability.
Figure 1-4. Primary Life Cycle Functions: the eight primary life cycle functions are Development; Manufacturing/Production/Construction; Deployment; Operation; Support; Disposal; Training; and Verification.
Operation is the user function and includes activities necessary to satisfy defined operational objectives and tasks in peacetime and wartime environments.

Support includes the activities necessary to provide operations support, maintenance, logistics, and material management.

Disposal includes the activities necessary to ensure that the disposal of decommissioned, destroyed, or irreparable system components meets all applicable regulations and directives.

Training includes the activities necessary to achieve and maintain the knowledge and skill levels necessary to efficiently and effectively perform operations and support functions.

Verification includes the activities necessary to evaluate progress and effectiveness of evolving system products and processes, and to measure specification compliance.
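The eight primary functions lend themselves to a simple enumeration, which could be used, for example, to check that a design-level IPT has life-cycle representation from each functional area. The coverage check below is a hypothetical illustration of that idea, not a DoD-prescribed practice.

```python
from enum import Enum

class PrimaryFunction(Enum):
    """The eight primary life cycle functions named in the text."""
    DEVELOPMENT = "Development"
    PRODUCTION = "Manufacturing/Production/Construction"
    DEPLOYMENT = "Deployment (Fielding)"
    OPERATION = "Operation"
    SUPPORT = "Support"
    DISPOSAL = "Disposal"
    TRAINING = "Training"
    VERIFICATION = "Verification"

def uncovered_functions(team_coverage):
    """Return the primary functions an IPT has not yet represented
    (an invented completeness check for illustration)."""
    return sorted(f.value for f in PrimaryFunction if f not in team_coverage)

# Example: a team with only development and test representation so far.
team = {PrimaryFunction.DEVELOPMENT, PrimaryFunction.VERIFICATION}
print(len(PrimaryFunction))        # 8 primary functions in all
print(uncovered_functions(team))   # the six functions still unrepresented
```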
Systems Engineering Considerations
Systems engineering is a standardized, disciplined management process for development of system solutions that provides a constant approach to system development in an environment of change and uncertainty. It also provides for simultaneous product and process development, as well as a common basis for communication.
Systems engineering ensures that the correct technical tasks get done during development through planning, tracking, and coordinating. Responsibilities of systems engineers include:

• Development of a total system design solution that balances cost, schedule, performance, and risk,

• Development and tracking of technical information needed for decision making,

• Verification that technical solutions satisfy customer requirements,
• Development of a system that can be produced economically and supported throughout the life cycle,

• Development and monitoring of internal and external interface compatibility of the system and subsystems using an open systems approach,

• Establishment of baselines and configuration control, and

• Proper focus and structure for system and major subsystem-level design IPTs.
1.5 GUIDANCE
DoD 5000.2-R establishes two fundamental requirements for program management:

• It requires that an Integrated Product and Process approach be taken to design wherever practicable, and

• It requires that a disciplined systems engineering process be used to translate operational needs and/or requirements into a system solution.
Tailoring the Process
Systems engineering is applied during all acquisition and support phases for large- and small-scale systems, new developments or product improvements, and single and multiple procurements. The process must be tailored for different needs and/or requirements. Tailoring considerations include system size and complexity, level of system definition detail, scenarios and missions, constraints and requirements, technology base, major risk factors, and organizational best practices and strengths.

For example, systems engineering of software should follow the basic systems engineering approach as presented in this book. However, it must be tailored to accommodate the software development environment, and the unique progress tracking and verification problems software development entails. In a like manner, all technology domains are expected to bring their own unique needs to the process.
This book provides a conceptual-level description of systems engineering management. The specific techniques, nomenclature, and recommended methods are not meant to be prescriptive. Technical managers must tailor their systems engineering planning to meet their particular requirements and constraints, environment, technical domain, and schedule/budget situation.

However, the basic time-proven concepts inherent in the systems engineering approach must be retained to provide continuity and control. For complex system designs, a full and documented understanding of what the system must do should precede development of component performance descriptions, which should precede component detail descriptions. Though some parts of the system may be dictated as a constraint or interface, in general, solving the design problem should start with analyzing the requirements and determining what the system has to do before physical alternatives are chosen. Configurations must be controlled and risk must be managed.

Tailoring of this process has to be done carefully to avoid the introduction of substantial unseen risk and uncertainty. Without the control, coordination, and traceability of systems engineering, an environment of uncertainty results which will lead to surprises. Experience has shown that these surprises almost invariably lead to significant impacts to cost and schedule. Tailored processes that reflect the general conceptual approach of this book have been developed and adopted by professional societies, academia, industry associations, government agencies, and major companies.
1.6 SUMMARY POINTS
• Systems engineering management is a multi-functional process that integrates life cycle functions, the systems engineering problem-solving process, and progressive baselining.
• The systems engineering process is a problem-solving process that drives the balanced development of system products and processes.

• Integrated Product Teams should apply the systems engineering process to develop a life cycle balanced design solution.

• The systems engineering process is applied to each level of development, one level at a time.

• Fundamental systems engineering activities are Requirements Analysis, Functional Analysis/Allocation, and Design Synthesis, all of which are balanced by System Analysis and Control.

• Baseline phasing provides for an increasing level of descriptive detail of the products and processes with each application of the systems engineering process.

• Baselining in a nutshell is a concept description that leads to a system definition which, in turn, leads to component definitions, and then to component designs, which finally lead to a product.

• The output of each application of the systems engineering process is a major input to the next process application.
CHAPTER 2

SYSTEMS ENGINEERING MANAGEMENT IN DOD ACQUISITION

2.1 INTRODUCTION

The DoD acquisition process has its foundation in federal policy and public law. The development, acquisition, and operation of military systems is governed by a multitude of public laws, formal DoD directives, instructions and manuals, numerous Service and Component regulations, and many inter-service and international agreements.

Managing the development and fielding of military systems requires three basic activities: technical management, business management, and contract management. As described in this book, systems engineering management is the technical management component of DoD acquisition management.

The acquisition process runs parallel to the requirements generation process and the budgeting process (Planning, Programming, and Budgeting System). User requirements tend to be event driven by threat. The budgeting process is date driven by constraints of the Congressional calendar. Systems engineering management bridges these processes and must resolve the dichotomy of event driven needs, event driven technology development, and a calendar driven budget.

Direction and Guidance

The Office of Management and Budget (OMB) provides top-level guidance for planning, budgeting, and acquisition in OMB Circular A-11, Part 3, and the Supplemental Capital Programming Guide: Planning, Budgeting, and Acquisition of Capital Assets, July 1997. These documents establish the broad responsibilities and ground rules to be followed in funding and acquiring major assets. The departments of the executive branch of government are then expected to draft their own guidance consistent with the guidelines established. The principal guidance for defense system acquisitions is the DoD 5000 series of directives and regulations. These documents reflect the actions required of DoD acquisition managers to:

• Translate operational needs into stable, affordable programs,

• Acquire quality products, and

• Organize for efficiency and effectiveness.

2.2 RECENT CHANGES

The DoD 5000 series documents were revised in 2000 to make the process more flexible, enabling the delivery of advanced technology to warfighters more rapidly and at reduced total ownership cost. The new process encourages multiple entry points, depending on the maturity of the fundamental technologies involved, and the use of evolutionary methods to define and develop systems. This encourages a tailored approach to acquisition and engineering management, but it does not alter the basic logic of the underlying systems engineering process.

2.3 ACQUISITION LIFE CYCLE

The revised acquisition process for major defense systems is shown in Figure 2-1. The process is defined by a series of phases during which technology is defined and matured into viable concepts, which are subsequently developed and readied for production, after which the systems produced are supported in the field.

Figure 2-1. Revised DoD 5000 Acquisition Process: Milestones A, B, and C separate the Concept and Technology Development, System Development and Demonstration, and Production and Deployment phases, followed by Sustainment and Disposal; process entry is permitted at Milestones A, B, or C (or within phases), with program outyear funding when it makes sense, but no later than Milestone B; pre-systems acquisition leads into systems acquisition (engineering development, demonstration, LRIP and production) and then sustainment and maintenance; the MNS and ORD requirements, all validated by the JROC, feed the process, which may proceed by a single step or evolution to full capability, with IOC following Milestone C.

The process allows for a given system to enter the process at any of the development phases. For example, a system using unproven technology would enter at the beginning stages of the process and would proceed through a lengthy period of technology maturation, while a system based on mature and proven technologies might enter directly into engineering development or, conceivably, even production. The process itself (Figure 2-1) includes four phases of development. The first, Concept and Technology Development, is intended to explore alternative concepts based on assessments of operational needs, technology readiness, risk, and affordability. Entry into this phase does not imply that DoD has committed to a new acquisition program; rather, it is the initiation of a process to determine whether or not a need (typically described in a Mission Need Statement (MNS)) can be met at reasonable levels of technical risk and at affordable costs. The decision to enter into the Concept and Technology Development phase is made formally at the Milestone A forum.

The Concept and Technology Development phase begins with concept exploration. During this stage, concept studies are undertaken to define alternative concepts and to provide information about capability and risk that would permit an objective comparison of competing concepts. A decision review is held after completion of the concept exploration activities. The purpose of this review is to determine whether further technology development is required, or whether the system is ready to enter into system acquisition. If the key technologies involved are reasonably mature and have already been demonstrated, the Milestone Decision Authority (MDA) may agree to allow the system to proceed into system acquisition; if not, the system may be directed into a component advanced development stage. (See Supplement A to this chapter for a definition of Technology Readiness Levels.) During this stage, system architecture definition will continue and key technologies will be demonstrated in order to ensure that technical and cost risks are understood and are at acceptable levels prior to entering acquisition. In any event, the Concept and Technology Development phase ends with a defined system architecture supported by technologies that are at acceptable levels of maturity to justify entry into system acquisition.
Formal system acquisition begins with a Milestone B decision. The decision is based on an integrated assessment of technology maturity, user requirements, and funding. A successful Milestone B is followed by the System Development and Demonstration phase. This phase could be entered directly as a result of a technological opportunity and urgent user need, as well as having come through concept and technology development. The System Development and Demonstration phase consists of two stages of development, system integration and system demonstration. Depending upon the maturity level of the system, it could enter at either stage, or the stages could be combined. This is the phase during which the technologies, components and subsystems defined earlier are first integrated at the system level, and then demonstrated and tested. If the system has never been integrated into a complete system, it will enter this phase at the system integration stage. When subsystems have been integrated, prototypes demonstrated, and risks are considered acceptable, the program will normally enter the system demonstration stage following an interim review by the MDA to ensure readiness. The system demonstration stage is intended to demonstrate that the system has operational utility consistent with the operational requirements. Engineering demonstration models are developed, and system-level development testing and operational assessments are performed to ensure that the system performs as required. These demonstrations are to be conducted in environments that represent the eventual operational environments intended. Once a system has been demonstrated in an operationally relevant environment, it may enter the Production and Deployment phase.

The Production and Deployment phase consists of two stages: production readiness and low rate initial production (LRIP), and rate production and deployment. The decision forum for entry into this phase is the Milestone C event. Again, the fundamental issue as to where a program enters the process is a function of technology maturity, so the possibility exists that a system could enter directly into this phase if it were sufficiently mature, for example, a commercial product to be produced for defense applications. However the entry is made, directly or through the maturation process described, the production readiness and LRIP stage is where initial operational test, live fire test, and low rate initial production are conducted. Upon completion of the LRIP stage and following a favorable Beyond LRIP test report, the system enters the rate production and deployment stage during which the item is produced and deployed to the user. As the system is produced and deployed, the final phase, Sustainment and Disposal, begins.
The last, and longest, phase is the Sustainment and Disposal phase of the program. During this phase all necessary activities are accomplished to maintain and sustain the system in the field in the most cost-effective manner possible. The scope of activities is broad and includes everything from maintenance and supply to safety, health, and environmental management. This period may also include transition from contractor to organic support, if appropriate. During this phase, modifications and product improvements are usually implemented to update and maintain the required levels of operational capability as technologies and threat systems evolve. At the end of the system service life it is disposed of in accordance with applicable classified and environmental laws, regulations, and directives. Disposal activities also include recycling, material recovery, salvage or reutilization, and disposal of by-products from development and production.
The key to this model of the acquisition process is that programs have the flexibility to enter at any of the first three phases described. The decision as to where the program should enter the process is primarily a function of user needs and technology maturity. The MDA makes the decision for the program in question. Program managers are encouraged to work with their users to develop evolutionary acquisition strategies that will permit deliveries of usable capabilities in as short a timeframe as possible, with improvements and enhancements added as needed through continuing
definition of requirements and development activities to support the evolving needs.
2.4 SYSTEMS ENGINEERING IN ACQUISITION
As required by DoD 5000.2-R, the systems engineering process shall:
1. Transform operational needs and requirements into an integrated system design solution through concurrent consideration of all life-cycle needs (i.e., development, manufacturing, test and evaluation, verification, deployment, operations, support, training and disposal);
2. Ensure the compatibility, interoperability and integration of all functional and physical interfaces and ensure that system definition and design reflect the requirements for all system elements: hardware, software, facilities, people, and data;
3. Characterize and manage technical risks; and
4. Apply scientific and engineering principles to identify security vulnerabilities and to minimize or contain associated information assurance and force protection risks.
These objectives are accomplished through use of the management concepts and techniques described in the chapters which follow in this book. The application of systems engineering management coincides with acquisition phasing. In order to support milestone decisions, major technical reviews are conducted to evaluate system design maturity.
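The systems engineering process invoked throughout this chapter (requirements analysis, functional analysis and allocation, design synthesis, and verification, balanced by system analysis and control) can be sketched as an iterative loop. The fragment below is a schematic illustration only: the function names and the toy representation of requirements as strings are invented for this sketch, and real applications of the process are analytical activities, not code.

```python
# Toy stand-ins: requirements are strings; a "design" is modeled as the
# set of requirements it satisfies. All names here are illustrative.

def analyze_requirements(needs):
    # Requirements analysis: translate operational needs into requirements.
    return set(needs)

def allocate_functions(requirements):
    # Functional analysis and allocation: one function per requirement
    # in this deliberately simple model.
    return {f"perform:{r}" for r in requirements}

def synthesize_design(functions, maturity):
    # Design synthesis: an immature design covers only some of the
    # allocated functions; maturity grows with each design-loop pass.
    return {f.split(":", 1)[1] for f in sorted(functions)[:maturity]}

def verify(design, requirements):
    # Verification: does the design satisfy every requirement?
    return requirements <= design

def systems_engineering_process(needs, max_iterations=10):
    requirements = analyze_requirements(needs)
    maturity = 1
    for _ in range(max_iterations):
        functions = allocate_functions(requirements)     # functional analysis
        design = synthesize_design(functions, maturity)  # design synthesis
        if verify(design, requirements):                 # verification
            return design
        maturity += 1  # design loop: rework and rebalance the design
    raise RuntimeError("process did not converge; rebalance the effort")

design = systems_engineering_process(["range", "payload", "speed"])
```

The point of the sketch is the loop structure: synthesis and verification repeat, with feedback, until the design satisfies the requirements, mirroring the requirements and design loops shown in the figures of this chapter.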
Concept and Technology Development
The Concept and Technology Development phase consists of two pre-acquisition stages of development. The first, Concept Exploration, is represented in Figure 2-2. The exploration of concepts is usually accomplished through multiple short-term studies. Development of these studies is
Figure 2-2. Concept and Technology Development (Concept Exploration Stage)

[Diagram: the MNS and technology opportunity assessments feed analysis of alternatives, operational analysis, R&D activities, technology opportunity assessments and analysis, market research, business process reengineering, and the systems engineering process (system architecting). The outputs (alternative concepts defined, key requirements defined, and key cost, schedule, and performance objectives established) support ORD development and preferred concepts at the Milestone A decision review.]
Figure 2-3. Concept and Technology Development (Component Advanced Development Stage)
expected to employ various techniques including the systems engineering process that translates inputs into viable concept architectures whose functionality can be traced to the requirements. In addition, market surveys, Business Process Reengineering activities, operational analysis, and trade studies support the process.
The primary inputs to these activities include requirements, in the form of the MNS, assessments of technology opportunities and status, and the outputs from any efforts undertaken to explore potential solutions. When the contractor studies are complete, a specific concept to be pursued is selected based on an integrated assessment of technical performance; technical, schedule and cost risk; as well as other relevant factors. A decision review is then held to evaluate the concept recommended and the state of technology upon which the concept depends. The MDA then makes a decision as to whether the concept development work needs to be extended or redirected, or whether the technology maturity is such that the program can proceed directly to either Milestone B (System Development and Demonstration) or Milestone C (Production and Deployment).
If the details of the concept require definition, i.e., the system has not been designed and demonstrated previously, or the system appears to be based on technologies that hold significant risk, then it is likely that the system will proceed to the second stage of the Concept and Technology Development phase. This stage, Component Advanced Development, is represented in Figure 2-3. This is also a pre-acquisition stage of development and is usually characterized by extensive involvement of the DoD Science and Technology (S&T) community. The fundamental objectives of this stage of development are to define a system-level architecture and to accomplish risk-reduction activities sufficient to establish confidence that the building blocks of the system are well enough defined, tested, and demonstrated that, when integrated into higher-level assemblies and subsystems, they will perform reliably.
Development of a system-level architecture entails continuing refinement of system-level requirements based on comparative analyses of the system concepts under consideration. It also requires that consideration be given to the role that the system
[Figure 2-3 diagram: from the concept decision review, continued concept exploration activities, advanced concept technology demonstrations, and the systems engineering process produce a developed system architecture and demonstrated component technologies, supporting ORD development toward Milestone B.]
will play in the system of systems of which it will be a part. System-level interfaces must be established. Communications and interoperability requirements must be established, data flows defined, and operational concepts refined. Top-level planning should also address the strategies that will be employed to maintain the supportability and affordability of the system over its life cycle, including the use of common interface standards and open systems architectures. Important design requirements such as interoperability, open systems, and the use of commercial components should also be addressed during this stage of the program.
Risk reduction activities such as modeling and simulation, component testing, bench testing, and man-in-the-loop testing are emphasized as decisions are made regarding the various technologies that must be integrated to form the system. The primary focus at this stage is to ensure that the key technologies that represent the system components (assemblies and subsystems) are well understood and are mature enough to justify their use in a system design and development effort. The next stage of the life cycle involves engineering development, so research and development (R&D) activities conducted within the science and technology appropriations should be completed during this stage.
System Development and Demonstration
The decision forum for entry into the System Development and Demonstration (SD&D) phase is the Milestone B event. Entry into this phase represents program initiation, the formal beginning of a system acquisition effort. This is the government commitment to pursue the program. Entry requires mature technology, validated requirements, and funding. At this point, the program requirement must be defined by an Operational Requirements Document (ORD). This phase consists of two primary stages, system integration (Figure 2-4) and system demonstration (Figure 2-5).
Figure 2-4. System Development and Demonstration (System Integration Stage)

[Diagram: entering at Milestone B with the system-level architecture and ORD from the previous phase, a system definition effort applies the systems engineering process (requirements analysis, functional analysis and allocation, design synthesis, and verification, balanced by system analysis and control) through a requirements review to an approved functional baseline; preliminary and detail design efforts then produce a draft allocated baseline, followed by prototype demonstration, a technical review, and an IPR.]
Figure 2-5. System Development and Demonstration (System Demonstration Stage)
There is no hard and fast guidance that stipulates precisely how the systems engineering process is to intersect with the DoD acquisition process. There are no specified technical events, e.g., DoD-designated technical reviews, that are to be accomplished during identified stages of the SD&D phase. However, the results of a SD&D phase should support a production go-ahead decision at Milestone C. That being the case, the process described below reflects a configuration control approach that includes a system-level design (functional baseline), final preliminary designs (allocated baselines), and detail designs (initial product baselines). Along with their associated documentation, they represent the systems engineering products developed during SD&D that are most likely needed to appropriately support Milestone C.
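The progression from functional to allocated to product baseline can be sketched as a simple ordered state model. The Python fragment below is illustrative only; the class and method names are invented for this example and do not come from any DoD standard or configuration management tool.

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    name: str                                      # "functional", "allocated", "product"
    artifacts: list = field(default_factory=list)  # specs, drawings, documentation
    approved: bool = False

class ConfigurationControl:
    """Tracks the baselines a program establishes during SD&D."""

    ORDER = ["functional", "allocated", "product"]

    def __init__(self):
        self.baselines = {}

    def establish(self, name, artifacts):
        # Baselines are established in the order the text describes:
        # system-level design first, then preliminary designs, then
        # detail designs. Skipping ahead is rejected.
        idx = self.ORDER.index(name)
        for earlier in self.ORDER[:idx]:
            prior = self.baselines.get(earlier)
            if prior is None or not prior.approved:
                raise ValueError(f"{earlier} baseline must be approved before {name}")
        self.baselines[name] = Baseline(name, list(artifacts))

    def approve(self, name):
        self.baselines[name].approved = True

# Usage: the functional baseline is set and approved before the
# allocated baseline is established.
cc = ConfigurationControl()
cc.establish("functional", ["system requirements specification"])
cc.approve("functional")
cc.establish("allocated", ["item performance specifications"])
```

The ordering check captures the chapter's point that each baseline builds on an approved predecessor, so that requirements flow down under control rather than ad hoc.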
System Integration is that stage of SD&D that applies to systems that have yet to achieve system-level design maturity as demonstrated by the integration of components at the system level in relevant environments. For an unprecedented system (one not previously defined and developed), this stage will continue the work begun in the component advanced development stage, but the flavor of the effort now becomes oriented to engineering development, rather than the research-oriented efforts that preceded this stage. A formal ORD, technology assessments, and a high-level system architecture have been established. (These will form major inputs to the systems engineering process.) The engineering focus becomes establishment and agreement on system-level technical requirements stated such that designs based on those technical requirements will meet the intent of the operational requirements. The system-level technical requirements are stabilized and documented in an approved system-level requirements specification. In addition, the system-level requirements baseline (functional baseline) is established. This baseline is verified by development and demonstration of prototypes that show that key technologies can be integrated and that associated risks are sufficiently low to justify developing the system.
[Figure 2-5 diagram: after the IPR, the functional baseline and revised ORD drive repeated passes through the systems engineering process (requirements analysis, functional analysis, design synthesis, and verification, balanced by system analysis and control) during the preliminary design effort (approved functional and allocated baselines), the detail design effort (draft product baseline) with design reviews, and production readiness and design completion, culminating in system demonstration, the initial product baseline, a technical review, and Milestone C.]
Program initiation signals the transition from an S&T focus to management by the program office. The R&D community, the users, and the program office may have all been involved in defining the concepts and technologies that will be key to the system development. It is appropriate at this point, therefore, to conduct a thorough requirements analysis and review to ensure that the user, the contractor, and the program office all hold a common view of the requirements and to preserve the lessons learned through the R&D efforts conducted in the earlier phase. The risk at this point can be high, because misunderstandings and errors regarding system-level requirements will flow down to subsequent designs and can eventually result in overruns and even program failure. The contractor will normally use the occasion of the system requirements review early in this stage to set the functional baseline that will govern the flow-down of requirements to lower-level items as preliminary designs are elaborated.
The Interim Progress Review held between System Integration and System Demonstration has no established agenda. The agenda is defined by the MDA and can be flexible in its timing and content. Because of the flexibility built into the acquisition process, not all programs will conform to the model presented here. Programs may find themselves in various stages of preliminary design and detailed design as the program passes from one stage of the SD&D phase to the succeeding stage. With these caveats, System Demonstration (Figure 2-5) is the stage of the SD&D phase during which preliminary and detailed designs are elaborated, engineering demonstration models are fabricated, and the system is demonstrated in operationally relevant environments.
System-level requirements are flowed down to the lower-level items in the architecture and requirements are documented in the item performance specifications, which represent the preliminary design requirements for those items. The item performance specifications and supporting documentation, when finalized, together form the allocated baseline for the system. Design then proceeds toward the elaboration of a detailed design for the product or system. The product baseline is drafted as the design is elaborated. This physical description of the system may change as a result of testing that will follow, but it forms the basis for initial fabrication and demonstration of these items. If the system has been previously designed and fabricated, then, clearly, this process would be curtailed to take advantage of work already completed.
Following the elaboration of the detailed design, components and subsystems are fabricated, integrated, and tested in a bottom-up approach until system-level engineering demonstration models are developed. These demonstration models are not, as a rule, production-representative systems. Rather, they are system demonstration models, or integrated commercial items, that serve the purpose of enabling the developer to accomplish development testing on the integrated system. These models are often configured specifically to enable testing of critical elements of the system. For example, in an aircraft development, there may be separate engineering demonstration models developed specifically to test the integrated avionics subsystems, while others demonstrate the flying qualities and flight control subsystems.
For purposes of making decisions relative to progress through the acquisition process, these system-level demonstrations are not intended to be restricted to laboratory tests and demonstrations. They are expected to include rigorous demonstrations that the integrated system is capable of performing operationally useful tasks under conditions that, while not necessarily equal to the rigor of formal operational testing, represent the eventual environment in which the system must perform. The results of these demonstrations provide the confidence required to convince the decision maker (MDA) that the system is ready to enter the production phase of the life cycle. This implies that the system has demonstrated not only that technical performance is adequate, but also that the affordability, supportability, and producibility risks are sufficiently low to justify a production decision.
Figure 2-6. Production and Deployment
Production and Deployment
Milestone C is the decision forum for entry into the Production and Deployment phase of the program. Like other phases, this phase is also divided into stages of development. Production Readiness and LRIP is the first of these. At this point, system-level demonstrations have been accomplished and the product baseline is defined (although it will be refined as a result of the activities undertaken during this phase). The effort is now directed toward development of the manufacturing capability that will produce the product or system under development. When a manufacturing capability is established, a LRIP effort begins.
The development of a LRIP manufacturing capability has multiple purposes. The items produced are used to proof and refine the production line itself; items produced on this line are used for Initial Operational Test and Evaluation (IOT&E) and Live Fire Test and Evaluation (LFT&E); and this is also the means by which manufacturing rates are ramped upward to the rates intended when manufacturing is fully underway.
Following the completion of formal testing, the submission of required Beyond-LRIP and Live Fire Test reports, and a full-rate production decision by the MDA, the system enters the Rate Production and Deployment stage. After the decision to go to full-rate production, the systems engineering process is used to refine the design to incorporate findings of the independent operational testing, direction from the MDA, and feedback from deployment activities. Once configuration changes have been made and incorporated into production, and the configuration and production is considered stable, Follow-on Operational Test and Evaluation (FOT&E), if required, is typically performed on the stable production system. Test results are used to further refine the production configuration. Once this has been accomplished and production again becomes stable, detailed audits are held to
[Figure 2-6 diagram: from Milestone C and the ORD, the production readiness and LRIP stage establishes the manufacturing capability and conducts low rate initial production, initial operational test, and live fire test; after a decision review, the rate production and deployment stage proceeds through full rate production and deployment, supported by the systems engineering process and concluded by functional and physical configuration audits leading to the next phase.]
confirm that the Product Baseline documentation correctly describes the system being produced. The Product Baseline is then put under formal configuration control.
As the system is produced, individual items are delivered to the field units that will actually employ and use them in their military missions. Careful coordination and planning is essential to make the deployment as smooth as possible. Integrated planning is absolutely critical to ensure that the training, equipment, and facilities that will be required to support the system, once deployed, are in place as the system is delivered. The systems engineering function during this activity is focused on the integration of the functional specialties to make certain that no critical omission has been made that will render the system less effective than it might otherwise be. Achieving the user's required initial operational capability (IOC) schedule demands careful attention to the details of the transition at this point. Furthermore, as the system is delivered and operational capability achieved, the system transitions to the Sustainment and Disposal phase of the system life cycle, the longest and most expensive of all phases.
Sustainment and Disposal
There is no separate milestone decision required for a program to enter this phase of the system life cycle. The requirement for the Sustainment phase is implicit in the decision to produce and deploy the system. This phase overlaps the Production phase. Systems engineering activities in the Sustainment phase are focused on maintaining the system's performance capability relative to the threat the system faces. If the military threat changes or a technology opportunity emerges, then the system may require modification. These modifications must be approved at an appropriate level for the particular change being considered. The change then drives the initiation of new systems engineering processes, starting the cycle (or parts of it) all over again.
Figure 2-7. Sustainment and Disposal
[Figure 2-7 diagram: production items enter sustainment, where evolutionary requirements development, engineering change proposals, block modifications, and test and evaluation feed the systems engineering process; disposal concludes the life cycle.]
Also, in an evolutionary development environment, there will be a continuing effort to develop and refine additional operational requirements based on the experience of the user with the portion of the system already delivered. As new requirements are generated, a new development cycle begins, with technology demonstrations, risk reduction, system demonstrations and testing, the same cycle just described, all tailored to the specific needs and demands of the technology to be added to the core system already delivered.
The final activity in the system life cycle is Disposal. System engineers plan for and conduct system disposal throughout the life cycle, beginning with concept development. System components can require disposal because of decommissioning, their destruction, or irreparable damage. In addition, processes and material used for development, production, operation, or maintenance can raise disposal issues throughout the life cycle. Disposal must be done in accordance with applicable laws, regulations, and directives that are continually changing, usually to impose more severe constraints. They mostly relate to security and environmental issues that include recycling, material recovery, salvage, and disposal of by-products from development and production.
Every Development is Different
The process described above is intended to be very flexible in application. There is no "typical" system acquisition. The process is therefore defined to accommodate a wide range of possibilities, from systems that have been proven in commercial applications and are being purchased for military use, to systems that are designed and developed essentially from scratch. The path that the system development takes through the process will depend primarily on the level of maturity of the technology employed. As explained in the preceding discussion, if the system design will rely significantly on the use of proven or commercial items, then the process can be adjusted to allow the system to skip phases, or move quickly from stage to stage within phases. If the type of system is well understood within the applicable technical domains, or it is an advanced version of a current well-understood system, then the program definition and risk reduction efforts could be adjusted appropriately.
It is the role of the system engineer to advise the program manager of the recommended path that the development should take, outlining the reasons for that recommendation. The decision as to the appropriate path through the process is actually made by the MDA, normally based on the recommendation of the program manager. The process must be tailored to the specific development, both because it is good engineering and because it is DoD policy as part of the Acquisition Reform initiative. But tailoring must be done with the intent of preserving the requirements traceability, baseline control, life-cycle focus, maturity tracking, and integration inherent in the systems engineering approach. The validity of tailoring the process should always be a risk management issue. Acquisition Reform issues will be addressed again in Part IV of this text.
2.5 SUMMARY POINTS
• The development, acquisition, and operation of military systems is governed by a multitude of public laws, formal DoD directives, instructions and manuals, numerous Service and Component regulations, and many inter-service and international agreements.
• The system acquisition life cycle process is a model used to guide the program manager through the process of maturing technology-based systems and readying them for production and deployment to military users.
• The acquisition process model is intended to be flexible and to accommodate systems and technologies of varying maturities. Systems dependent on immature technologies will take longer to develop and produce, while those that employ mature technologies can proceed through the process relatively quickly.
• The systems engineering effort is integrated into the systems acquisition process such that the activities associated with systems engineering
(development of documentation, technical reviews, configuration management, etc.) support and strengthen the acquisition process. The challenge for the engineering manager is to ensure that engineering activities are conducted at appropriate points in the process so that the system has, in fact, achieved the levels of maturity expected prior to progressing into succeeding phases.
SUPPLEMENT 2-A
TECHNOLOGY READINESS LEVELS
Technology Readiness Levels and Descriptions

1. Basic principles observed and reported.
   Lowest level of technology readiness. Scientific research begins to be translated into technology's basic properties.

2. Technology concept and/or application formulated.
   Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies.

3. Analytical and experimental critical function and/or characteristic proof of concept.
   Active R&D is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

4. Component and/or breadboard validation in laboratory environment.
   Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory.

5. Component and/or breadboard validation in relevant environment.
   Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components.

6. System/subsystem model or prototype demonstration in a relevant environment.
   Representative model or prototype system, which is well beyond the breadboard tested for level 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.

7. System prototype demonstration in an operational environment.
   Prototype near or at planned operational system. Represents a major step up from level 6, requiring the demonstration of an actual system prototype in an operational environment. Examples include testing the prototype in a test bed aircraft.

8. Actual system completed and qualified through test and demonstration.
   Technology has been proven to work in its final form and under expected conditions. In almost all cases, this level represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

9. Actual system proven through successful mission operations.
   Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. Examples include using the system under operational mission conditions.
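The readiness levels above can be captured as a simple lookup table. The sketch below is illustrative: the dictionary entries abbreviate the level names from the table, and the checking function and its threshold are invented for this example rather than drawn from any official decision rule.

```python
# Technology Readiness Levels from the table above, as a lookup table.
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental critical function and/or "
       "characteristic proof of concept",
    4: "Component and/or breadboard validation in laboratory environment",
    5: "Component and/or breadboard validation in relevant environment",
    6: "System/subsystem model or prototype demonstration in a "
       "relevant environment",
    7: "System prototype demonstration in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system proven through successful mission operations",
}

def relevant_environment_demonstrated(level: int) -> bool:
    """Illustrative check: per the table, TRL 6 is the first level at
    which a model or prototype has been demonstrated in a relevant
    environment, the kind of maturity the chapter associates with
    readiness for system-level integration work."""
    return level >= 6
```

A program office assessing candidate technologies could index such a table when summarizing maturity, though in practice TRL assignment is a judgment made against the full level descriptions.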
SUPPLEMENT 2-B
EVOLUTIONARY ACQUISITION CONSIDERATIONS
The evolutionary approach to defense acquisition is the simple recognition that systems evolve as a result of changing user needs, technological opportunities, and knowledge gained in operation. Evolutionary acquisition is not new to military systems. No naval ship in a class is the same; aircraft and vehicles have block changes designed to improve the design; variants of systems perform different missions; satellites have evolutionary improvements between the first and last launched; and due to fast-evolving technology, computer resources and software systems are in constant evolution.

As shown by Figure 2-8, evolutionary acquisition starts with the development and delivery of a core capability. As knowledge is gained through system use and as technology changes, the system is evolved to a more useful or effective product. At the beginning of an evolutionary acquisition the ultimate user need is understood in general terms, but a core need that has immediate utility can be well-defined. Because future events will affect the eventual form of the product, the requirements cannot be fully defined at program initiation. However, the evolutionary development must be accomplished in a management system that demands
Figure 2-8. Evolutionary Acquisition
[Figure 2-8 diagram: requirements analysis (general for the system, specific for the core), a concept of operations, and a preliminary system architecture lead to define-develop-operationally test cycles for the core and then Block A, continuing as required; user feedback, technology opportunities, and the evolving threat drive refined and updated requirements under a flexible/incremental ORD, TEMP, etc.]
requirements validation, fully funded budgets, and rigorous review. In addition, the systems engineering function remains responsible for controlling requirements traceability and configuration control in the absence of complete definition of all requirements or final configurations. These constraints and concerns require that the evolutionary approach be accomplished in a manner such that the various concerns of users, developers, and managers are adequately addressed, while the risks associated with these issues are mitigated.
Acquisition Management
Acquisition management requirements in the DoD 5000 documents and associated component regulations or instructions establish a series of program-specific analyses, reports, and decision documents that support the milestone decision process. In addition, prior to decision points in the acquisition process, substantial coordination is required with an array of stakeholders. This process is resource-consuming but necessary to establish the program's validity in the eyes of those responsible to approve the public resources committed to the program.
Evolutionary acquisition, by its nature, represents an “acquisition within an acquisition.” On one level, the engineering manager is confronted with the management and control of the system as it progresses to its eventual final configuration, and, on another level, there is the management and control of the modifications, or blocks, that are successively integrated into the system as they are developed. The system has associated requirements, baselines, and reviews, the normal elements of a system acquisition; however, each block also has specified requirements, configuration, and management activities. The challenge for technical management then becomes to ensure that good technical management principles are applied to the development of each block, while simultaneously ensuring that the definition and control of requirements and baselines at the system level include and accommodate the evolving architecture.
System Engineering Concerns
Evolutionary acquisition will require incremental and parallel development activities. These activities are developing evolutionary designs that represent a modification as well as an evolved system. The evolutionary upgrade is developed as a modification, but the new evolved system must be evaluated and verified as a system with new, evolved requirements. This implies that, though we can enter the acquisition process at any point, the basic baselining process required by systems engineering must somehow be satisfied for each block upgrade to assure requirements traceability and configuration control.
As shown by Figure 2-9, incremental delivery of capability can be the result of an evolutionary block upgrade or be an incremental release of capability within the approved program (or current evolutionary block) baseline. System engineering is concerned with both. There is no checklist approach to structure these relationships, but the following is presented to provide some general guidance in a difficult and complex area of acquisition management planning and implementation.
Evolutionary upgrades may be based on known operational requirements where delivery of the capability is incremental due to immediate operational need, continuing refinement of the product baseline prior to full operational capability, and pre-planned parallel developments. If the modification is only at the allocated or product baseline, and the program’s approved performance, cost, and schedule is not impacted, then the system would not necessarily require the management approvals and milestones normal to the acquisition process.
In all cases, the key to maintaining a proper systems engineering effort is to assure that architectures and configuration baselines used for evolutionary development can be upgraded with minimal impact to documented and demonstrated configurations. The risk associated with this issue can be significantly reduced through program planning that addresses optimization of the acquisition baseline and control of the evolving configuration.
Chapter 2 Systems Engineering Management in DoD Acquisition
Planning
Evolutionary acquisition program planning must clearly define how the core and evolutionary blocks will be structured, including:

1. A clear description of an operationally suitable core system, including identification of subsystems and components most likely to evolve.

2. Establishment of a process for obtaining, evaluating and integrating operational feedback, technology advancements, and emerging commercial products.

3. Planning for evolutionary block upgrade evaluation, requirements validation, and program initiation.

4. Description of the management approach for evolutionary upgrades within a block and the constraints and controls associated with incremental delivery of capability.

5. Risk analysis of the developmental approach, both technical and managerial.
Systems engineering planning should emphasize:

1. The openness and modularity of the design of the core system architecture in order to facilitate modification and upgrades,

2. How baseline documentation is structured to improve flexibility for upgrade,

3. How evolutionary acquisition planning impacts baseline development and documentation control,

4. How technical reviews will be structured to best support the acquisition decision points, and

5. How risk management will monitor and control the management and technical complexity introduced by evolutionary development.
The basic system architecture should be designed to accommodate change. Techniques such as open architecting, functional partitioning, modular design, and open system design (all described later in this book) are key to planning for a flexible system that can be easily and affordably modified.
Figure 2-9. Incremental Release Within Evolutionary Blocks
[Diagram: the core is followed by evolutionary Blocks A and B. Within Block A, capability is fielded as a first Block A release (related to Block A IOC), Releases 2 and 3, a release related to P3I, and a final Block A release; the pattern repeats for later blocks.]
Example
Table 2-1 illustrates some of the relationships discussed above as they might apply to a Major Automated Information System (MAIS) program. Due to the nature of complex software development, a MAIS acquisition inevitably will be an evolutionary acquisition. In the notional MAIS shown in the table, management control is primarily defined for capstone, program, subsystem or incremental delivery, and supporting program levels. The table provides relationships showing how key acquisition and system engineering activities correlate in the evolutionary environment. Probably the most important lesson of Table 2-1 is that these relationships are complex and, if they are not planned for properly, they will present a significant risk to the program.
Table 2-1. Evolutionary Acquisition Relationships
Notional Example of Evolutionary MAIS Acquisition Relationships

Characterization | System Level | Acquisition Program Level | Acquisition Documentation | Required Baseline | CM Authority
Overall Need | Major Program or Business Area | Capstone or Sub-Portfolio | Capstone Acquisition Documentation | Top-Level Functional Baseline | PMO
Core and Evolutionary Blocks | Build or Block of Major Program | Acquisition Program | Full Program Documentation | Cumulative Functional and Allocated Baseline | PMO with Contractor Support
Incremental Delivery of Capability | Release or Version of Block | Internal to Acquisition Program | Separate Acquisition Documentation Not Required | Product Baseline | Contractor (Must Meet Allocated Baseline)
Associated Product Improvements | Application or Bridge | Parallel Product Improvement (Less than MAIS) | Component or Lower Decision Level Acquisition Processing | Functional, Allocated, and Product Baselines | PMO/Contractor
Summary
Acquisition oversight is directly related to the performance, cost, and schedule defined in the acquisition baseline. It establishes the approved scope of the developmental effort. Evolutionary development that exceeds the boundaries established by the acquisition baseline requires a new or revised acquisition review process with additional oversight requirements. The development and approval of the ORD and Acquisition Program Baseline are key activities that must structure an evolutionary process that provides for user and oversight needs, budgetary control, requirements traceability, risk mitigation, and configuration management.
Chapter 3 Systems Engineering Process Overview
PART 2
THE SYSTEMS ENGINEERING PROCESS
Systems Engineering Fundamentals Chapter 3
CHAPTER 3

SYSTEMS ENGINEERING PROCESS OVERVIEW

3.1 THE PROCESS

The Systems Engineering Process (SEP) is a comprehensive, iterative and recursive problem-solving process, applied sequentially top-down by integrated teams. It transforms needs and requirements into a set of system product and process descriptions, generates information for decision makers, and provides input for the next level of development. The process is applied sequentially, one level at a time, adding additional detail and definition with each level of development. As shown by Figure 3-1, the process includes: inputs and outputs; requirements analysis; functional analysis and allocation; requirements loop; synthesis; design loop; verification; and system analysis and control.

Figure 3-1. The Systems Engineering Process
[Diagram: process input flows into requirements analysis; functional analysis/allocation follows, tied back to requirements analysis by the requirements loop; synthesis follows, tied back to functional analysis/allocation by the design loop; verification ties synthesis back to the requirements; and system analysis and control (balance) spans all steps, producing the process output.

Process Input: customer needs/objectives/requirements (missions, measures of effectiveness, environments, constraints); technology base; output requirements from prior development effort; program decision requirements; requirements applied through specifications and standards.

Requirements Analysis: analyze missions and environments; identify functional requirements; define/refine performance and design constraint requirements.

Functional Analysis/Allocation: decompose to lower-level functions; allocate performance and other limiting requirements to all functional levels; define/refine functional interfaces (internal/external); define/refine/integrate functional architecture.

Synthesis: transform architectures (functional to physical); define alternative system concepts, configuration items and system elements; select preferred product and process solutions; define/refine physical interfaces (internal/external).

System Analysis and Control: trade-off studies; effectiveness analyses; risk management; configuration management; interface management; data management; performance measurement (SEMS, TPM, technical reviews).

Process Output: development level dependent (decision database; system/configuration item architecture; specifications and baselines).

Related terms: Customer = organizations responsible for primary functions. Primary functions = development, production/construction, verification, deployment, operations, support, training, disposal. System elements = hardware, software, personnel, facilities, data, material, services, techniques.]

Systems Engineering Process Inputs

Inputs consist primarily of the customer’s needs, objectives, requirements and project constraints.
Inputs can include, but are not restricted to, missions, measures of effectiveness, environments, available technology base, output requirements from prior application of the systems engineering process, program decision requirements, and requirements based on “corporate knowledge.”
Requirements Analysis
The first step of the Systems Engineering Process is to analyze the process inputs. Requirements analysis is used to develop functional and performance requirements; that is, customer requirements are translated into a set of requirements that define what the system must do and how well it must perform. The systems engineer must ensure that the requirements are understandable, unambiguous, comprehensive, complete, and concise.
Requirements analysis must clarify and define functional requirements and design constraints. Functional requirements define quantity (how many), quality (how good), coverage (how far), time lines (when and how long), and availability (how often). Design constraints define those factors that limit design flexibility, such as: environmental conditions or limits; defense against internal or external threats; and contract, customer or regulatory standards.
Functional Analysis/Allocation
Functions are analyzed by decomposing higher-level functions identified through requirements analysis into lower-level functions. The performance requirements associated with the higher level are allocated to lower functions. The result is a description of the product or item in terms of what it does logically and in terms of the performance required. This description is often called the functional architecture of the product or item. Functional analysis and allocation allows for a better understanding of what the system has to do, in what ways it can do it, and to some extent, the priorities and conflicts associated with lower-level functions. It provides information essential to optimizing physical solutions. Key tools in functional analysis and allocation are Functional Flow Block Diagrams, Time Line Analysis, and the Requirements Allocation Sheet.
Requirements Loop
Performance of the functional analysis and allocation results in a better understanding of the requirements and should prompt reconsideration of the requirements analysis. Each function identified should be traceable back to a requirement. This iterative process of revisiting requirements analysis as a result of functional analysis and allocation is referred to as the requirements loop.
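The traceability check at the heart of the requirements loop can be sketched as a simple mapping from identified functions back to their parent requirements. This is only an illustrative sketch; the function names and requirement IDs below are hypothetical, not from the text.

```python
# Map each identified function to the requirement(s) it traces back to.
trace = {
    "F1 Detect target": ["R1"],
    "F1.1 Scan sector": ["R1"],
    "F2 Track target": ["R2"],
    "F3 Log telemetry": [],  # no parent requirement: revisit requirements analysis
}

def untraceable(trace_map):
    """Return functions that cannot be traced back to any requirement."""
    return [func for func, reqs in trace_map.items() if not reqs]

print(untraceable(trace))  # -> ['F3 Log telemetry']
```

Any function the check flags is a cue to iterate: either a requirement was missed during requirements analysis, or the function does not belong in the functional architecture.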
Design Synthesis
Design synthesis is the process of defining the product or item in terms of the physical and software elements which together make up and define the item. The result is often referred to as the physical architecture. Each part must meet at least one functional requirement, and any part may support many functions. The physical architecture is the basic structure for generating the specifications and baselines.
Design Loop
Similar to the requirements loop described above, the design loop is the process of revisiting the functional architecture to verify that the physical design synthesized can perform the required functions at required levels of performance. The design loop permits reconsideration of how the system will perform its mission, and this helps optimize the synthesized design.
Verification
For each application of the system engineering process, the solution will be compared to the requirements. This part of the process is called the verification loop, or more commonly, Verification. Each requirement at each level of development must be verifiable. Baseline documentation developed during the systems engineering process must establish the method of verification for each requirement.
Appropriate methods of verification include examination, demonstration, analysis (including modeling and simulation), and testing. Formal test and evaluation (both developmental and operational) are important contributors to the verification of systems.
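One way to record the required verification method for each requirement, as the baseline documentation must, is a simple table keyed by requirement. The sketch below is hypothetical: the requirement statements are invented, and only the four method names come from the text.

```python
# The four verification methods named in the text.
METHODS = {"examination", "demonstration", "analysis", "test"}

# Hypothetical requirement-to-method assignments.
verification = {
    "R1 Range >= 50 km": "test",
    "R2 MTBF >= 1000 h": "analysis",
    "R3 Crew of two operators": "examination",
    "R4 Survives transport shock": None,  # method not yet assigned
}

# Flag requirements whose verification method is missing or invalid.
missing = [req for req, method in verification.items() if method not in METHODS]
print(missing)  # -> ['R4 Survives transport shock']
```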
Systems Analysis and Control
Systems Analysis and Control include technical management activities required to measure progress, evaluate and select alternatives, and document data and decisions. These activities apply to all steps of the systems engineering process.
System analysis activities include trade-off studies, effectiveness analyses, and design analyses. They evaluate alternative approaches to satisfy technical requirements and program objectives, and provide a rigorous quantitative basis for selecting performance, functional, and design requirements. Tools used to provide input to analysis activities include modeling, simulation, experimentation, and test.
Control activities include risk management, configuration management, data management, and performance-based progress measurement, including event-based scheduling, Technical Performance Measurement (TPM), and technical reviews.
The purpose of Systems Analysis and Control is to ensure that:
• Solution alternative decisions are made only after evaluating the impact on system effectiveness, life cycle resources, risk, and customer requirements;

• Technical decisions and specification requirements are based on systems engineering outputs;

• Traceability from systems engineering process inputs to outputs is maintained;

• Schedules for development and delivery are mutually supportive;

• Required technical disciplines are integrated into the systems engineering effort;

• Impacts of customer requirements on resulting functional and performance requirements are examined for validity, consistency, desirability, and attainability; and

• Product and process design requirements are directly traceable to the functional and performance requirements they were designed to fulfill, and vice versa.
Systems Engineering Process Output
Process output is dependent on the level of development. It will include the decision database, the system or configuration item architecture, and the baselines, including specifications, appropriate to the phase of development. In general, it is any data that describes or controls the product configuration or the processes necessary to develop that product.
3.2 SUMMARY POINTS
• The system engineering process is the engine that drives the balanced development of system products and processes applied to each level of development, one level at a time.
• The process provides an increasing level of descriptive detail of products and processes with each system engineering process application. The output of each application is the input to the next process application.
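The second summary point, that each application's output becomes the next application's input, can be sketched as a loop. This is a deliberately trivial stand-in: the `apply_sep` function below merely tags names with a level number, where the real process would perform a full cycle of requirements analysis, functional analysis, and synthesis.

```python
def apply_sep(requirements, level):
    """One application of the SE process: adds one level of detail (stub)."""
    return [f"{req}.{level}" for req in requirements]  # placeholder refinement

design = ["SYS"]
for level in range(1, 4):              # system -> subsystem -> component ...
    design = apply_sep(design, level)  # this level's output feeds the next

print(design)  # -> ['SYS.1.2.3']
```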
Chapter 4 Requirements Analysis
CHAPTER 4

REQUIREMENTS ANALYSIS

4.1 SYSTEMS ENGINEERING PROCESS INPUTS

The inputs to the process include the customer’s requirements and the project constraints. Requirements relate directly to the performance characteristics of the system being designed. They are the stated life-cycle customer needs and objectives for the system, and they relate to how well the system will work in its intended environment.

Constraints are conditions that exist because of limitations imposed by external interfaces, project support, technology, or life cycle support systems. Constraints bound the development teams’ design opportunities.

Requirements are the primary focus in the systems engineering process because the process’s primary purpose is to transform the requirements into designs. The process develops these designs within the constraints. They eventually must be verified to meet both the requirements and constraints.

Types of Requirements

Requirements are categorized in several ways. The following are common categorizations of requirements that relate to technical management:

Customer Requirements: Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering (Chapter 1), with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer the questions posed in Figure 4-1.

Figure 4-1. Operational Requirements – Basic Questions

Operational distribution or deployment: Where will the system be used?

Mission profile or scenario: How will the system accomplish its mission objective?

Performance and related parameters: What are the critical system parameters to accomplish the mission?

Utilization environments: How are the various system components to be used?

Effectiveness requirements: How effective or efficient must the system be in performing its mission?

Operational life cycle: How long will the system be in use by the user?

Environment: In what environments will the system be expected to operate in an effective manner?
Systems Engineering Fundamentals Chapter 4
Functional Requirements: The necessary task, action or activity that must be accomplished. Functional (what has to be done) requirements identified in requirements analysis will be used as the top-level functions for functional analysis.

Performance Requirements: The extent to which a mission or function must be executed; generally measured in terms of quantity, quality, coverage, timeliness or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors, and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.

Design Requirements: The “build to,” “code to,” and “buy to” requirements for products and “how to execute” requirements for processes expressed in technical data packages and technical manuals.

Derived Requirements: Requirements that are implied or transformed from higher-level requirements. For example, a requirement for long range or high speed may result in a design requirement for low weight.

Allocated Requirements: A requirement that is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: A 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items.
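The 100-pound allocation example lends itself to a quick consistency check: allocated lower-level requirements should add up to, or stay within, the higher-level requirement they were divided from. The subsystem names below are illustrative.

```python
# System-level weight requirement from the text's example (pounds).
parent_weight = 100.0

# Hypothetical allocation to the two lower-level items.
allocated = {"Subsystem A": 70.0, "Subsystem B": 30.0}

# The allocated budgets must not exceed the parent requirement.
total = sum(allocated.values())
assert total <= parent_weight, "allocation exceeds the parent requirement"
print(total)  # -> 100.0
```

The same check generalizes to any budgeted quantity (power, cost, timing) allocated down the requirements hierarchy.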
Attributes of Good Requirements
The attributes of good requirements include the following:

• A requirement must be achievable. It must reflect a need or objective for which a solution is technically achievable at costs considered affordable.

• It must be verifiable; that is, not defined by words such as excessive, sufficient, resistant, etc. The expected performance and functional utility must be expressed in a manner that allows verification to be objective, preferably quantitative.

• A requirement must be unambiguous. It must have but one possible meaning.

• It must be complete and contain all mission profiles, operational and maintenance concepts, utilization environments and constraints. All information necessary to understand the customer’s need must be there.

• It must be expressed in terms of need, not solution; that is, it should address the “why” and “what” of the need, not how to do it.

• It must be consistent with other requirements. Conflicts must be resolved up front.

• It must be appropriate for the level of system hierarchy. It should not be so detailed that it constrains solutions for the current level of design. For example, detailed requirements relating to components would not normally be in a system-level specification.
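A crude screen for the verifiability attribute might flag requirement statements that use unverifiable words such as those listed above. The word list and the sample requirements below are illustrative only, not an official rule set.

```python
# Words that usually signal an unverifiable requirement (illustrative list).
VAGUE = ("excessive", "sufficient", "resistant", "adequate", "user-friendly")

def flag_unverifiable(statements):
    """Return statements containing any vague, unverifiable word."""
    return [s for s in statements if any(w in s.lower() for w in VAGUE)]

reqs = [
    "The radio shall provide sufficient range.",  # not verifiable as written
    "The radio shall provide a range of 50 km.",  # quantified, verifiable
]
print(flag_unverifiable(reqs))  # -> ['The radio shall provide sufficient range.']
```

A flagged statement is a prompt to restate the requirement quantitatively, not an automatic rejection; real verifiability judgments still need an engineer.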
4.2 REQUIREMENTS ANALYSIS
Requirements analysis involves defining customer needs and objectives in the context of planned customer use, environments, and identified system characteristics to determine requirements for system functions. Prior analyses are reviewed and updated, refining mission and environment definitions to support system definition.
Requirements analysis is conducted iteratively with functional analysis to optimize performance requirements for identified functions, and to verify that synthesized solutions can satisfy customer requirements. The purpose of Requirements Analysis is to:
Figure 4-2. Inputs to Requirements Analysis

• Inputs transformed into outputs:
– Customer requirements
– Mission and MOEs (MNS, ORD)
– Maintenance concept and other life-cycle function planning
– SE outputs from prior development efforts

• Controls:
– Laws and organizational policies and procedures
– Military specific requirements
– Utilization environments
– Tech base and other constraints

• Enablers:
– Multi-disciplinary product teams
– Decision and requirements database, including system/configuration item descriptions from prior efforts
– System analysis and control
• Refine customer objectives and requirements;

• Define initial performance objectives and refine them into requirements;

• Identify and define constraints that limit solutions; and

• Define functional and performance requirements based on customer-provided measures of effectiveness.
In general, Requirements Analysis should result in a clear understanding of:
• Functions: What the system has to do,

• Performance: How well the functions have to be performed,

• Interfaces: The environment in which the system will perform, and

• Other requirements and constraints.
The understandings that come from requirements analysis establish the basis for the functional and physical designs to follow. Good requirements analysis is fundamental to successful design definition.
Inputs
Typical inputs include customer needs and objectives, missions, MOE/MOS, environments, key performance parameters (KPPs), technology base, output requirements from prior application of the SEP, program decision requirements, and suitability requirements. (See Figure 4-2 for additional considerations.)
Input requirements must be comprehensive and defined for both system products and system processes such as development, manufacturing, verification, deployment, operations, support, training and disposal (the eight primary functions).
Role of Integrated Teams
The operator customers have expertise in the operational employment of the product or item being developed. The developers (government and contractor) are not necessarily competent in the operational aspects of the system under development. Typically, the operator’s need is neither clearly nor completely expressed in a way directly
usable by developers. It is unlikely that developers will receive a well-defined problem from which they can develop the system specification. Thus, teamwork is necessary to understand the problem and to analyze the need. It is imperative that customers are part of the definition team.
On the other hand, customers often find it easier to describe a system that attempts to solve the problem rather than to describe the problem itself. Although these “solutions” may be workable to some extent, the optimum solution is obtained through a proper technical development effort that properly balances the various customer mission objectives, functions, MOE/MOS, and constraints. An integrated approach to product and process development will balance the analysis of requirements by providing understanding and accommodation among the eight primary functions.
Requirements Analysis Questions
Requirements Analysis is a process of inquiry and resolution. The following are typical questions that can initiate the thought process:
• What are the reasons behind the system development?

• What are the customer expectations?

• Who are the users and how do they intend to use the product?

• What do the users expect of the product?

• What is their level of expertise?

• With what environmental characteristics must the system comply?

• What are existing and planned interfaces?

• What functions will the system perform, expressed in customer language?

• What are the constraints (hardware, software, economic, procedural) to which the system must comply?

• What will be the final form of the product: such as model, prototype, or mass production?
This list can start the critical, inquisitive outlook necessary to analyze requirements, but it is only the beginning. A tailored process similar to the one at the end of this chapter must be developed to produce the necessary requirements analysis outputs.
4.3 REQUIREMENTS ANALYSIS OUTPUTS
The requirements that result from requirements analysis are typically expressed from one of three perspectives or views. These have been described as the Operational, Functional, and Physical views. All three are necessary and must be coordinated to fully understand the customers’ needs and objectives. All three are documented in the decision database.
Operational View
The Operational View addresses how the system will serve its users. It is useful when establishing requirements of “how well” and “under what condition.” Operational view information should be documented in an operational concept document that identifies:
• Operational need definition,
• System mission analysis,
• Operational sequences,
• Operational environments,
• Conditions/events to which a system must respond,
• Operational constraints on system,
• Mission performance requirements,
• User and maintainer roles (defined by job tasks and skill requirements or constraints),
• Structure of the organizations that will operate, support and maintain the system, and
• Operational interfaces with other systems.
Analyzing requirements requires understanding the operational and other life cycle needs and constraints.
Functional View
The Functional View focuses on WHAT the system must do to produce the required operational behavior. It includes required inputs, outputs, states, and transformation rules. The functional requirements, in combination with the physical requirements shown below, are the primary sources of the requirements that will eventually be reflected in the system specification. Functional View information includes:
• System functions,
• System performance:
– Qualitative — how well
– Quantitative — how much, capacity
– Timeliness — how often
• Tasks or actions to be performed,
• Inter-function relationships,
• Hardware and software functional relationships,
• Performance constraints,
• Interface requirements, including identification of potential open-system opportunities (potential standards that could promote open systems should be identified),
• Unique hardware or software, and
• Verification requirements (to include inspection, analysis/simulation, demonstration, and test).
Physical View
The Physical View focuses on HOW the system is constructed. It is key to establishing the physical interfaces among operators and equipment, and technology requirements. Physical View information would normally include:
• Configuration of System:
– Interface descriptions,
– Characteristics of information displays and operator controls,
– Relationships of operators to system/physical equipment, and
– Operator skills and levels required to perform assigned functions.
• Characterization of Users:
– Handicaps (special operating environments), and
– Constraints (movement or visual limitations).
• System Physical Limitations:
– Physical limitations (capacity, power, size, weight),
– Technology limitations (range, precision, data rates, frequency, language),
– Government Furnished Equipment (GFE), Commercial-Off-the-Shelf (COTS), Nondevelopmental Item (NDI), and reusability requirements, and
– Necessary or directed standards.
4.4 SUMMARY POINTS
• An initial statement of a need is seldom defined clearly.
• A significant amount of collaboration between various life cycle customers is necessary to produce an acceptable requirements document.
• Requirements are a statement of the problem to be solved. Unconstrained and nonintegrated requirements are seldom sufficient for designing a solution.
• Because requirements from different customers will conflict, constraints will limit options, and resources are not unlimited, trade studies must be accomplished in order to select a balanced set of requirements that provide feasible solutions to customer needs.
SUPPLEMENT 4-A

A PROCEDURE FOR REQUIREMENTS ANALYSIS

The following section provides a list of tasks that represents a plan to analyze requirements. Part of this notional process is based on the 15 requirements analysis tasks listed in IEEE P1220. This industry standard and others should be consulted when preparing engineering activities to help identify and structure appropriate activities.

As with all techniques, the student should be careful to tailor, that is, add or subtract, as suits the particular system being developed. Additionally, these tasks, though they build on each other, should not be considered purely sequential. Every task contributes understanding that may cause a need to revisit previous task decisions. This is the nature of all Systems Engineering activities.

Preparation: Establish and Maintain Decision Database

When beginning a systems engineering process, be sure that a system is in place to record and manage the decision database. The decision database is an historical database of technical decisions and requirements for future reference. It is the primary means for maintaining requirements traceability. This database decision management system must be developed, or the existing system must be reviewed and upgraded as necessary, to accommodate the new stage of product development. A key part of this database management system is a Requirements Traceability Matrix that maps requirements to subsystems, configuration items, and functional areas.

This matrix must be developed, updated, and reissued on a regular basis. All requirements must be recorded. Remember: if it is not recorded, it cannot be an approved requirement!

The 15 Tasks of IEEE P1220

The IEEE Systems Engineering Standard offers a process for performing Requirements Analysis that comprehensively identifies the important tasks that must be performed. These 15 task areas to be analyzed follow and are shown in Figure 4-3.

Figure 4-3. IEEE P1220 Requirements Analysis Task Areas

1. Customer expectations
2. Project and enterprise constraints
3. External constraints
4. Operational scenarios
5. Measures of effectiveness (MOEs)
6. System boundaries
7. Interfaces
8. Utilization environments
9. Life cycle
10. Functional requirements
11. Performance requirements
12. Modes of operation
13. Technical performance measures
14. Physical characteristics
15. Human systems integration
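The Requirements Traceability Matrix described in the preparation step can be sketched as a simple data structure. This is a minimal illustration only; the requirement IDs, subsystem names, and methods below are invented for the example, not taken from the text or any standard tool.

```python
# Minimal sketch of a Requirements Traceability Matrix (RTM): each recorded
# requirement is mapped to the subsystems, configuration items (CIs), and
# functional areas that satisfy it. All identifiers are hypothetical.

class TraceabilityMatrix:
    def __init__(self):
        self.rows = {}  # requirement id -> mapping dict

    def record(self, req_id, subsystems, config_items, functions):
        # "If it is not recorded, it cannot be an approved requirement!"
        self.rows[req_id] = {
            "subsystems": set(subsystems),
            "config_items": set(config_items),
            "functions": set(functions),
        }

    def unallocated(self):
        # Requirements not yet traced to any configuration item — candidate
        # disconnects to resolve before the next review.
        return sorted(r for r, m in self.rows.items() if not m["config_items"])

rtm = TraceabilityMatrix()
rtm.record("REQ-001", ["propulsion"], ["CI-12"], ["3.1.6"])
rtm.record("REQ-002", ["guidance"], [], ["3.1.8"])
print(rtm.unallocated())  # REQ-002 has no configuration item allocated yet
```

Querying for unallocated requirements is one way such a matrix supports the traceability audits discussed throughout this chapter.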
Task 1. Customer Expectations
Define and quantify customer expectations. They may come from any of the eight primary functions, operational requirements documents, mission needs, technology-based opportunities, direct communications with the customer, or requirements from a higher system level. The purpose of this task is to determine what the customer wants the system to accomplish, and how well each function must be accomplished. This should include the natural and induced environments in which the product(s) of the system must operate or be used, and constraints (e.g., funding, cost or price objectives, schedule, technology, nondevelopmental and reusable items, physical characteristics, hours of operation per day, on-off sequences, etc.).
Task 2. Project and Enterprise Constraints
Identify and define constraints impacting design solutions. Project-specific constraints can include:
• Approved specifications and baselines developed from prior applications of the Systems Engineering Process,
• Costs,
• Updated technical and project plans,
• Team assignments and structure,
• Control mechanisms, and
• Required metrics for measuring progress.
Enterprise constraints can include:
• Management decisions from a preceding technical review,
• Enterprise general specifications,
• Standards or guidelines,
• Policies and procedures,
• Domain technologies, and
• Physical, financial, and human resource allocations to the project.
Task 3. External Constraints
Identify and define external constraints impacting design solutions or implementation of the Systems Engineering Process activities. External constraints can include:
• Public and international laws and regulations,
• Technology base,
• Compliance requirements: industry, international, and other general specifications, standards, and guidelines which require compliance for legal, interoperability, or other reasons,
• Threat system capabilities, and
• Capabilities of interfacing systems.
Task 4. Operational Scenarios
Identify and define operational scenarios that scope the anticipated uses of system product(s). For each operational scenario, define the expected:
• Interactions with the environment and other systems, and

• Physical interconnectivities with interfacing systems, platforms, or products.
Task 5. Measures of Effectiveness and Suitability (MOE/MOS)

Identify and define system effectiveness measures that reflect overall customer expectations and satisfaction. MOEs are related to how well the system must perform the customer's mission; key MOEs include mission performance, safety, operability, reliability, etc. MOSs are related to how well the system performs in its intended environment and include measures of supportability, maintainability, ease of use, etc.
Task 6. System Boundaries
Define system boundaries including:
• Which system elements are under design control of the performing activity and which fall outside of their control, and

• The expected interactions among system elements under design control and external and/or higher-level and interacting systems outside the system boundary (including open systems approaches).
Task 7. Interfaces
Define the functional and physical interfaces to external or higher-level and interacting systems, platforms, and/or products in quantitative terms (include open systems approach). Functional and physical interfaces would include mechanical, electrical, thermal, data, control, procedural, and other interactions. Interfaces may also be considered from an internal/external perspective. Internal interfaces are those that address elements inside the boundaries established for the system addressed. These interfaces are generally identified and controlled by the contractor responsible for developing the system. External interfaces, on the other hand, are those which involve entity relationships outside the established boundaries, and these are typically defined and controlled by the government.
Task 8. Utilization Environments
Define the environments for each operational scenario. All environmental factors (natural or induced) which may impact system performance must be identified and defined. Environmental factors include:

• Weather conditions (e.g., rain, snow, sun, wind, ice, dust, fog),

• Temperature ranges,

• Topologies (e.g., ocean, mountains, deserts, plains, vegetation),

• Biological (e.g., animals, insects, birds, fungi),

• Time (e.g., dawn, day, night, dusk), and

• Induced (e.g., vibration, electromagnetic, chemical).
Task 9. Life Cycle Process Concepts
Analyze the outputs of tasks 1-8 to define key life cycle process requirements necessary to develop, produce, test, distribute, operate, support, train, and dispose of system products under development. Use integrated teams representing the eight primary functions. Focus should be on the cost drivers and higher-risk elements that are anticipated to impact supportability and affordability over the useful life of the system.
Task 10. Functional Requirements
Define what the system must accomplish or must be able to do. Functions identified through requirements analysis will be further decomposed during functional analysis and allocation.
Task 11. Performance Requirements
Define the performance requirements for each higher-level function performed by the system. Primary focus should be placed on performance requirements that address the MOEs and other KPPs established in test plans or identified as interest items by oversight authorities.
Task 12. Modes of Operation
Define the various modes of operation for the system products under development. Conditions (e.g., environmental, configuration, operational, etc.) that determine the modes of operation should be included in this definition.
Task 13. Technical Performance Measures (TPMs)

Identify the key indicators of system performance that will be tracked during the design process. Selection of TPMs should be limited to critical
technical thresholds and goals that, if not met, put the project at cost, schedule, or performance risk. TPMs involve tracking the actual versus planned progress of KPPs such that the manager can make judgments about technical progress on a by-exception basis. To some extent TPM selection is phase dependent. They must be reconsidered at each systems engineering process step and at the beginning of each phase.
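By-exception TPM tracking can be sketched as a comparison of measured values against planned thresholds, flagging only the parameters that breach tolerance. The parameter names, thresholds, and values below are invented for illustration.

```python
# Sketch of by-exception technical performance measurement: each TPM carries
# a goal, a threshold, and a current measured value. Only out-of-tolerance
# items are surfaced to the manager. All numbers are hypothetical.

def tpm_exceptions(tpms):
    """Return the names of TPMs whose current value breaches the threshold."""
    flagged = []
    for name, (goal, threshold, actual, higher_is_better) in tpms.items():
        ok = actual >= threshold if higher_is_better else actual <= threshold
        if not ok:
            flagged.append(name)
    return flagged

tpms = {
    # name: (goal, threshold, current actual, higher_is_better)
    "range_km":  (50.0, 45.0, 47.0, True),       # within tolerance
    "weight_kg": (900.0, 1000.0, 1040.0, False),  # over the limit -> flagged
}
print(tpm_exceptions(tpms))  # → ['weight_kg']
```

Only the flagged items need management attention; everything else tracks silently, which is the point of by-exception reporting.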
Task 14. Physical Characteristics
Identify and define required physical characteristics (e.g., color, texture, size, weight, buoyancy) for the system products under development. Identify which physical characteristics are true constraints and which can be changed, based on trade studies.
Task 15. Human Factors
Identify and define human factors considerations (e.g., physical space limits, climatic limits, eye movement, reach, ergonomics) which will affect operation of the system products under development. Identify which human systems integration considerations are constraints and which can be changed based on trade studies.
Follow-on Tasks
The follow-on tasks are related to the iterative nature of the Systems Engineering Process:
Integrate Requirements:
Take an integrated team approach to requirements determination so that conflicts among and between requirements are resolved in ways that result in design requirements that are balanced in terms of both risk and affordability.
Validate Requirements:
During Functional Analysis and Allocation, validate that the derived functional and performance requirements can be traced to the operational requirements.
Verify Requirements:
• Coordinate design, manufacturing, deployment, and test processes,

• Ensure that requirements are achievable and testable,

• Verify that the design-to-cost goals are achievable, and

• Verify that the functional and physical architectures defined during Functional Analysis and Allocation and Synthesis meet the integrated technical, cost, and schedule requirements within acceptable levels of risk.
CHAPTER 5
FUNCTIONAL ANALYSIS AND ALLOCATION
5.1 INTRODUCTION

The purpose of this systems engineering process activity is to transform the functional, performance, interface, and other requirements that were identified through requirements analysis into a coherent description of system functions that can be used to guide the Design Synthesis activity that follows. The designer will need to know what the system must do, how well, and what constraints will limit design flexibility.

This is accomplished by arranging functions in logical sequences, decomposing higher-level functions into lower-level functions, and allocating performance from higher- to lower-level functions. The tools used include functional flow block diagrams and timeline analysis, and the product is a functional architecture, i.e., a description of the system in terms of functions and performance parameters, rather than a physical description. Functional Analysis and Allocation facilitates traceability from requirements to the solution descriptions that are the outcome of Design Synthesis.

Functions are discrete actions (use action verbs) necessary to achieve the system's objectives. These functions may be stated explicitly, or they may be derived from stated requirements. The functions will ultimately be performed or accomplished through use of equipment, personnel, facilities, software, or a combination.

5.2 FUNCTIONAL ANALYSIS AND ALLOCATION

Functional and performance requirements at any level in the system are developed from higher-level requirements. Functional Analysis and Allocation is repeated to define successively lower-level functional and performance requirements, thus defining architectures at ever-increasing levels of detail. System requirements are allocated and defined in sufficient detail to provide design and verification criteria to support the integrated system design.

This top-down process of translating system-level requirements into detailed functional and performance design criteria includes:

• Defining the system in functional terms, then decomposing the top-level functions into subfunctions. That is, identifying at successively lower levels what actions the system has to do,

• Translating higher-level performance requirements into detailed functional and performance design criteria or constraints. That is, identifying how well the functions have to be performed,

• Identifying and defining all internal and external functional interfaces,

• Identifying functional groupings to minimize and control interfaces (functional partitioning),

• Determining the functional characteristics of existing or directed components in the system and incorporating them in the analysis and allocation,

• Examining all life cycle functions, including the eight primary functions, as appropriate for the specific project,

• Performing trade studies to determine alternative functional approaches to meet requirements, and
Figure 5-1. Functional Analysis and Allocation

• Inputs: outputs of the Requirements Analysis

• Outputs: functional architecture and supporting detail

• Enablers: multi-discipline product teams; decision database; tools and models, such as QFD, functional flow block diagrams, IDEF, N2 charts, requirements allocation sheets, timelines, data flow diagrams, state/mode diagrams, and behavior diagrams

• Controls: constraints; GFE, COTS, and reusable software; system concept and subsystem choices; organizational procedures

• Activities: define system states and modes; define system functions and external interfaces; define functional interfaces; allocate performance requirements to functions; analyze performance; analyze timing and resources; analyze failure mode effects and criticality; define fault detection and recovery behavior; integrate functions
• Revisiting the requirements analysis step as necessary to resolve functional issues.
The objective is to identify the functional, performance, and interface design requirements; it is not to design a solution…yet!
Functional Partitioning
Functional partitioning is the process of grouping functions that logically fit with the components likely to be used, and of minimizing functional interfaces. Partitioning is performed as part of functional decomposition. It identifies logical groupings of functions that facilitate the use of modular components and open-system designs. Functional partitioning is also useful in understanding how existing equipment or components (including commercial items) will function with or within the system.
Requirements Loop
During the performance of the Functional Analysis and Allocation process, it is expected that revisiting the requirements analysis process will be necessary. This is caused by the emergence of functional issues that require re-examination of the higher-level requirements. Such issues might include directed components or standards that cause functional conflict, identification of a revised approach to functional sequencing, or, most likely, a conflict caused by mutually incompatible requirements.

Figure 5-1 gives an overview of the basic parameters of Functional Analysis and Allocation. The output of the process is the functional architecture. In its most basic form, the functional architecture is a simple hierarchical decomposition of the functions with associated performance requirements. As the architecture definition is refined and made more specific with the performance of the
Figure 5-2. Functional Architecture Example
activities listed in Figure 5-1, the functional architecture becomes more detailed and comprehensive. These activities provide a functional architecture with sufficient detail to support the Design Synthesis. They are performed with the aid of traditional tools that structure the effort and provide documentation for traceability. There are many tools available. The following are traditional tools that represent and explain the primary tasks of Functional Analysis and Allocation (several of these are defined and illustrated in the supplements at the end of this chapter):
• Functional flow block diagrams that define task sequences and relationships,

• IDEF0 diagrams that define process and data flows,

• Timeline analyses that define the time sequence of time-critical functions, and

• Requirements allocation sheets that identify allocated performance and establish traceability of performance requirements.
5.3 FUNCTIONAL ARCHITECTURE
The functional architecture is a top-down decomposition of system functional and performance requirements. The architecture will show not only the functions that have to be performed, but also the logical sequencing of the functions and the performance requirements associated with the functions. It also includes the functional description of existing and government-furnished items to be used in the system. This may require reverse engineering of these existing components.

The functional architecture produced by the Functional Analysis and Allocation process is the detailed package of documentation developed to analyze the functions and allocate performance requirements. It includes the functional flow block diagrams, timeline sheets, requirements allocation sheets, IDEF0 diagrams, and all other documentation developed to describe the functional characteristics of the system. However, there is a basic logic to the functional architecture, which in its preliminary form is presented in the example of Figure 5-2. The Functional Analysis and Allocation process would normally begin with the
Figure 5-2 shows three levels. First level (basic functional requirement): Perform Mission, with required transport requirements (50 km, 90 min) allocated from mission requirements. Second level: Transport and Communicate shown as parallel functions. Third level (decomposition of the Transport function), with performance requirements allocated to functions: Load (8 min, 0 km), Start (1 min, 0 km), Move (75 min, 50 km), Stop (1 min, 0 km), Unload (5 min, 0 km).

A simple rule: look to see if all the functions are verbs. If there is a function identified as a noun, then there is a problem with the understanding of the functions.
IPT drafting such a basic version of the architecture. This would generally give the IPT an understanding of the scope and direction of the effort.
Functional Architecture Example
The Marine Corps has a requirement to transport troops in squad-level units over a distance of 50 kilometers. Troops must be transported within 90 minutes from the time of arrival of the transport system. Constant communication is required during the transportation of troops. Figure 5-2 illustrates a preliminary functional architecture for this simple requirement.
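The performance allocation in Figure 5-2 can be checked mechanically: the times and distances allocated to the third-level transport subfunctions must add up to the mission-level allocation of 50 km in 90 minutes. A minimal sketch of that check:

```python
# Third-level decomposition of the Transport function (Figure 5-2), with
# performance requirements (minutes, kilometers) allocated to each subfunction.
transport = {
    "Load":   (8, 0),
    "Start":  (1, 0),
    "Move":   (75, 50),
    "Stop":   (1, 0),
    "Unload": (5, 0),
}

total_min = sum(t for t, _ in transport.values())
total_km = sum(d for _, d in transport.values())

# The allocations must cover the mission requirement: 50 km in 90 minutes.
assert total_min == 90 and total_km == 50
print(total_min, total_km)  # → 90 50
```

Running such a consistency check each time the allocation changes is a small, concrete form of the requirements-loop validation discussed earlier.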
5.4 SUMMARY POINTS
Functional analysis begins with the output of requirements analysis (that is, the identification of higher-level functional and performance requirements). Functional Analysis and Allocation consists of decomposition of higher-level functions to lower levels and then allocation of requirements to those functions.
• There are many tools available to support the development of a functional architecture, such as functional flow block diagrams, timeline analysis sheets, requirements allocation sheets, Integration Definition (IDEF) diagrams, and others.

• Use of the tools illustrated in this chapter is not mandatory, but the process they represent is:

– Define task sequences and relationships (functional flow block diagrams (FFBD)),

– Define process and data flows (IDEF0 diagrams),

– Define the time sequence of time-critical functions (timeline analysis sheets (TLS)), and

– Allocate performance and establish traceability of performance requirements (requirements allocation sheets (RAS)).
Figure 5-3. FFBD Traceability and Indenture
SUPPLEMENT 5-A

FUNCTIONAL FLOW BLOCK DIAGRAM

The purpose of the functional flow block diagram (FFBD) is to describe system requirements in functional terms.

Objectives

The FFBD is structured to ensure that:

• All life cycle functions are covered,

• All elements of the system are identified and defined (e.g., prime equipment, training, spare parts, data, software, etc.),

• System support requirements are identified to specific system functions, and

• Proper sequencing of activities and design relationships is established, including critical design interfaces.

Characteristics

The FFBD is functionally oriented, not solution oriented. The process of defining lower-level functions and sequencing relationships is often referred to as functional decomposition. It allows traceability vertically through the levels. It is a key step in developing the functional architecture from which designs may be synthesized.

Figure 5-3 shows the flow-down structure of a set of FFBDs, and Figure 5-4 shows the format of an FFBD.
Figure 5-3 illustrates FFBD indenture: at the top level, system functions 1.0 through 6.0; at the first level, subfunction 1.0 expands into functions 1.1 through 1.7; at the second level, subfunction 1.4 expands into functions 1.4.1 through 1.4.6.
Figure 5-4. Functional Flow Block Diagram (FFBD) Format

The format diagram (a second-level flow, functions 9.2.1 through 9.2.4) illustrates: function blocks with function numbers and functional descriptions; tentative functions; interface reference blocks such as "Ref 9.2, Provide Guidance" (used on first- and lower-level function diagrams only); summing gates ("AND" gate: parallel functions; "OR" gate: alternate functions); go and no-go flows (G and bar G); "see detail diagram" leader notes; the flow-level designator; a scope note; and the title block with standard drawing number.
Key FFBD Attributes
Function block: Each function on an FFBD should be separate and be represented by a single box (solid line). Each function needs to stand for a definite, finite, discrete action to be accomplished by system elements.

Function numbering: Each level should have a consistent numbering scheme that provides information concerning function origin (e.g., top level: 1.0, 2.0, 3.0, etc.; first indenture (level 2): 1.1, 1.2, 1.3, etc.; second indenture (level 3): 1.1.1, 1.1.2, 1.1.3, etc.). These numbers establish identification and relationships that will carry through all Functional Analysis and Allocation activities and facilitate traceability from lower to top levels.

Functional reference: Each diagram should contain a reference to other functional diagrams by using a functional reference (box in brackets).

Flow connection: Lines connecting functions should indicate only function flow and not a lapse in time or intermediate activity.

Flow direction: Diagrams should be laid out so that the flow direction is generally from left to right. Arrows are often used to indicate functional flows.

Summing gates: A circle is used to denote a summing gate when AND/OR is present. AND is used to indicate parallel functions, and all conditions must be satisfied to proceed. OR is used to indicate that alternative paths can be satisfied to proceed.

GO and NO-GO paths: "G" and "bar G" are used to denote "go" and "no-go" conditions. These symbols are placed adjacent to lines leaving a particular function to indicate alternative paths.
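The numbering scheme above makes traceability mechanical: a function's parents can be derived from its number alone. A small sketch, assuming the ".0" suffix convention for top-level functions shown in the example numbering:

```python
def parent(number):
    """Return the parent function number, or None for a top-level function.

    E.g. parent("1.1.2") == "1.1", parent("1.1") == "1.0",
    and parent("1.0") is None.
    """
    parts = number.split(".")
    if len(parts) == 2 and parts[1] == "0":
        return None                  # top level: 1.0, 2.0, 3.0, ...
    if len(parts) == 2:
        return parts[0] + ".0"       # first indenture traces to top level
    return ".".join(parts[:-1])      # otherwise, drop the last index

def trace_to_top(number):
    """Chain of function numbers from this function up to its top level."""
    chain = [number]
    while (p := parent(number)) is not None:
        chain.append(p)
        number = p
    return chain

print(trace_to_top("1.4.3"))  # → ['1.4.3', '1.4', '1.0']
```

This is the lower-to-top traceability the numbering convention is designed to support.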
Figure 5-5. Integration Definition for Function Modeling (IDEF0) Box Format

A function box (function name and function number) with inputs entering from the left, controls entering from the top, mechanisms joining the bottom, and outputs leaving from the right.
SUPPLEMENT 5-B
IDEF0
Integration Definition for Function Modeling (IDEF0) is a common modeling technique for the analysis, development, re-engineering, and integration of information systems, business processes, or software engineering analysis. Where the FFBD is used to show the functional flow of a product, IDEF0 is used to show data flow, system control, and the functional flow of life cycle processes.

IDEF0 is capable of graphically representing a wide variety of business, manufacturing, and other types of enterprise operations to any level of detail. It provides rigorous and precise description, and promotes consistency of usage and interpretation. It is well-tested and proven through many years of use by government and private industry. It can be generated by a variety of computer graphics tools, and numerous commercial products specifically support development and analysis of IDEF0 diagrams and models.

An IDEF0 model consists of a hierarchical series of diagrams, text, and glossary, cross-referenced to each other. The two primary modeling components are functions (represented on a diagram by boxes) and the data and objects that interrelate those functions (represented by arrows). As shown by Figure 5-5, the position at which an arrow attaches to a box conveys the specific role of the interface. Controls enter the top of the box. Inputs, the data or objects acted upon by the operation, enter the box from the left. Outputs of the operation leave the right-hand side of the box. Mechanism arrows, which provide the supporting means for performing the function, join (point up to) the bottom of the box.

The IDEF0 process starts with the identification of the prime function to be decomposed. This function is identified on a "Top-Level Context Diagram" that defines the scope of the particular IDEF0 analysis. An example of a Top-Level Context Diagram for an information system management process is shown in Figure 5-6. From this diagram lower-level diagrams are generated. An example of a derived diagram, called a "child" in IDEF0 terminology, for a life cycle function is shown in Figure 5-7.
Figure 5-6. Top-Level Context Diagram

Purpose: the assessment, planning, and streamlining of information management functions. Viewpoint: the Information Integration Assessment Team. The diagram shows the single function "Manage Information Resources" (node QA/A-0) with its associated arrows: Program Charter, Issues, Program Operations Data, Program Plan, Plan New Information, and Program Team.
An associated technique, Integration Definition for Information Modeling (IDEF1x), is used to supplement IDEF0 for data-intensive systems. The IDEF0 standard, Federal Information Processing Standards Publication 183 (FIPS 183), and the IDEF1x standard (FIPS 184) are maintained by the National Institute of Standards and Technology (NIST).
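The ICOM (input, control, output, mechanism) box roles described above map naturally onto a small data structure. The sketch below is illustrative only; the specific function and arrow names are borrowed loosely from the Figure 5-7 example, and any not shown there are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Idef0Box:
    """One IDEF0 function box: arrows grouped by the side they attach to."""
    number: str
    name: str
    inputs: list = field(default_factory=list)      # enter from the left
    controls: list = field(default_factory=list)    # enter from the top
    outputs: list = field(default_factory=list)     # leave from the right
    mechanisms: list = field(default_factory=list)  # join the bottom

box = Idef0Box(
    number="2",
    name="Schedule into shop",
    inputs=["reparable asset"],
    controls=["status records"],
    outputs=["scheduled asset"],        # hypothetical arrow name
    mechanisms=["shop scheduler"],      # hypothetical arrow name
)
print(box.controls)  # → ['status records']
```

Grouping arrows by attachment side, rather than lumping them together, preserves the semantic distinction IDEF0 draws between data that is transformed (inputs) and data that governs the transformation (controls).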
Figure 5-7. IDEF0 Diagram Example

Node A0F, "Maintain Reparable Spares": four function boxes — (1) Remove and Replace, (2) Schedule into Shop, (3) Inspect or Repair, (4) Monitor and Route — connected by arrows such as in-service asset, detected or suspected malfunction (or item scheduled for bench-check), reparable asset, spare asset, replaced asset, status records, supply parts, assets awaiting parts, asset (before and after repair), replacement or original (repaired) spare, and completed asset.
Figure 5-8. Time Analysis Sheet

Function 3.1: establish and maintain vehicle readiness from 35 hrs to 2 hrs prior to launch. Subfunction timelines are shown as bars against an hours-to-launch scale:

3.1.1 Provide ground power
3.1.2 Provide vehicle air conditioning
3.1.3 Install and connect batteries
3.1.4 Install ordnance
3.1.5 Perform stray voltage checks and connect ordnance
3.1.6 Load fuel tanks
3.1.7 Load oxidizer tanks
3.1.8 Activate guidance system
3.1.9 Establish propulsion flight pressure
3.1.10 Telemetry system "on"
SUPPLEMENT 5-C
TIMELINE ANALYSIS SHEETS

The timeline analysis sheet (TLS) adds detail to defining durations of various functions. It defines concurrency, overlapping, and sequential relationships of functions and tasks. It identifies time-critical functions that directly affect system availability, operating time, and maintenance downtime. It is used to identify specific time-related design requirements.

The TLS includes the purpose of the function and its detailed performance characteristics, the criticality of the function, and design constraints. It identifies both quantitative and qualitative performance requirements. Initial resource requirements are identified.

Figure 5-8 shows an example of a TLS. The time required to perform function 3.1 and its subfunctions is presented on a bar chart showing how the timelines relate. (Function numbers match the FFBD.)
Figure 5-9. Requirements Allocation Sheet (Example)
Requirements Functional Flow Diagram Title and No. 2.58.4 EquipmentAllocation Sheet Provide Guidance Compartment Cooling Identification
Function Name Functional Performance and Facility Nomen- CI or Detailand No. Design Requirements Rqmnts clature Spec No.
2.58.4 Provide The temperature in the guidanceGuidance compartment must be maintained at theCompartment initial calibration temperature of +0.2 Deg F.Cooling The initial calibration temperature of the
compartment will be between 66.5 and 68.5Deg F.
2.58.4.1 Provide A storage capacity for 65 gal of chilled liquidChilled Coolant coolant (deionized water) is required. The(Primary) temperature of the stored coolant must be
monitored continuously. The stored coolantmust be maintained within a temperaturerange of 40–50 Deg F. for an indefiniteperiod of time. The coolant supplied mustbe free of obstructive particles 0.5 micron atall times.
SUPPLEMENT 5-D
REQUIREMENTS ALLOCATION SHEET
The Requirements Allocation Sheet documents the connection between allocated functions, allocated performance, and the physical system. It provides traceability between Functional Analysis and Allocation and Design Synthesis, and shows any disconnects. It is a major tool in maintaining consistency between functional architectures and the designs that are based on them. (Function numbers match the FFBD.)
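A Requirements Allocation Sheet row can be modeled as a record linking a function number to its performance requirement and the physical element that satisfies it, with the fields mirroring the columns of Figure 5-9. This is only a sketch; the CI identifier and the "disconnect" query are illustrative, not part of the RAS as defined here.

```python
from dataclasses import dataclass

@dataclass
class RasRow:
    """One row of a Requirements Allocation Sheet (columns per Figure 5-9)."""
    function_no: str
    function_name: str
    performance_requirement: str
    ci_or_spec_no: str = ""  # filled in as Design Synthesis proceeds

rows = [
    RasRow("2.58.4", "Provide Guidance Compartment Cooling",
           "Maintain compartment at initial calibration temperature"),
    RasRow("2.58.4.1", "Provide Chilled Coolant (Primary)",
           "Store 65 gal of chilled coolant at 40-50 deg F",
           ci_or_spec_no="CI-204"),  # hypothetical CI number
]

# A disconnect is a function with no physical element allocated yet.
disconnects = [r.function_no for r in rows if not r.ci_or_spec_no]
print(disconnects)  # → ['2.58.4']
```

Scanning for empty allocation columns is one concrete way the RAS "shows any disconnects" between the functional architecture and the design.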
Figure 6-1. Design Synthesis

• Inputs: functional architecture

• Outputs: physical architecture (product elements and software code); decision database

• Enablers: IPTs, decision database, automated tools, models

• Controls: constraints; GFE, COTS, and reusable software; system concept and subsystem choices; organizational procedures

• Activities: allocate functions and constraints to system elements; synthesize system element alternatives; assess technology alternatives; define physical interfaces; define system product WBS; develop life cycle techniques and procedures; integrate system elements; select preferred concept/design
CHAPTER 6
DESIGN SYNTHESIS
6.1 DESIGN DEVELOPMENT

Design Synthesis is the process by which concepts or designs are developed based on the functional descriptions that are the products of Functional Analysis and Allocation. Design Synthesis is a creative activity that develops a physical architecture (a set of product, system, and/or software elements) capable of performing the required functions within the limits of the prescribed performance parameters. Since there may be several hardware and/or software architectures developed to satisfy a given set of functional and performance requirements, synthesis sets the stage for trade studies to select the best among the candidate architectures. The objective of Design Synthesis is to combine and restructure hardware and software components in such a way as to achieve a design solution capable of satisfying the stated requirements. During concept development, synthesis produces system concepts and establishes basic relationships among the subsystems. During preliminary and detailed design, subsystem and component descriptions are elaborated, and detailed interfaces between all system components are defined.

The physical architecture forms the basis for design definition documentation, such as specifications, baselines, and work breakdown structures (WBS). Figure 6-1 gives an overview of the basic parameters of the synthesis process.
Characteristics
Physical architecture is a traditional term. Despite the name, it includes software elements as well as hardware elements. Among the characteristics of the physical architecture (the primary output of Design Synthesis) are the following:

• The correlation with functional analysis requires that each physical or software component meet at least one (or part of one) functional requirement, though any component can meet more than one requirement,

• The architecture is justified by trade studies and effectiveness analyses,

• A product WBS is developed from the physical architecture,

• Metrics are developed to track progress among KPPs, and

• All supporting information is documented in a database.
Modular Designs
Modular designs are formed by grouping components that perform a single independent function or single logical task; have single entry and exit points; and are separately testable. Grouping related functions facilitates the search for modular design solutions and furthermore increases the possibility that open-systems approaches can be used in the product architecture.

Desirable attributes of the modular units include low coupling, high cohesion, and low connectivity. Coupling between modules is a measure of their interdependence, or the amount of information shared between two modules. Decoupling modules eases development risks and makes later modifications easier to implement. Cohesion (also called binding) is the similarity of tasks performed within the module. High cohesion is desirable because it allows for use of identical or like (family or series) components, or for use of a single component to perform multiple functions. Connectivity refers to the relationship of internal elements within one module to internal elements within another module. High connectivity is undesirable in that it creates complex interfaces that may impede design, development, and testing.
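Coupling as "the amount of information shared between two modules" can be given a rough, countable form. The module names and contents below are invented for the sketch:

```python
# Each module is characterized by the data items it reads or writes;
# module names and their contents are invented for this sketch.
modules = {
    "guidance":   {"position", "velocity", "steering_cmd"},
    "propulsion": {"throttle", "fuel_level"},
    "telemetry":  {"position", "velocity", "fuel_level", "downlink"},
}

def coupling(a, b):
    """Crude coupling measure: count of data items shared by two modules."""
    return len(modules[a] & modules[b])

print(coupling("guidance", "propulsion"))  # 0 -- fully decoupled
print(coupling("guidance", "telemetry"))   # 2 -- position and velocity shared
```

A design review would favor the partitioning that drives these pairwise counts down while keeping each module's contents similar in purpose (high cohesion).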
Design Loop
The design loop involves revisiting the functional architecture to verify that the physical architecture developed is consistent with the functional and performance requirements. It is a mapping between the functional and physical architectures. Figure 6-2 shows an example of a simple physical architecture and how it relates to the functional architecture. During design synthesis, re-evaluation of the functional analysis may be caused by the discovery of design issues that require re-examination of the initial decomposition, performance allocation, or even the higher-level requirements. These issues might include identification of a promising physical solution or open-system opportunities that have different functional characteristics than those foreseen by the initial functional architecture requirements.
6.2 SYNTHESIS TOOLS
During synthesis, various analytical, engineering, and modeling tools are used to support and document the design effort. Analytical devices such as trade studies support decisions to optimize physical solutions. Requirements Allocation Sheets (RAS) provide traceability to the functional and performance requirements. Simple descriptions like the Concept Description Sheet (CDS) help visualize and communicate the system concept. Logic models, such as the Schematic Block Diagram (SBD), establish the design and the interrelationships within the system.
Automated engineering management tools such as Computer-Aided Design (CAD), Computer-Aided Systems Engineering (CASE), and Computer-Aided Engineering (CAE) can help organize, coordinate, and document the design effort. CAD generates detailed documentation describing the product design, including SBDs, detailed drawings, three-dimensional and solid drawings, and it tracks some technical performance measurements. CAD can provide significant input for virtual modeling and simulations. It also provides a common design database for integrated design developments. Computer-Aided Engineering can provide system requirements and performance analysis in support of trade studies, analysis related to the eight primary functions, and cost analyses. Computer-Aided Systems Engineering can provide automation of technical management analyses and documentation.

Figure 6-2. Functional/Physical Matrix

[Figure not reproduced: a matrix relating functions from the functional architecture (preflight check; fly, with sub-functions load, taxi, take-off, and cruise; recon, including surveillance; and communicate) to the aircraft physical architecture (communications, nav system, fire control, air frame, and engine), with an X marking each physical element that performs part of a given function.]
Modeling
Modeling techniques allow the physical product to be visualized and evaluated prior to design decisions. Models allow optimization of hardware and software parameters, permit performance predictions to be made, allow operational sequences to be derived, and permit optimum allocation of functional and performance requirements among the system elements. The traditional logical prototyping used in Design Synthesis is the Schematic Block Diagram.
6.3 SUMMARY POINTS
• Synthesis begins with the output of Functional Analysis and Allocation (the functional architecture). The functional architecture is transformed into a physical architecture by defining physical components needed to perform the functions identified in Functional Analysis and Allocation.
• Many tools are available to support the development of a physical architecture:

– Define and depict the system concept (CDS),

– Define and depict components and their relationships (SBD), and

– Establish traceability of performance requirements to components (RAS).
• Specifications and the product WBS are derived from the physical architecture.
Figure 6-3. Concept Description Sheet Example

[Figure not reproduced: a concept sketch of an external command guidance system showing a target, a target tracking radar, a missile tracking radar, a radio, a computer, and steering commands transmitted to the missile.]
SUPPLEMENT 6-A
CONCEPT DESCRIPTION SHEET
The Concept Description Sheet describes (in textual or graphical form) the technical approach or the design concept, and shows how the system will be integrated to meet the performance and functional requirements. It is generally used in early concept design to show system concepts.
Figure 6-4. Schematic Block Diagram Example

[Figure not reproduced: the attitude control and maneuvering subsystem of a Moon Station Rendezvous Vehicle (MSRV). An inert gas pressurization subsystem feeds oxidizer and fuel storage subsystems; solenoid valves (oxidizer and fuel) supply rocket engine nozzle assemblies that produce pitch, roll, and yaw thrust and longitudinal velocity increments. Command signals arrive from the manual control and display subsystem and the MSRV guidance and navigation subsystem; an electrical power subsystem and oxidizer and fuel remaining indications are also shown.]
SUPPLEMENT 6-B
SCHEMATIC BLOCK DIAGRAMS
The Schematic Block Diagram (SBD) depicts hardware and software components and their interrelationships. They are developed at successively lower levels as analysis proceeds to define lower-level functions within higher-level requirements. These requirements are further subdivided and allocated using the Requirements Allocation Sheet (RAS). SBDs provide visibility of related system elements, and traceability to the RAS, FFBD, and other systems engineering documentation. They describe a solution to the functional and performance requirements established by the functional architecture; show interfaces between the system components and between the system components and other systems or subsystems; support traceability between components and their functional origin; and provide a valuable tool to enhance configuration control. The SBD is also used to develop Interface Control Documents (ICDs) and provides an overall understanding of system operations.

A simplified SBD, Figure 6-4, shows how components and the connections between them are presented on the diagram. An expanded version is usually developed which displays the detailed functions performed within each component and a detailed depiction of their interrelationships. Expanded SBDs will also identify the WBS numbers associated with the components.
Figure 6-5. Requirements Allocation Sheet (Example)

Requirements Allocation Sheet – Functional Flow Diagram Title and No.: 2.58.4 Provide Guidance Compartment Cooling

Function Name and No.: 2.58.4 Provide Guidance Compartment Cooling
Functional Performance and Design Requirements: The temperature in the guidance compartment must be maintained at the initial calibration temperature ±0.2 Deg F. The initial calibration temperature of the compartment will be between 66.5 and 68.5 Deg F.
Nomenclature: Guidance Compartment Cooling System
CI or Detail Spec No.: 3.54.5

Function Name and No.: 2.58.4.1 Provide Chilled Coolant (Primary)
Functional Performance and Design Requirements: A storage capacity for 65 gal of chilled liquid coolant (deionized water) is required. The temperature of the stored coolant must be monitored continuously. The stored coolant must be maintained within a temperature range of 40–50 Deg F for an indefinite period of time. The coolant supplied must be free of obstructive particles of 0.5 micron at all times.
Nomenclature: Guidance Compartment Coolant Storage Subsystem
CI or Detail Spec No.: 3.54.5.1
SUPPLEMENT 6-C
REQUIREMENTS ALLOCATION SHEET
The RAS initiated in Functional Analysis and Allocation is expanded in Design Synthesis to document the connection between functional requirements and the physical system. It provides traceability between the Functional Analysis and Allocation and Synthesis activities. It is a major tool in maintaining consistency between functional architectures and the designs that are based on them. (Configuration Item (CI) numbers match the WBS.)
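A RAS row can be modeled as a record tying a function number to its allocated configuration item and spec number. The field names below are illustrative; the two rows mirror the Figure 6-5 example (requirement text abbreviated):

```python
from dataclasses import dataclass

@dataclass
class RasEntry:
    function_no: str    # number from the functional architecture
    function_name: str
    nomenclature: str   # allocated physical element
    ci_spec_no: str     # CI or detail spec number (matches the WBS)

# The two rows of Figure 6-5 (requirement text omitted for brevity).
ras = [
    RasEntry("2.58.4", "Provide Guidance Compartment Cooling",
             "Guidance Compartment Cooling System", "3.54.5"),
    RasEntry("2.58.4.1", "Provide Chilled Coolant (Primary)",
             "Guidance Compartment Coolant Storage Subsystem", "3.54.5.1"),
]

# Traceability lookup: function number -> CI/spec number.
by_function = {e.function_no: e.ci_spec_no for e in ras}
print(by_function["2.58.4.1"])  # 3.54.5.1
```

Because the CI numbers match the WBS, the same lookup also ties each functional requirement to a WBS element, which is what keeps the functional architecture and the design consistent as both evolve.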
CHAPTER 7

VERIFICATION

7.1 GENERAL

The Verification process confirms that Design Synthesis has resulted in a physical architecture that satisfies the system requirements. Verification represents the intersection of systems engineering and test and evaluation.

Verification Objectives

The objectives of the Verification process include using established criteria to conduct verification of the physical architecture (including software and interfaces) from the lowest level up to the total system to ensure that cost, schedule, and performance requirements are satisfied with acceptable levels of risk. Further objectives include generating data (to confirm that system, subsystem, and lower-level items meet their specification requirements) and validating technologies that will be used in system design solutions. A method to verify each requirement must be established and recorded during requirements analysis and functional allocation activities. (If it cannot be verified, it cannot be a legitimate requirement.) The verification list should have a direct relationship to the requirements allocation sheet and be continually updated to correspond to it.

Figure 7-1. Systems Engineering and Verification
[Figure not reproduced: design proceeds top-down, from system-level design requirements (System Functional Review, SFR) through item-level design requirements (Preliminary Design Review, PDR; Critical Design Review, CDR) until all design requirements are complete; fabrication, integration, and test then proceed bottom-up, from components and assemblies through subsystems and configuration items to the system level (Test Readiness Review, TRR; System Verification Review, SVR).]
Verification Activities
System design solutions are verified by the following types of activities:

1. Analysis – the use of mathematical modeling and analytical techniques to predict the compliance of a design to its requirements based on calculated data or data derived from lower-level component or subsystem testing. It is generally used when a physical prototype or product is not available or not cost effective. Analysis includes the use of both modeling and simulation, which is covered in some detail in Chapter 13;

2. Inspection – the visual examination of the system, component, or subsystem. It is generally used to verify physical design features or specific manufacturer identification;

3. Demonstration – the use of system, subsystem, or component operation to show that a requirement can be achieved by the system. It is generally used for a basic confirmation of performance capability and is differentiated from testing by the lack of detailed data gathering; or

4. Test – the use of system, subsystem, or component operation to obtain detailed data to verify performance or to provide sufficient information to verify performance through further analysis. Testing is the detailed quantifying method of verification and, as described later in this chapter, it is ultimately required in order to verify the system design.
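The rule stated earlier, that every requirement needs a recorded verification method ("if it cannot be verified, it cannot be a legitimate requirement"), lends itself to a simple completeness check over the verification list. Requirement names here are invented for the sketch:

```python
# The four verification methods named in the text.
METHODS = {"analysis", "inspection", "demonstration", "test"}

# Hypothetical verification matrix: requirement -> chosen method(s).
# Requirement names are invented for illustration.
verification = {
    "max_speed":       {"test"},
    "marking_legible": {"inspection"},
    "mission_radius":  {"analysis", "test"},
    "crew_operable":   set(),            # no method recorded yet
}

def unverifiable(matrix):
    """Requirements with no recognized verification method recorded."""
    return sorted(r for r, methods in matrix.items() if not (methods & METHODS))

print(unverifiable(verification))  # ['crew_operable']
```

Keeping this matrix aligned with the requirements allocation sheet, as the text directs, means the check can be rerun whenever either document changes.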
Choice of verification methods must be considered an area of potential risk. Use of inappropriate methods can lead to inaccurate verification. Required defining characteristics, such as key performance parameters (KPPs), are verified by demonstration and/or test. Where total verification by test is not feasible, testing is used to verify key characteristics and assumptions used in design analysis or simulation. Validated models and simulation tools are included as analytical verification methods that complement other methods. The focus and nature of verification activities change as designs progress from concept to detailed designs to physical products.

During earlier design stages, verification focuses on proof of concept for system, subsystem, and component levels. During later stages, as the product definition effort proceeds, the focus turns to verifying that the system meets the customer requirements. As shown by Figure 7-1, design is a top-down process while the Verification activity is a bottom-up process. Components will be fabricated and tested prior to the subsystems. Subsystems will be fabricated and tested prior to the completed system.
Performance Verification
Performance requirements must be objectively verifiable, i.e., the requirement must be measurable. Where appropriate, Technical Performance Measurements (TPM) and other management metrics are used to provide insight on progress toward meeting performance goals and requirements. IEEE Standard P1220 provides a structure for Verification activity. As shown in Figure 7-2, the structure is comprehensive and provides a good starting point for Verification planning.
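At its core, a TPM reduces to comparing the current estimate of a measurable parameter against its goal and threshold values. The parameter and numbers below are invented for the sketch:

```python
# One hypothetical TPM: air vehicle empty weight, lower is better.
GOAL = 11000.0       # design objective (lb)
THRESHOLD = 12000.0  # maximum acceptable value (lb)

def tpm_status(current_estimate):
    """Classify a lower-is-better TPM estimate against goal and threshold."""
    if current_estimate <= GOAL:
        return "meets goal"
    if current_estimate <= THRESHOLD:
        return "within threshold"
    return "breach"  # exceeds threshold; needs management attention

print(tpm_status(10500.0))  # meets goal
print(tpm_status(12100.0))  # breach
```

Tracking such estimates at each design review gives the objective, measurable insight into progress that the paragraph above calls for.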
7.2 DOD TEST AND EVALUATION
DoD Test and Evaluation (T&E) policies and procedures directly support the systems engineering process of Verification. Testing is the means by which objective judgments are made regarding the extent to which the system meets, exceeds, or fails to meet stated objectives. The purpose of evaluation is to review, analyze, and assess data obtained from testing and other means to aid in making systematic decisions. The purpose of DoD T&E is to verify technical performance, operational effectiveness, and operational suitability, and to provide essential information in support of decision making.
Common Types of T&E in DoD
Figure 7-2. Verification Tasks

[Figure not reproduced; adapted from IEEE 1220. From Synthesis, the tasks are: select the verification approach; define inspection, analysis, demonstration, or test requirements; establish the verification environment; define verification procedures; conduct the verification evaluation; identify variances and conflicts (feeding back to Requirements Analysis and Synthesis); verify functional and performance measures, satisfaction of constraints, and architectural completeness; and perform physical verification. The outputs are a verified physical architecture, verified physical architectures of life cycle products/processes, and a verified system architecture (checked against the requirements baseline and functional architecture), from which specifications and configuration baselines are established and product breakdown structure(s) developed, passing to Control.]

T&E policy requires developmental tests. They confirm that technical requirements have been satisfied, and independent analysis and tests verify the system's operational effectiveness and suitability. DoD T&E traditionally and by directive is categorized as:
• Developmental T&E, which focuses primarily on technical achievement;

• Operational T&E, which focuses on operational effectiveness and suitability and includes Early Operational Assessments (EOA), Operational Assessments (OA), Initial Operational Test and Evaluation (IOT&E), and Follow-On Operational Test and Evaluation (FOT&E); and

• Live Fire T&E, which provides assessment of the vulnerability and lethality of a system by subjecting it to real conditions comparable to the required mission.
T&E
The program office plans and manages the test effort to ensure testing is timely, efficient, comprehensive, and complete, and that test results are converted into system improvements. Test planning will determine the effectiveness of the verification process. Like all systems engineering planning activities, careful attention to test planning can reduce program risk. The key test planning document is the Test and Evaluation Master Plan (TEMP). This document lays out the objectives, schedule, and resources reflecting program office and operational test organization planning decisions. To ensure integration of this effort, the program office organizes a Test Planning Work Group (TPWG) or Test Working Level IPT (WIPT) to coordinate the test planning effort.
Test Planning Work Group/Test WIPT
The TPWG/Test WIPT is intended to facilitate the integration of test requirements and activities through close coordination between the members, who represent the material developer, designer community, logistics community, user, operational tester, and other stakeholders in the system development. The team outlines test needs based on system requirements, directs test design, determines needed analyses for each test, identifies potential users of test results, and provides rapid dissemination of test and evaluation results.
Test and Evaluation Master Plan (TEMP)
The Test and Evaluation Master Plan is a mandatory document prepared by the program office. The operational test organization reviews it and provides the operational test planning for inclusion. The TEMP is then negotiated between the program office and the operational test organization. After differences are resolved, it is approved at appropriate high levels in the stakeholder organizations. After approval it becomes binding on managers and designers (similar to the binding nature of the Operational Requirements Document (ORD)).
The TEMP is a valuable Verification tool that provides an excellent template for technology, system, and major subsystem-level Verification planning. The TEMP includes a reaffirmation of the user requirements and, to an extent, an interpretation of what those requirements mean in various operational scenarios. Part I of the required TEMP format is System Introduction, which provides the mission description, threat assessment, MOEs/MOSs, a system description, and an identification of critical technical parameters. Part II, Integrated Test Program Summary, provides an integrated test program schedule and a description of the overall test management process. Part III, Developmental Test & Evaluation (DT&E) Outline, lays out an overview of DT&E efforts and a description of future DT&E. Part IV, Operational Test & Evaluation (OT&E) Outline, is provided by the operational test organization and includes an OT&E overview, critical operational issues, a future OT&E description, and an LFT&E description. Part V, Test & Evaluation Resource Summary, identifies the necessary physical resources and activity responsibilities. This last part includes such items as test articles, test sites, test instrumentation, test support equipment, threat representation, test targets and other expendables, operational force test support, simulations, models, test-beds, special requirements, funding, and training.
Key Performance Parameters
Every system will have a set of KPPs, which are the performance characteristics that must be achieved by the design solution. They flow from the operational requirements and the resulting derived MOEs. They can be identified by the user, the decision authority, or the operational tester. They are documented in the TEMP.
Developmental Test and Evaluation
DT&E verifies that the design solution meets the system technical requirements and that the system is prepared for successful OT&E. DT&E activities assess progress toward resolving critical operational issues, the validity of cost-performance tradeoff decisions, the mitigation of acquisition technical risk, and the achievement of system maturity.
DT&E efforts:
• Identify potential operational and technological capabilities and limitations of the alternative concepts and design options being pursued;
Figure 7-3. DT&E During System Acquisition
• Support the identification of cost-performance tradeoffs by providing analyses of the capabilities and limitations of alternatives;

• Support the identification and description of design technical risks;

• Assess progress toward resolving Critical Operational Issues, mitigating acquisition technical risk, achieving manufacturing process requirements, and achieving system maturity;

• Assess the validity of assumptions and analysis conclusions; and

• Provide data and analysis to certify the system ready for OT&E, live-fire testing, and other required certifications.
Figure 7-3 highlights some of the more significant DT&E focus areas and where they fit in the acquisition life cycle.
Live Fire Test and Evaluation
LFT&E is performed on any Acquisition Category (ACAT) I or II level weapon system that includes features designed to provide protection to the system or its users in combat. It is conducted on a production-configured article to provide information concerning potential user casualties, vulnerabilities, and lethality. It provides data that can establish the system's susceptibility to attack and performance under realistic combat conditions.
Operational Test and Evaluation
OT&E programs are structured to determine the operational effectiveness and suitability of a system under realistic conditions, and to determine if the minimum acceptable operational performance requirements as specified in the ORD and reflected by the KPPs have been satisfied. OT&E uses threat-representative forces whenever possible, and employs typical users to operate and maintain the system or item under conditions simulating both combat stress and peacetime conditions. Operational tests will use production or production-representative articles for the operational tests that support the full-rate production decision. Live Fire Tests are usually performed during the operational testing period. Figure 7-4 shows the major activities associated with operational testing and where they fit in the DoD acquisition life cycle.

[Figure 7-3 not reproduced: across Concept and Technology Development (Milestone A), System Development and Demonstration (Milestone B), Production and Deployment (Milestone C), and Operations and Support, DT&E progresses from technology feasibility testing (preferred technical approach, technical risk, engineering design solutions) through design-phase DT&E (engineering completeness, system performance, specification validation, technical compliance and qualification tests) to production-phase DT&E (specification validation, system performance, modifications, product acceptance) and finally to modifications and alternatives, with studies, analysis, and simulation supporting each phase.]

Figure 7-4. OT&E During System Acquisition
OT&E Differences
Though the overall objective of both DT&E and OT&E is to verify the effectiveness and suitability of the system, there are distinct differences in their specific objectives and focus. DT&E primarily focuses on verifying system technical requirements, while OT&E focuses on verifying operational requirements. DT&E is a program office responsibility that is used to develop the design. OT&E is an independent evaluation of design maturity that is used to determine if the program should proceed to full-rate production. Figure 7-5 lists the major differences between the two.
7.3 SUMMARY POINTS
The Verification activities of the Systems Engineering Process are performed to verify that the physical design meets the system requirements.
• DoD T&E policy supports the verification process through a sequence of Developmental, Operational, and Live Fire tests, analyses, and assessments. The primary management tools for planning and implementing the T&E effort are the TEMP and the integrated planning team.
[Figure 7-4 not reproduced: from mission area analysis, concept studies, and the integrated program summary through the acquisition phases (Milestones A, B, and C), operational testing proceeds from Early Operational Assessments (EOA) addressing operational aspects and preferred alternatives, through Operational Assessments (OA) of potential effectiveness, suitability, and alternatives providing independent evaluation and milestone decision information, to IOT&E addressing requirements validation, operational utility, production validation, and independent assessment, and Follow-On OT&E addressing operational utility, tactics, doctrine, personnel, and interoperability, with simulation supporting throughout.]
Figure 7-5. DT/OT Comparison
Development Tests
• Controlled by program manager
• One-on-one tests
• Controlled environment
• Contractor environment
• Trained, experienced operators
• Precise performance objectives andthreshold measurements
• Test to specification
• Developmental, engineering, or productionrepresentative test article
Operational Tests
• Controlled by independent agency
• Many-on-many tests
• Realistic/tactical environment withoperational scenario
• No system contractor involvement
• User troops recently trained
• Performance measures of operationaleffectiveness and suitability
• Test to operational requirements
• Production representative test article
CHAPTER 8
SYSTEMS ENGINEERING PROCESS OUTPUTS
8.1 DOCUMENTING REQUIREMENTS AND DESIGNS

Outputs of the systems engineering process consist of the documents that define the system requirements and design solution. The physical architecture developed through the synthesis process is expanded to include enabling products and services to complete the system architecture. This system-level architecture then becomes the reference model for further development of system requirements and documents. Systems engineering process outputs include the system and configuration item architectures, specifications and baselines, and the decision database.

Outputs are dependent on the level of development. They become increasingly technically detailed as system definition proceeds from concept to detailed design. As each stage of system definition is achieved, the information developed forms the input for succeeding applications of the systems engineering process.

Architectures: System/Configuration Item

The System Architecture describes the entire system. It includes the physical architecture produced through design synthesis and adds the enabling products and services required for life cycle employment, support, and management. Military Handbook (MIL-HDBK)-881, Work Breakdown Structures, provides reference models for weapon systems architectures. As shown by Figure 8-1, MIL-HDBK-881 illustrates the first three levels of typical system architectures. Program Offices can use MIL-HDBK-881 templates during system definition to help develop a top-level architecture tailored to the needs of the specific system considered. The design contractor will normally develop the levels below these first three. Chapter 9 of this text describes the WBS in more detail.

Figure 8-1. Example from MIL-HDBK-881

[Figure not reproduced: a three-level aircraft system WBS. Level 1 is the Aircraft System. Level 2 elements include the Air Vehicle; SE/Program Management; System T&E; Training; Data; Peculiar Support Equipment; Common Support Equipment; Operational/Site Activation; Industrial Facilities; and Initial Spares and Initial Repair Parts. Level 3 elements under the Air Vehicle include the airframe, propulsion, application software, system software, C3I, navigation/guidance, central computer, fire control, data display and controls, survivability, reconnaissance, automatic flight control, central integrated checkout, antisubmarine warfare, armament, weapons delivery, and auxiliary equipment; the other Level 2 elements decompose similarly (e.g., System T&E into DT&E, OT&E, mockups, T&E support, and test facilities).]

Specifications

A specification is a document that clearly and accurately describes the essential technical requirements for items, materials, or services, including the procedures by which it can be determined that the requirements have been met. Specifications help avoid duplication and inconsistencies, allow for accurate estimates of necessary work and resources, act as a negotiation and reference document for engineering changes, provide documentation of configuration, and allow for consistent communication among those responsible for the eight primary functions of Systems Engineering. They provide IPTs a precise idea of the problem to be solved so that they can efficiently design the system and estimate the cost of design alternatives. They provide guidance to testers for verification (qualification) of each technical requirement.

Program-Unique Specifications

During system development a series of specifications is generated to describe the system at different levels of detail. These program-unique specifications form the core of the configuration baselines. As shown by Figure 8-2, in addition to referring to different levels within the system hierarchy, these baselines are defined at different phases of the design process.

Initially the system is described in terms of the top-level (system) functions, performance, and interfaces. These technical requirements are derived from the operational requirements established by the user. This system-level technical description is documented in the System Specification, which is the primary documentation of the system-level Functional Baseline. The system requirements are then flowed down (allocated) to the items below the system level, such that a set of design criteria is established for each of those items. These item descriptions are captured in a set of Item Performance Specifications, which together with other interface definitions, process descriptions, and drawings document the Allocated Baseline (sometimes referred to as the "Design To" baseline). Having baselined the design requirements for the individual items, detailed design follows. Detailed design involves defining the system from top to bottom in terms of the physical entities that will be employed to satisfy the design requirements. When detailed design is complete, a final baseline is defined. This is generally referred to as the Product Baseline and, depending on the stage of development, may reflect a "Build To" or "As Built" description. The Product Baseline is documented
by the Technical Data Package, which will include not only Item Detail Specifications but also Process and Material Specifications, as well as drawings, parts lists, and other information that describes the final system in full physical detail. Figure 8-3 shows how these specifications relate to their associated baselines.
Role of Specifications
Requirements documents express why the devel-opment is needed. Specification documents are anintermediate expression of what the needed sys-tem has to do in terms of technical requirements(function, performance, and interface). Designdocuments (drawings, associated lists, etc.) de-scribe the means by which the design requirementsare to be satisfied. Figure 8-4 illustrates howrequirements flow down from top-level specifica-tions to design documentation. Preparation ofspecifications are part of the system engineeringprocess, but also involve techniques that relate to
Chapter 8 Systems Engineering Process Outputs
75
Figure 8-2. Specifications and Levels of Development
Figure 8-3. Specification Types
Specification Content Baseline
System Defines mission/technical performance requirements. FunctionalSpec Allocates requirements to functional areas and defines interfaces.
Item Defines performance characteristics of CIs and CSCIs. AllocatedPerformance Details design requirements and with drawings and other “Design To”
Spec documents form the Allocated Baseline.
Item Detail Defines form, fit, function, performance, and test requirements ProductSpec for acceptance. (Item, process, and material specs start the “Build To”
Product Baseline effort, but the final audited baseline includes orall the items in the TDP.) “As Built”
Process Defines process performed during fabrication.Spec
Material Defines production of raw materials or semi-fabricatedSpec material used in fabrication.
SystemConcept
Requirements
System Definition (Functional Baseline)Integrated and
Qualified System
System Specification
Preliminary Design (Allocated Baseline)
Item Performance Specifications
Detail Design (Product BL)
Performance Item Specs
SI&T = System Integration and Text
SystemSpec
ItemDetailSpecs
Drawings
Processand
MaterialSpecs
SI&T
SI&T
Drawings
Product
Input
ProductOutput
ProductOutput
ProductOutput
Product
Input
Product
Input
Product
Input
ProductOutput
SI&T
Systems Engineering Fundamentals Chapter 8
76
Figure 8-4. How Specifications Lead to Design Documents
communication skills, both legal and editorial. Figure 8-5 provides some rules of thumb that illustrate this.
In summary, specifications document what the system has to do, how well it has to do it, and how to verify it can do it.
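That three-part character of a specification requirement (what, how well, and how verified) can be sketched as a small record type. The field names and the sample requirement below are purely illustrative; they are not drawn from any DoD format.

```python
from dataclasses import dataclass

@dataclass
class SpecRequirement:
    """One specification requirement: what the system must do,
    how well it must do it, and how compliance is verified."""
    function: str       # what the system has to do
    performance: str    # how well it has to do it
    verification: str   # how to verify it can do it (test, analysis, etc.)

# Hypothetical example requirement for illustration only.
req = SpecRequirement(
    function="Detect airborne targets",
    performance="Range >= 40 nmi under clear conditions",
    verification="Demonstration during developmental test",
)

# A requirement missing any of the three parts is incomplete.
assert all([req.function, req.performance, req.verification])
```

The point of the structure is that a requirement without a verification method is not yet a usable specification statement.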
Baselines
Baselines formally document a product at some given level of design definition. They are references for the subsequent development to follow. Most DoD systems are developed using the three classic baselines described above: functional, allocated, and product. Though the program-unique specifications are the dominant baseline documentation, they alone do not constitute a baseline.
Additional documents include both end and enabling product descriptions. End product baseline documents normally include those describing system requirements, functional architecture, physical architecture, technical drawing package, and requirements traceability. Enabling product baseline documents include a wide range of documents that could include manufacturing plans and processes, supportability planning, supply documentation, manuals, training plans and programs, test planning, deployment planning, and others. All enabling products should be reviewed for their susceptibility to impact from system configuration changes. If a document is one that describes a part of a system and could require change if the configuration changes, then most likely it should be included as a baseline document.
Acquisition Program Baselines
Acquisition Program Baselines and Configuration Baselines are related. To be accurate, the Program Baseline must reflect the realities of the Configuration Baseline, but the two should not be confused. Acquisition Program Baselines are high-level assessments of program maturity and viability. Configuration Baselines are system descriptions. Figure 8-6 provides additional clarification.
Technical Data Package, which includes:

• Engineering drawings and associated lists
• Technical manuals
• Manufacturing part programs
• Verification provisions
• Spares provisioning lists
• Specifications (those listed above), plus any of the following may be referenced:
– Defense specs
– Commercial item descriptions
– International specs
– Non-government standards
– Commercial standards
– Etc.

[Figure 8-4 (diagram): requirements flow from the System Spec to Item Performance Specs, then to Item Detail Specs and Process/Material Specs, which constitute the Product Baseline “Build To” specs within the Technical Data Package.]
• Use a table of contents and define all abbreviations and acronyms.
• Use active voice.
• Use “shall” to denote mandatory requirement and “may” or “should” to denote guidance provisions.

• Avoid ambiguous provisions, such as “as necessary,” “contractor’s best practice,” “smooth finish,” and similar terms.
• Use the System Engineering Process to identify requirements. Do not over-specify.
• Avoid “tiering.” Any mandatory requirement in a document below the first tier should be stated in the specification.

• Only requirement sections of the MIL-STD-961D formats are binding. Do not put requirements in non-binding sections, such as Scope, Documents, or Notes.
• Data documentation requirements are specified in a Contract Data Requirements List.
Figure 8–5. Rules of Thumb for Specification Preparation
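A few of these rules of thumb lend themselves to simple automated screening. The sketch below is a hypothetical aid (not part of any DoD toolset) that flags ambiguous phrases and requirement statements lacking the mandatory “shall.”

```python
# Hypothetical screening aid for specification text, based on the rules
# of thumb above: flag ambiguous provisions and statements that are not
# phrased as mandatory requirements.
AMBIGUOUS = ("as necessary", "contractor's best practice", "smooth finish")

def screen_requirement(text: str) -> list:
    """Return a list of rule-of-thumb findings for one requirement."""
    findings = []
    lowered = text.lower()
    for phrase in AMBIGUOUS:
        if phrase in lowered:
            findings.append(f'ambiguous provision: "{phrase}"')
    if "shall" not in lowered:
        findings.append('no "shall": not stated as a mandatory requirement')
    return findings

print(screen_requirement("The finish shall be smooth finish as necessary."))
print(screen_requirement("The pump shall deliver 30 L/min at 2 bar."))
```

A real review would of course need human judgment; a phrase list only catches the most obvious offenders.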
Figure 8–6. Acquisition Program Baselines and Configuration Baselines
• Program Baselines

– Embody only the most important cost, schedule, and performance objectives and thresholds

– Threshold breach results in re-evaluation of program at MDA level

– Selected key performance parameters

– Specifically evolves over the development cycle and is updated at each major milestone review or program restructure

• Required on ALL programs for measuring and reporting status

• Configuration Baselines

– Identify and define an item’s functional and physical characteristics

– Functional Baseline – Describes system-level requirements

– Allocated Baseline – Describes design requirements for items below system level

– Product Baseline – Describes product physical detail

• Documents outputs of Systems Engineering Process
Decision Database
The decision database is the documentation that supports and explains the configuration solution decisions. It includes trade studies, cost effectiveness analyses, Quality Function Deployment (QFD) analysis, models, simulations, and other data generated to understand a requirement, develop alternative solutions, or make a choice between them. These items are retained and controlled as part of the Data Management process described in Chapter 10.
8.2 DOD POLICY AND PRACTICE—SPECIFICATIONS AND STANDARDS
DoD uses specifications to communicate product requirements and standards to provide guidance concerning proven methods and practices.
Specifications
DoD uses three basic classifications of specifications: materiel specifications (developed by DoD components), Program-Unique Specifications, and non-DoD specifications.
DoD-developed specifications describe essential technical requirements for purchase of materiel. Program-Unique Specifications are an integral part of the system development process. Standard practice for preparation of DoD and Program-Unique Specifications is guided by MIL-STD-961D. This standard provides guidance for the development of performance and detail specifications. MIL-STD-961D, Appendix A, provides further guidance for the development of Program-Unique Specifications.
Non-DoD specifications and standards approved for DoD use are listed in the DoD Index of Specifications and Standards (DoDISS).
DoD Policy (Specifications)
DoD policy is to develop performance specifications for procurement and acquisition. In general, detail specifications are left for contractor development and use. Use of a detail specification in DoD procurement or acquisition should be considered only where absolutely necessary, and then only with supporting trade studies and acquisition authority approval.
DoD policy gives preference to the use of commercial solutions to government requirements, rather than development of unique designs. Therefore, the use of commercial item specifications and descriptions should be a priority in system architecture development. Only when no commercial solution is available should government detail specifications be employed.
In the case of re-procurement, where detail specifications and drawings are government owned, standardization or interface requirements may present a need for use of detail specifications. Trade studies that reflect total ownership costs and the concerns related to all eight primary functions should govern decisions concerning the type of specification used for re-procurement of systems, subsystems, and configuration items. Such trade studies and cost analysis should be performed prior to the use of detail specifications or the decision to develop and use performance specifications in a re-procurement.
Performance Specifications
Performance Specifications state requirements in terms of the required results with criteria for verifying compliance, but without stating the methods for achieving the required results. In general, performance specifications define products in terms of functions, performance, and interface requirements. They define the functional requirements for the item, the environment in which it must operate, and interface and interchangeability characteristics. The contractor is provided the flexibility to decide how the requirements are best achieved, subject to the constraints imposed by the government, typically through interface requirements. System Specifications and Item Performance Specifications are examples of performance specifications.
Detail Specifications
Detail Specifications, such as Item Detail, Material, and Process Specifications, provide design requirements. This can include materials to be used, how a requirement is to be achieved, or how an item is to be fabricated or constructed. If a specification contains both performance and detail requirements, it is considered a Detail Specification, with the following exception: interface and interchangeability requirements in Performance Specifications may be expressed in detailed terms. For example, a Performance Specification for shoes would specify size requirements in detailed terms, but material or method of construction would be stated in performance terms.
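The classification rule in this paragraph can be expressed directly in code: any detail-stated requirement makes the document a Detail Specification, except detail-stated interface and interchangeability requirements. The requirement categories below are illustrative, not a mandated taxonomy.

```python
# Sketch of the classification rule stated above: a spec containing any
# detail requirement is a Detail Specification, EXCEPT that interface and
# interchangeability requirements may be stated in detail terms without
# changing a Performance Specification's classification.
def classify(requirements):
    """requirements: iterable of (kind, style) pairs, where kind is e.g.
    'function', 'interface', 'interchangeability', or 'material', and
    style is 'performance' or 'detail'."""
    for kind, style in requirements:
        if style == "detail" and kind not in ("interface", "interchangeability"):
            return "Detail Specification"
    return "Performance Specification"

# The shoe example from the text: detailed size (interchangeability)
# requirements do not make it a Detail Specification.
shoe_spec = [
    ("interchangeability", "detail"),   # size stated in detailed terms
    ("material", "performance"),        # material stated in performance terms
]
print(classify(shoe_spec))  # Performance Specification
```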
Software Documentation – IEEE/EIA 12207
IEEE/EIA 12207, Software Life Cycle Processes, describes the U.S. implementation of the ISO standard on software processes. This standard describes the development of software specifications as one aspect of the software development process.
Figure 8–7. Specification Hierarchy

[Figure 8-7 (hierarchy diagram): the System Spec sits at the top. On the hardware side it flows down to Item Specs (Performance), then Item Specs (Detail) with supporting Process and Material Specs. On the software side it flows down to the Software Requirements Spec and Interface Requirements Spec, and then to the Software Product Spec, which comprises the Software Design Description and Interface Design Description.]
The process described in IEEE/EIA 12207 for allocating requirements in a top-down fashion and documenting the requirements at all levels parallels the systems engineering process described in this text. The standard requires first that system-level requirements be allocated to software items (or configuration items), that the software requirements then be documented in terms of functionality, performance, and interfaces, and that qualification requirements be specified. Software item requirements must be traceable to system-level requirements, and be consistent and verifiable.
The developer is then required to decompose each software item into software components and then into software units that can be coded. Requirements are allocated from item level, to component, and finally to unit level. This is the detailed design activity, and IEEE/EIA 12207 requires that these allocations of requirements be documented in documents that are referred to as “descriptions,” or, if the item is a “stand alone” item, as “specifications.” The content of these documents is defined in the IEEE/EIA standard; however, the level of detail required will vary by project. Each project must therefore ensure that a common level of expectation is established among all stakeholders in the software development activity.
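The item-to-component-to-unit allocation described above implies a traceability chain back to system level. A minimal sketch, with illustrative identifiers rather than the 12207 document set, might track it as parent links:

```python
# Minimal sketch of top-down requirement allocation as described above:
# system -> software item -> component -> unit, with each lower-level
# requirement tracing back to its parent.
class Req:
    def __init__(self, rid, level, parent=None):
        self.rid, self.level, self.parent = rid, level, parent

    def trace_to_system(self):
        """Walk parent links; a valid requirement must reach system level."""
        node = self
        while node.parent is not None:
            node = node.parent
        return node.level == "system"

# Hypothetical requirement chain for illustration.
sys_req  = Req("SYS-1",  "system")
item_req = Req("SRS-1",  "item",      parent=sys_req)
comp_req = Req("COMP-1", "component", parent=item_req)
unit_req = Req("UNIT-1", "unit",      parent=comp_req)

assert unit_req.trace_to_system()  # unit requirement traces to system level
```

An orphaned requirement, one whose chain does not terminate at system level, would fail this check, which is the traceability property the standard demands.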
Standard Practice for Defense Specifications – MIL-STD-961D
The purpose of MIL-STD-961D is to establish uniform practices for specification preparation, to ensure inclusion of essential requirements, to ensure verification (qualification) methods are established for each requirement, and to aid in the use and analysis of specification content. MIL-STD-961D establishes the format and content of system, configuration item, software, process, and material specifications. These Program-Unique Specifications are developed through application of the systems engineering process and represent a hierarchy as shown in Figure 8-7.
Standards
Standards establish engineering and technical limitations and applications for items, materials, processes, methods, designs, and engineering practices. They are “corporate knowledge” documents describing how to do some process or a
description of a body of knowledge. Standards come from many sources, reflecting the practices or knowledge base of the source. Format and content of Defense Standards, including Handbooks, are governed by MIL-STD-962. Other types of standards in use in DoD include Commercial Standards, Corporate Standards, International Standards, Federal Standards, and Federal Information Processing Standards.
DoD Policy (Standards)
DoD policy does not require standard management approaches or manufacturing processes on contracts. This policy applies to the imposition of both Military Specifications and Standards and, in addition, to the imposition of Commercial and Industry Standards. In general, the preferred approach is to allow contractors to use industry, government, corporate, or company standards they have determined to be appropriate to meet the government’s needs. The government reviews and accepts the contractor’s approach through a contract selection process or a contractual review process.
The government should impose a process or standard only as a last resort, and only with the support of an appropriate trade study analysis. If a specific standard is imposed in a solicitation or contract, a waiver will be required from an appropriate Service authority.
However, there is need on occasion to direct the use of some standards for reasons of standardization, interfaces, and development of open systems. A case in point is the mandated use of the Joint Technical Architecture (JTA) for defining interoperability standards. The JTA sets forth the set of interface standards that are expected to be employed in DoD systems. The JTA is justifiably mandatory because it promotes needed interoperability standardization, establishes supportable interface standards, and promotes the development of open systems.
DoD technical managers should be alert to situations when directed standards are appropriate to their program. Decisions concerning use of directed standards should be confirmed by trade studies and requirements traceability.
DoD Index of Specifications and Standards
The DoDISS lists all international and adopted industry standardization documents authorized for use by the military departments, as well as federal and military specifications and standards. Published in three volumes, it contains over 30,000 documents in 103 Federal Supply Groups broken down into 850 Federal Supply Classes. It covers the total DoD use of specifications and standards, ranging from fuel specifications to international quality standards.
8.3 SUMMARY POINTS
• Systems Engineering Process Outputs include the system/configuration item architecture, specifications and baselines, and the decision database.

• System/Configuration Item Architectures include the physical architecture and the associated products and services.

• Program-Unique Specifications are a primary output of the Systems Engineering Process. Program-Unique Specifications describe what the system or configuration item must accomplish and how it will be verified. Program-Unique Specifications include the System, Item Performance, and Item Detail Specifications. The System Specification describes the system requirements, while Item Performance and Item Detail Specifications describe configuration item requirements.

• Configuration baselines are used to manage and control the technical development. Program baselines are used for measuring and reporting program status.

• The Decision Database includes those documents or software that support understanding and decision making during formulation of the configuration baselines.
• DoD policy is to develop performance specifications for procurement and acquisition. Use of other than performance specifications in a contract must be justified and approved.

• It is DoD policy not to require standard management approaches or manufacturing processes on contracts.

• Mandatory use of some standard practices is necessary, but must be justified through analysis. A case in point is the mandatory use of the standards listed in the Joint Technical Architecture.
Chapter 9 Work Breakdown Structure
PART 3
SYSTEMS ANALYSIS
AND CONTROL
Systems Engineering Fundamentals Chapter 9
Figure 9–1. Architecture to WBS Flow

[Figure 9-1 (diagram): elements of the system architecture (System, Air Vehicle, Aircraft Subsystems, Landing Gear System) map to corresponding numbered WBS elements (e.g., 1000 Air Vehicle, 1000 Aircraft Subsystems, 1610 Landing Gear System).]
CHAPTER 9

WORK BREAKDOWN STRUCTURE
9.1 INTRODUCTION

The Work Breakdown Structure (WBS) is a means of organizing system development activities based on system and product decompositions. The systems engineering process described in earlier chapters produces system and product descriptions. These product architectures, together with associated services (e.g., program management, systems engineering, etc.), are organized and depicted in a hierarchical tree-like structure that is the WBS. (See Figure 9-1.)

Because the WBS is a direct derivative of the physical and systems architectures, it could be considered an output of the systems engineering process. It is being presented here as a Systems Analysis and Control tool because of its essential utility for all aspects of the systems engineering process. It is used to structure development activities, to identify data and documents, to organize integrated teams, and for other non-technical program management purposes.

WBS Role in DoD Systems Engineering

DoD 5000.2-R requires that a program WBS be established to provide a framework for program and technical planning, cost estimating, resource allocation, performance measurement, and status reporting. The WBS is used to define the total system, to display it as a product-oriented family tree composed of hardware, software, services, data, and facilities, and to relate these elements to each other and to the end product. Program offices are to tailor a program WBS using the guidance provided in MIL-HDBK-881.
The program WBS is developed initially to define the top three levels. As the program proceeds through development and is further defined, program managers should ensure that the WBS is extended to identify all high-cost and high-risk elements for management and reporting, while ensuring the contractor has complete flexibility to extend the WBS below the reporting requirement to reflect how work will be accomplished.
Basic Purposes of the WBS
Organizational: The WBS provides a coordinated, complete, and comprehensive view of program management. It establishes a structure for organizing system development activities, including IPT design, development, and maintenance.
Business: It provides a structure for budgets and cost estimates. It is used to organize collection and analysis of detailed costs for earned value reports (Cost Performance Reports or Cost/Schedule Control System Criteria reporting).
Technical: The WBS establishes a structure for:
• Identifying products, processes, and data,

• Organizing risk management analysis and tracking,

• Enabling configuration and data management, and helping to establish interface identification and control,

• Developing work packages for work orders and material/part ordering, and

• Organizing technical reviews and audits.
The WBS is used to group product items for specification development, to develop Statements of Work (SOW), and to identify specific contract deliverables.
WBS – Benefits
The WBS allows the total system to be described through a logical breakout of product elements into work packages. A WBS, correctly prepared, will account for all program activity. It links program objectives and activities with resources, facilitates initial budgets, and simplifies subsequent cost reporting. The WBS allows comparison of various independent metrics and other data to look for comprehensive trends.
It is a foundation for all program activities, including program and technical planning, event schedule definition, configuration management, risk management, data management, specification preparation, SOW preparation, status reporting and problem analysis, cost estimates, and budget formulation.
9.2 WBS DEVELOPMENT
The physical and system architectures are used to prepare the WBS. The architectures should be reviewed to ensure that all necessary products and services are identified, and that the top-down structure provides a continuity of flow-down for all tasks. Enough levels must be provided to identify work packages for cost/schedule control purposes. If too few levels are identified, then management visibility and integration of work packages may suffer. If too many levels are identified, then program review and control actions may become excessively time-consuming.
The first three WBS levels are organized as:

Level 1 – Overall System
Level 2 – Major Element (Segment)
Level 3 – Subordinate Components (Prime Items)
Levels below the first three represent component decomposition down to the configuration item level. In general, the government is responsible for the development of the first three levels, and the contractor(s) for levels below three.
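The level structure above maps naturally onto a tree. The sketch below is illustrative (element names echo the aircraft example used in this chapter; the numbering scheme is an assumption, not MIL-HDBK-881 format) and computes an element's WBS level from its depth.

```python
# Sketch of a WBS as a product-oriented tree; an element's WBS level
# is simply its depth from the overall system element.
class WBSElement:
    def __init__(self, code, name, parent=None):
        self.code, self.name, self.parent = code, name, parent
        self.children = []
        if parent:
            parent.children.append(self)

    @property
    def level(self):
        """Level 1 is the overall system; each tier below adds one."""
        return 1 if self.parent is None else self.parent.level + 1

# Illustrative aircraft example (codes are hypothetical).
system       = WBSElement("1",   "Aircraft System")           # Level 1
air_vehicle  = WBSElement("1.0", "Air Vehicle", system)       # Level 2
airframe     = WBSElement("1.1", "Air Frame", air_vehicle)    # Level 3
propulsion   = WBSElement("1.2", "Propulsion", air_vehicle)
fire_control = WBSElement("1.3", "Fire Control", air_vehicle)

print(airframe.level)  # 3
```

A contractor extending the WBS below level three would simply keep adding children; the level property needs no change.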
DoD Practice
In accordance with DoD mandatory procedures in DoD 5000.2-R and common DoD practice as established in MIL-HDBK-881, the program office develops a program WBS and a contract WBS for each contract. The program WBS is the WBS that represents the total system, i.e., the WBS that describes the system architecture. The contract WBS is the part of the program WBS that relates to deliverables and tasks of a specific contract.
MIL-HDBK-881 is used by the program office to support the systems engineering process in developing the first three levels of the program WBS, and to provide contractors with guidance for lower-level WBS development. As with most standards and handbooks, use of MIL-HDBK-881 cannot be specified as a contract requirement.
Though WBS development is a systems engineering activity, it impacts cost and budget professionals, as well as contracting officers. An integrated team representing these stakeholders should be formed to support WBS development.
WBS Anatomy
A program WBS has an end product part and an enabling product part. The end product part of the system typically consists of the prime mission product(s) delivered to the operational customer. This part of the WBS is based on the physical architectures developed from operational requirements. It represents that part of the WBS involved in product development. Figure 9-2 presents a simple example of a program WBS product part.

The “enabling product” part of the system includes the products and services required to develop, produce, and support the end product(s). This part of the WBS includes the horizontal elements of the system architecture (exclusive of the end products), and identifies all the products and services necessary to support the life cycle needs of the product. Figure 9-3 shows an example of the top three levels of a complete WBS tree.
Contract WBS
A contract WBS is developed by the program office in preparation for contracting for work required to develop the system. It is further developed by the contractor after contract award. The contract WBS is that portion of the program WBS that is specifically being tasked through the contract. A simple example of a contract WBS derived from the program WBS shown in Figure 9-2 is provided by Figure 9-4. Figure 9-4, like Figure 9-2, only includes the product part of the contract WBS. A
Figure 9-2. Program WBS – The Product Part (Physical Architecture)

[Figure 9-2 (tree diagram): Level 1 – System; Level 2 – Air Vehicle 1.0; Level 3 – Air Frame 1.1, Propulsion 1.2, Fire Control 1.3, through Etc. 1.n.]
complete contract WBS would include associated enabling products, similar to those identified in Figure 9-3. The resulting complete contract WBS
Figure 9-3. The Complete Work Breakdown Structure

[Figure 9-3 (Aircraft System WBS per MIL-HDBK-881): Level 1 – Aircraft System. Level 2 – Air Vehicle, SE/Program Mgmt, System T&E, Training, Data, Peculiar Support Equipment, Common Support Equipment, Op/Site Activation, Industrial Facilities, and Initial Spares and Initial Repair Parts. Level 3 elements under Air Vehicle include Airframe, Propulsion, Application Software, System Software, Com/Identification, Navigation/Guidance, Central Computer, Fire Control, Data Display and Controls, Survivability, Reconnaissance, Automatic Flight Control, Central Integrated Checkout, Antisubmarine Warfare, Armament, Weapons Delivery, and Auxiliary Equipment; under System T&E: DT&E, OT&E, Mockups, T&E Support, and Test Facilities; under Training: Equipment, Services, and Facilities; under Data: Tech Pubs, Engineering Data, Support Data, Management Data, and Data Depository; with comparable Level 3 breakdowns under the remaining enabling-product elements (e.g., Test and Measurement Equipment, Support and Handling Equipment, System Assembly/Installation/Checkout on Site, Contractor Tech Support, Site Construction, Site/Ship Vehicle Conversion, Construction/Conversion/Expansion, Equipment Acquisition or Mod, Maintenance, and spares specified by allowance list, grouping, or hardware element).]
Figure 9–4. Contract WBS

[Figure 9-4 (tree diagram): Level 1 – Fire Control; Level 2 – Radar; Level 3 – Receiver, Transmitter, Antenna, Radar S/W.]
is used to organize and identify contractor tasks. The program office’s preliminary version is used to develop a SOW for the Request for Proposals.
Figure 9-5. WBS Control Matrix

[Figure 9-5 (matrix diagram): the WBS product breakdown (Aircraft System – Air Vehicle – Airframe, Propulsion, Fire Control; Fire Control – Radar – Receiver Group, Xmtr Group, Antenna; plus Test and Training elements such as Radar Training) is matrixed against the company organizational structure (Manufacturing, Engineering, Test) to create cost accounts. Work packages such as Feed Horn Machining, Waveguide Bending, Set-Ups, Fabrication, and Assembly carry labor, material, and other direct costs.]
9.3 DESIGNING AND TRACKING WORK
A prime use of the WBS is the design and tracking of work. The WBS is used to establish what work is necessary, a logical decomposition down to work packages, and a method for organizing feedback. As shown by Figure 9-5, the WBS element is matrixed against those organizations in the company responsible for the task. This creates cost accounts and task definition at a detailed level. It allows rational organization of integrated teams and other organizational structures by helping establish what expertise and functional support is required for a specific WBS element. It further allows precise tracking of technical and other management activities.
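The matrixing of WBS elements against responsible organizations can be sketched as the cross product that creates cost accounts. The element, organization, and work package names below follow the example in Figure 9-5; the structure itself is an illustrative assumption, not a mandated format.

```python
# Sketch of the WBS control matrix described above: crossing WBS elements
# with the responsible functional organizations yields cost accounts,
# each of which can then be broken into work packages.
wbs_elements = ["Receiver Group", "Xmtr Group", "Antenna"]
organizations = ["Manufacturing", "Engineering", "Test"]

# Only some intersections are real assignments; mark them explicitly.
assignments = {
    ("Antenna", "Manufacturing"): ["Feed Horn Machining", "Waveguide Bending"],
    ("Receiver Group", "Test"): ["Set-Ups"],
}

# A cost account exists only where an element/organization pair has work.
cost_accounts = {
    (elem, org): assignments[(elem, org)]
    for elem in wbs_elements
    for org in organizations
    if (elem, org) in assignments
}

for (elem, org), packages in cost_accounts.items():
    print(f"Cost account: {elem} x {org} -> work packages {packages}")
```

Because each cost account sits at the intersection of one WBS element and one organization, costs rolled up by element and costs rolled up by organization reconcile to the same total, which is what makes the matrix useful for earned value reporting.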
WBS Dictionary
As part of the work and cost control use of the WBS, a Work Breakdown Dictionary is developed. For each WBS element a dictionary entry is prepared that describes the task, what costs (activities) apply, and the references to the associated Contract Line Item Numbers and SOW paragraph. An example of a level 2 WBS element dictionary entry is shown as Figure 9-6.
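A dictionary entry of the kind shown in Figure 9-6 can be represented as a simple record. The field names below paraphrase the figure's headings; they are illustrative, not a mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class WBSDictionaryEntry:
    """One Work Breakdown Dictionary entry: the task description, the
    applicable costs, and references to contract line items and SOW."""
    element_code: str
    title: str
    level: int
    task_description: str
    contract_line_items: list = field(default_factory=list)
    sow_paragraph: str = ""

# Illustrative entry loosely modeled on the Figure 9-6 example.
entry = WBSDictionaryEntry(
    element_code="A10100",
    title="Air Vehicle",
    level=2,
    task_description="Develop, fabricate, integrate, and test the air vehicle.",
    contract_line_items=["0001", "0001AA"],
    sow_paragraph="3.6.2",
)
print(entry.title, "-> SOW", entry.sow_paragraph)
```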
9.4 SUMMARY POINTS
• The WBS is an essential tool for the organization and coordination of systems engineering
Figure 9-6. Work Breakdown Dictionary

[Figure 9-6 (example dictionary entry form):

Index Item No.: 2. WBS Level: 2. WBS Element: A10100. WBS Title: Air Vehicle. Specification No. 689E078780028 – Prime Item Development Specification for AGM 86A Air Vehicle/Airframe. Contract Number: F33657-72-C-0923. Contract Line Items: 0001, 0001AA, 0001AB, 0001AC, 0001AD, 0001AE, 0001AF, 0001AG, 0001AH. Applicable SOW Paragraph: 3.6.2.

Element Task Description (Technical Content): The Air Vehicle element task description refers to the effort required to develop, fabricate, integrate, and test the airframe segment, portions of the Navigation/Guidance element, and Airborne Development Test Equipment and Airborne Operational Test Equipment, and to the integration, assembly, and check-out of these complete elements, together with the Engine Segment, to produce the complete Air Vehicle. The lower-level elements included and summarized in the Air Vehicle element are: Airframe Segment (A11100), Navigation/Guidance Segment (A32100), Airborne Development Test Equipment (A61100), and Airborne Operational Test Equipment (A61200).

Cost Content – System Contractor: The cost to be accumulated against this element includes a summarization of all costs required to plan, develop, fabricate, assemble, integrate, and perform development testing, analysis, and reporting for the air vehicle. It also includes all costs associated with the required efforts in integrating, assembling, and checking out GFP required to create this element.]
processes, and it is a product of the systems engineering process.
• Its importance extends beyond the technical community to business professionals and contracting officers. The needs of all stakeholders must be considered in its development. The program office develops the program WBS and a high-level contract WBS for each contract. The contractors develop the lower levels of the contract WBS associated with their contract.
• The system architecture provides the structure for a program WBS. SOW tasks flow from this WBS.

• The WBS provides a structure for organizing IPTs and tracking metrics.
Chapter 10 Configuration Management
CHAPTER 10
CONFIGURATION MANAGEMENT
10.1 FOUNDATIONS

Configuration Defined

A “configuration” consists of the functional, physical, and interface characteristics of existing or planned hardware, firmware, software, or a combination thereof, as set forth in technical documentation and ultimately achieved in a product. The configuration is formally expressed in relation to a Functional, Allocated, or Product configuration baseline as described in Chapter 8.

Configuration Management

Configuration management permits the orderly development of a system, subsystem, or configuration item. A good configuration management program ensures that designs are traceable to requirements, that change is controlled and documented, that interfaces are defined and understood, and that there is consistency between the product and its supporting documentation. Configuration management provides documentation that describes what is supposed to be produced, what is being produced, what has been produced, and what modifications have been made to what was produced.

Configuration management is performed on baselines, and the approval level for configuration modification can change with each baseline. In a typical system development, customers or user representatives control the operational requirements and usually the system concept. The developing agency program office normally controls the functional baseline. Allocated and product baselines can be controlled by the program office, the producer, or a logistics agent depending on the life cycle management strategy. This sets up a hierarchy of configuration control authority corresponding to the baseline structure. Since lower-level baselines have to conform to a higher-level baseline, changes at the lower levels must be examined to assure they do not impact a higher-level baseline. If they do, they must be approved at the highest level impacted. For example, suppose the only engine turbine assembly affordably available for an engine development cannot provide the continuous operating temperature required by the allocated baseline. Then not only must the impact of the change at the lower level (turbine) be examined, but the change should also be reviewed for possible impact on the functional baseline, where requirements such as engine power and thrust might reside.

Configuration management is supported and performed by integrated teams in an Integrated Product and Process Development (IPPD) environment. Configuration management is closely associated with technical data management and interface management. Data and interface management is essential for proper configuration management, and the configuration management effort has to include them.

DoD Application of Configuration Management

During the development contract, the Government should maintain configuration control of the functional and performance requirements only, giving contractors responsibility for the detailed design (SECDEF Memo of 29 Jun 94). This implies government control of the Functional (system requirements) Baseline. Decisions regarding whether or not the government will take control of the lower-level baselines (allocated and product baselines), and when, ultimately depend on the
Systems Engineering Fundamentals Chapter 10
requirements and strategies needed for the particular program. In general, government control of lower-level baselines, if exercised, will take place late in the development program after design has stabilized.
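The escalation rule discussed earlier in this chapter (a change must be approved at the highest-level baseline it impacts) can be sketched as a lookup over an assumed authority mapping. The authorities shown are typical of the text's discussion, not a mandated assignment; as the text notes, they vary with the life cycle management strategy.

```python
# Sketch of baseline change escalation: a proposed change must be
# approved at the highest-level baseline it impacts. The authority
# mapping below is an illustrative assumption only.
BASELINE_ORDER = ["product", "allocated", "functional"]  # low -> high
APPROVAL_AUTHORITY = {
    "functional": "program office",
    "allocated": "program office or producer",
    "product": "producer or logistics agent",
}

def approval_level(impacted_baselines):
    """Return the baseline (and its authority) that must approve
    a change, given the set of baselines the change impacts."""
    highest = max(impacted_baselines, key=BASELINE_ORDER.index)
    return highest, APPROVAL_AUTHORITY[highest]

# The engine turbine example: a product-level change that also impacts
# the allocated and possibly the functional baseline escalates to the top.
print(approval_level(["product", "allocated", "functional"]))
# -> ('functional', 'program office')
```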
Configuration Management Planning
When planning a configuration management effort you should consider the basics: what has to be done, how it should be done, who should do it, when it should be done, and what resources are required. Planning should include the organizational and functional structure that will define the methods and procedures to manage functional and physical characteristics, interfaces, and documents of the system component. It should also include statements of responsibility and authority, methods of control, methods of audit or verification, milestones, and schedules. EIA IS-649, National Consensus Standard for Configuration Management, and MIL-HDBK-61 can be used as planning guidance.
Configuration Item (CI)
A key concept that affects planning is the configuration item (CI). CI decisions will determine what configurations will be managed. A CI is an aggregation of hardware, firmware, or computer software, or any of their discrete portions, which satisfies an end-use function and is designated for separate configuration management. Any item required for logistic support and designated for separate procurement is generally identified as a CI. Components can be designated CIs because of crucial interfaces or the need to be integrated into operation with other components within or outside of the system. An item can be designated a CI if it is developed wholly or partially with government funds, including nondevelopmental items (NDI) if additional development of technical data is required. All CIs are directly traceable to the WBS.
Impact of CI Designation
CI designation requires a separate configuration management effort for the CI, or for groupings of related CIs. The decision to place an item, or items, under formal configuration control results in:
• Separate specifications,
• Formal approval of changes,
• Discrete records for configuration statusaccounting,
• Individual design reviews and configurationaudits,
• Discrete identifiers and name plates,
• Separate qualification testing, and
• Separate operating and user manuals.
10.2 CONFIGURATION MANAGEMENT STRUCTURE

Configuration management comprises four interrelated efforts:
• Identification,
• Control,
• Status Accounting, and
• Audits.
Also directly associated with configuration management are data management and interface management. Any configuration management planning effort must consider all six elements.
Identification
Configuration Identification consists of documentation of formally approved baselines and specifications, including:

• Selection of the CIs,

• Determination of the types of configuration documentation required for each CI,

• Documentation of the functional and physical characteristics of each CI,

• Establishment of interface management procedures, organization, and documentation,

• Issuance of numbers and other identifiers associated with the system/CI configuration structure, including internal and external interfaces, and

• Distribution of CI identification and related configuration documentation.
Configuration Documentation
Configuration documentation is technical documentation that identifies and defines the item's functional and physical characteristics. It is developed, approved, and maintained through three distinct, evolutionary, increasingly detailed levels. The three levels of configuration documentation form the three baselines and are referred to as functional, allocated, and product configuration documentation. These provide the specific technical description of a system or its components at any point in time.
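The three levels described above can be sketched as a simple data model. This is an illustrative sketch only; the class and field names are invented for this example and are not drawn from any DoD standard:

```python
from dataclasses import dataclass, field
from enum import Enum

class Baseline(Enum):
    """The three baselines formed by configuration documentation."""
    FUNCTIONAL = "functional"  # system-level requirements
    ALLOCATED = "allocated"    # design-to requirements per CI
    PRODUCT = "product"        # build-to / as-built description

@dataclass
class ConfigurationDocumentation:
    """Technical description of a system or CI at a point in time."""
    item: str
    baseline: Baseline
    functional_characteristics: list = field(default_factory=list)
    physical_characteristics: list = field(default_factory=list)

# The baselines increase in detail as development proceeds:
maturity_order = [Baseline.FUNCTIONAL, Baseline.ALLOCATED, Baseline.PRODUCT]
```

The point of the sketch is only that each baseline is a snapshot of the same item at a greater level of detail, ordered from functional through allocated to product.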
Configuration Control
Configuration Control is the systematic proposal, justification, prioritization, evaluation, coordination, approval or disapproval, and implementation of all approved changes in the configuration of a system/CI after formal establishment of its baseline. In other words, it is how the change control process for a system (and its CIs) is executed and managed.

Configuration Control provides management visibility, ensures all factors associated with a proposed change are evaluated, prevents unnecessary or marginal changes, and establishes change priorities. In DoD it consists primarily of a change process that formalizes documentation and provides a management structure for change approval.
Change Documents Used for Government Controlled Baselines

There are three types of change documents used to control baselines associated with government configuration management: the Engineering Change Proposal, the Request for Deviation, and the Request for Waiver.

• Engineering Change Proposals (ECPs) identify the need for a permanent configuration change. Upon approval of an ECP, a new configuration is established.

• Requests for Deviation or Waiver propose a temporary departure from the baseline. They allow for acceptance of non-conforming material. After acceptance of a deviation or waiver, the documented configuration remains unchanged.
Engineering Change Proposal (ECP)
An ECP is documentation that describes and suggests a change to a configuration baseline. Separate ECPs are submitted for each change that has a distinct objective. To provide advance notice and reduce paperwork, Preliminary ECPs or Advance Change/Study Notices can be used preparatory to issue of a formal ECP. Time and effort for the approval process can be further reduced through use of joint government and contractor integrated teams to review and edit preliminary change proposals.
ECPs are identified as Class I or Class II. Class I changes require government approval before changing the configuration. These changes can result from problems with the baseline requirement, safety, interfaces, operating/servicing capability, preset adjustments, human interface including skill level, or training. Class I changes can also be used to upgrade already delivered systems to the new configuration through use of retrofit, mod kits, and the like. Class I ECPs are also used to change contractual provisions that do not directly impact the configuration baseline; for example, changes affecting cost, warranties, deliveries, or data requirements. Class I ECPs require program office approval, which is usually handled through a formal Configuration Control Board, chaired by the government program manager or a delegated representative.

Figure 10-1. ECP Designators

• Classification: Class I, Class II

• Types: Preliminary, Formal

• Priorities: Emergency, Urgent, Routine

• Justification Codes: D – Correction of deficiency; S – Safety; B – Interface; C – Compatibility; O – OPS or log support; R – Cost reduction; V – Value engineering; P – Production stoppage; A – Record only
Class II changes correct minor conflicts, typos, and other "housekeeping" issues that basically correct the documentation to reflect the current configuration. Class II applies only if the configuration is not changed when the documentation is changed. Class II ECPs are usually handled by the in-plant government representative and generally require only that the government concur that the change is properly classified. Under an initiative by the Defense Contract Management Command (DCMC), contractors are increasingly delegated the authority to make ECP classification decisions.
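The Class I/Class II distinction can be expressed as a small decision helper. This is a hedged sketch: the function name and inputs are invented for illustration, and real classification is governed by the contract and MIL-HDBK-61, not by code like this:

```python
def classify_ecp(changes_configuration: bool, affects_contract_provisions: bool) -> str:
    """Classify an engineering change proposal per the rules above.

    Class I: the change alters the approved configuration (requirements,
    safety, interfaces, etc.) or contractual provisions such as cost,
    warranties, or deliveries; it requires government (CCB) approval.
    Class II: documentation-only "housekeeping" corrections; the
    configuration itself is unchanged.
    """
    if changes_configuration or affects_contract_provisions:
        return "Class I"
    return "Class II"

# A typo fix in a drawing that leaves the configuration unchanged:
assert classify_ecp(changes_configuration=False, affects_contract_provisions=False) == "Class II"
```

Note how a change that touches only documentation stays Class II, which is why Class II ECPs need only government concurrence on the classification itself.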
Figure 10-1 shows the key attributes associated with ECPs. The Preliminary ECP, mentioned in Figure 10-1, is a simplified version of a formal ECP that explains the proposed ECP and establishes an approximate schedule and cost for the change. The expense of full ECP development is avoided if review of the Preliminary ECP indicates the change is not viable. The approaches used for preliminary ECPs vary in form and name. Both Preliminary ECPs and Advance Change/Study Notices have been used to formalize this process, but forms tailored to specific programs have also been used.
Configuration Control Board (CCB)
A CCB is formed to review Class I ECPs and make a recommendation to approve or not approve the proposed change. The CCB chair, usually the program manager, makes the final decision. Members advise and recommend, but the authority for the decision rests with the chair. CCB membership should represent the eight primary functions, with added representation from the procurement office, program control (budget), and the configuration control manager, who serves as the CCB secretariat.

The CCB process is shown in Figure 10-2. The process starts with the contractor. A request to the contractor for an ECP or Preliminary ECP is necessary to initiate a government-identified configuration change. The secretariat's review process includes assuring that appropriate government contractual and engineering review is done prior to receipt by the CCB.
CCB Management Philosophy
The CCB process is a configuration control process, but it is also a contractual control process. Decisions made by the CCB chair affect the contractual agreement and program baseline as well as the configuration baseline. Concerns over contractual policy, program schedule, and budget can easily come into conflict with concerns relating to configuration management, technical issues, and technical activity scheduling. The CCB technical membership and CCB secretariat are responsible for providing a clear view of the technical need and the impact of alternative solutions to these conflicts. The CCB secretariat is further responsible for seeing that the CCB is fully informed and prepared, including ensuring that:

• A government/contractor engineering working group has analyzed the ECP and supporting data, prepared comments for CCB consideration, and is available to support the CCB;

• All pertinent information is available for review;

• The ECP has been reviewed by appropriate functional activities; and

• Issues have been identified and addressed.
Figure 10-2. Configuration Control Board. (The contractor begins and ends the process: an Engineering Change Proposal, proposing an alteration in approved CM documents, a CI, or a contractual provision, flows to the CCB secretariat (the configuration manager) and then to CCB review. The CCB is chaired by the PM, with membership from the user command, training command, log command, engineering, procurement, program control, test, configuration management, safety, and maintenance. The resulting CCB directive is implemented through the contracting officer and other implementing activities.)

CCB Documentation

Once the CCB chair makes a decision concerning an ECP, the CCB issues a Configuration Control Board Directive that distributes the decision and identifies key information relating to the implementation of the change:

• Implementation plan (who does what when);

• Contracts affected (prime and secondary);

• Dates of incorporation into contracts;

• Documentation affected (drawings, specifications, technical manuals, etc.), associated cost, and schedule completion date; and

• Identification of any orders or directives needed to be drafted and issued.
Request for Deviation or Waiver
A deviation is a specific written authorization, granted prior to manufacture of an item, to depart from a performance or design requirement for a specific number of units or a specific period of time.

A waiver is a written authorization to accept a CI that departs from specified requirements, but is suitable for use "as is" or after repair.

Requests for deviation and waiver relate to a temporary baseline departure that can affect system design and/or performance. The baseline remains unchanged, and the government makes a determination whether the alternative "non-conforming" configuration is an acceptable substitute. An acceptable substitute usually implies that there will be no impact on support elements, that affected systems can operate effectively, and that no follow-up or correction is required. The Federal Acquisition Regulation (FAR) requires "consideration" on government contracts when the government accepts a "non-conforming" unit.

The distinction between a Request for Deviation and a Request for Waiver is that a deviation is used before final assembly of the affected unit, and a waiver is used after final assembly or acceptance testing of the affected unit.
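The rules above (permanent versus temporary departure, and before versus after final assembly) reduce to a short selection helper. A hypothetical sketch, with names invented for illustration:

```python
def change_request_type(permanent_change: bool, before_final_assembly: bool) -> str:
    """Select the change document per the rules in the text.

    An ECP proposes a permanent baseline change; deviations and waivers
    are temporary departures that leave the documented baseline intact.
    A deviation is requested before final assembly of the affected unit,
    a waiver after final assembly or acceptance testing.
    """
    if permanent_change:
        return "Engineering Change Proposal"
    # Temporary departure from the baseline:
    return "Request for Deviation" if before_final_assembly else "Request for Waiver"
```

For example, a non-conformance discovered during acceptance testing of an already-assembled unit would be handled with a Request for Waiver, not a deviation.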
Status Accounting
Configuration Status Accounting is the recording and reporting of the information that is needed to manage the configuration effectively, including:

• A listing of the approved configuration documentation,

• The status of proposed changes, waivers, and deviations to the configuration identification,

• The implementation status of approved changes, and

• The configuration of all units, including those in the operational inventory.
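The record keeping listed above essentially tracks three things per CI: approved documentation, change status, and the configuration of each fielded unit. The sketch below models that as a minimal ledger; all names are illustrative and not drawn from any CM tool or standard:

```python
from dataclasses import dataclass, field

@dataclass
class StatusAccount:
    """Minimal configuration status accounting ledger for one CI."""
    ci_name: str
    approved_documents: list = field(default_factory=list)   # approved configuration documentation
    proposed_changes: dict = field(default_factory=dict)     # change id -> proposed/approved/implemented
    unit_configurations: dict = field(default_factory=dict)  # serial number -> configuration id

    def propose(self, change_id: str) -> None:
        self.proposed_changes[change_id] = "proposed"

    def approve(self, change_id: str) -> None:
        self.proposed_changes[change_id] = "approved"

    def implement(self, change_id: str, serial: str, new_config: str) -> None:
        """Record implementation of an approved change on a fielded unit."""
        self.proposed_changes[change_id] = "implemented"
        self.unit_configurations[serial] = new_config

# Walk one change through the ledger (identifiers are made up):
account = StatusAccount("radar antenna")
account.propose("ECP-001")
account.approve("ECP-001")
account.implement("ECP-001", serial="SN-042", new_config="Rev B")
```

The key property is that the ledger can always answer both questions configuration management cares about: what is the status of each change, and what configuration each delivered unit is actually in.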
Purpose of Configuration Status Accounting
Configuration Status Accounting provides information required for configuration management by:

• Collecting and recording data concerning:
  – Baseline configurations,
  – Proposed changes, and
  – Approved changes.

• Disseminating information concerning:
  – Approved configurations,
  – Status and impact of proposed changes,
  – Requirements, schedules, impact, and status of approved changes, and
  – Current configurations of delivered items.
Audits
Configuration Audits are used to verify that a system and its components conform to their configuration documentation. Audits are key milestones in the development of the system and do not stand alone. The next chapter will show how they fit into the overall process of assessing design maturity.

The Functional Configuration Audit (FCA) and the System Verification Review (SVR) are performed in the Production Readiness and LRIP stage of the Production and Deployment phase. The FCA is used to verify that actual performance of the configuration item meets specification requirements. The SVR serves as the system-level audit after FCAs have been conducted.

The Physical Configuration Audit (PCA) is normally held during the Rate Production and Deployment stage as a formal examination of a production representative unit against the draft technical data package (product baseline documentation).
Most audits, whether FCA or PCA, are today approached as a series of "rolling" reviews in which items are progressively audited as they are produced, such that the final FCA or PCA becomes significantly less oppressive and disruptive to the normal flow of program development.
10.3 INTERFACE MANAGEMENT
Interface Management consists of identifying the interfaces, establishing working groups to manage the interfaces, and the groups' development of interface control documentation. Interface Management identifies, develops, and maintains the external and internal interfaces necessary for system operation. It supports the configuration management effort by ensuring that configuration decisions are made with full understanding of their impact outside the area of the change.

Interface Identification

An interface is a functional, physical, electrical, electronic, mechanical, hydraulic, pneumatic, optical, software, or similar characteristic required to exist at a common boundary between two or more systems, products, or components. Normally, in a contractual relationship, the procuring agency identifies external interfaces, sets requirements for integrated teams, and provides appropriate personnel for the teams. The contracted design agent or manufacturer manages internal interfaces; plans, organizes, and leads design integrated teams; maintains internal and external interface requirements; and controls interfaces to ensure accountability and timely dissemination of changes.
Interface Control Working Group (ICWG)
The ICWG is the traditional forum for establishing the official communications link between those responsible for the design of interfacing systems or components. Within the IPPD framework, ICWGs can be integrated teams that establish linkage between interfacing design IPTs, or they can be integrated into a system-level engineering working group. ICWGs or comparable integrated teams should include members from each contractor, significant vendors, and participating government agencies. The procuring program office (for external and selected top-level interfaces) or the prime contractor (for internal interfaces) generally designates the chair.

Interface Control Documentation (ICD)

Interface Control Documentation includes Interface Control Drawings, Interface Requirements Specifications, and other documentation that depicts physical and functional interfaces of related or co-functioning systems or components. ICDs are the product of ICWGs or comparable integrated teams, and their purpose is to establish and maintain compatibility between interfacing systems or components.
Open Systems Interface Standards
To minimize the impact of unique interface designs, improve interoperability, maximize the use of commercial components, and improve the capacity for future upgrade, an open-systems approach should be a significant part of interface control planning. The open-systems approach involves selecting industry-recognized specifications and standards to define system internal and external interfaces. An open system is characterized by:

• Increased use of functional partitioning and modular design to enhance flexibility of component choices without impact on interfaces,

• Use of well-defined, widely used, non-proprietary interfaces or protocols based on standards developed or adopted by industry-recognized standards institutions or professional societies, and

• Explicit provision for expansion or upgrading through the incorporation of additional or higher performance elements with minimal impact on the system.

DoD mandatory guidance for information technology standards is in the Joint Technical Architecture.
10.4 DATA MANAGEMENT
Data management documents and maintains the database reflecting system life cycle decisions, methods, feedback, metrics, and configuration control. It directly supports the configuration status accounting process. Data management governs and controls the selection, generation, preparation, acquisition, and use of data imposed on contractors.

Data Required By Contract

Data is defined as recorded information, regardless of form or characteristic, and includes all the administrative, management, financial, scientific, engineering, and logistics information and documentation required for delivery from the contractor. Contractually required data is classified as one of three types:
• Type I: Technical data
• Type II: Non-technical data
• Type III: One-time use data (technical or non-technical)

Data is acquired for two basic purposes:

• Information feedback from the contractor for program management control, and

• Decision-making information needed to manage, operate, and support the system (e.g., specifications, technical manuals, engineering drawings, etc.).

Data analysis and management is expensive and time consuming. Present DoD philosophy requires that the contractor manage and maintain significant portions of the technical data, including the Technical Data Package (TDP). Note that this does not mean the government isn't paying for its development or shouldn't receive a copy for post-delivery use. Minimize TDP cost by requesting the contractor's format (for example, accepting the same drawings used for production) and by asking only for details on items developed with government funds.
Data Call for Government Contracts
As part of the development of an Invitation for Bid or Request for Proposals, the program office issues a letter that describes the planned procurement and asks integrated team leaders and affected functional managers to identify and justify their data requirements for that contract. A description of each data item needed is then developed by the affected teams or functional offices and reviewed by the program office. Data Item Descriptions, located in the Acquisition Management Systems Data List (AMSDL) (see Chapter 8), can be used as guidance in developing these descriptions.

Concurrent with the DoD policy on specifications and standards, there is a trend to avoid use of standard Data Item Descriptions on contracts and to specify the data item with a unique tailored data description referenced in the Contract Data Requirements List.
10.5 SUMMARY POINTS
• Configuration management is essential to control the system design throughout the life cycle.

• Use of integrated teams in an IPPD environment is necessary for disciplined configuration management of complex systems.

• Technical data management is essential to trace decisions and changes and to document designs, processes, and procedures.

• Interface management is essential to ensure that system elements are compatible in terms of form, fit, and function.

• Three configuration baselines are managed:
  – Functional (system level),
  – Allocated (design-to), and
  – Product (build-to/as-built).

• Configuration management is a shared responsibility between the government and the contractor. Configuration management (CM) key elements are identification, control, status accounting, and audits.
CHAPTER 11
TECHNICAL REVIEWS AND AUDITS
11.1 PROGRESS MEASUREMENT

The Systems Engineer measures design progress and maturity by assessing its development at key event-driven points in the development schedule. The design is compared to pre-established exit criteria for the particular event to determine if the appropriate level of maturity has been achieved. These key events are generally known as Technical Reviews and Audits.

A system in development proceeds through a sequence of stages as it proceeds from concept to finished product. These are referred to as "levels of development." Technical Reviews are done after each level of development to check design maturity, review technical risk, and determine whether to proceed to the next level of development. Technical Reviews reduce program risk and ease the transition to production by:

• Assessing the maturity of the design/development effort,

• Clarifying design requirements,

• Challenging the design and related processes,

• Checking the proposed design configuration against technical requirements, customer needs, and system requirements,

• Evaluating the system configuration at different stages,

• Providing a forum for communication, coordination, and integration across all disciplines and IPTs,

• Establishing a common configuration baseline from which to proceed to the next level of design, and

• Recording design decision rationale in the decision database.

Formal technical reviews are preceded by a series of technical interchange meetings where issues, problems, and concerns are surfaced and addressed. The formal technical review is NOT the place for problem solving, but to verify that problem solving has been done; it is a process rather than an event!

Planning

Planning for Technical Reviews must be extensive and up-front and early. Important considerations for planning include the following:

• Timely and effective attention and visibility into the activities preparing for the review,

• Identification and allocation of resources necessary to accomplish the total review effort,

• Tailoring consistent with program risk levels,

• Scheduling consistent with availability of appropriate data,

• Establishing event-driven entry and exit criteria,

• Where appropriate, conduct of incremental reviews,

• Implementation by IPTs,
• Review of all system functions, and

• Confirmation that all system elements are integrated and balanced.

The maturity of enabling products is reviewed with their associated end product. Reviews should consider the testability, producibility, training, and supportability for the system, subsystem, or configuration item being addressed.

Figure 11-1. Technical Review Process

Before:
• Plan: identify participants; assign roles and tasks; establish guidelines and procedures; establish and use entry criteria; establish exit criteria based on the event-driven schedule.
• Familiarize: hold an overview meeting.
• Pre-review: individual and team reviews; examine data; analyze data; track and document analysis.

During:
• Review: facilitate and pace the meeting; examine review data and analyses, recording and classifying findings; address key issues identified by pre-review activity; assess severity of problems; identify action items.
• Resolve: assign responsibility; individual and team reviews.

After:
• Follow-up: track action items and issues; track action item completion trends; document and distribute results of the review and action item completions.
The depth of the review is a function of the complexity of the system, subsystem, or configuration item being reviewed. Where the design pushes state-of-the-art technology, the review will require greater depth than for a commercial off-the-shelf item. Items that are complex or that apply new technology will require more detailed scrutiny.

Planning Tip: Develop a checklist of pre-review, review, and post-review activities required. Develop checklists for exit criteria and the required level of detail in design documentation. Include key questions to be answered and what information must be available to facilitate the review process. Figure 11-1 shows the review process with key activities identified.
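The checklist idea above, keyed to event-driven entry criteria, can be sketched as a simple readiness check. This is purely illustrative; the function and criterion names are invented, and real criteria come from the program's review plans:

```python
def ready_for_review(entry_criteria: dict) -> tuple:
    """Return (ready, open_items) for an event-driven review.

    The review proceeds only when every entry criterion is satisfied;
    otherwise the open items tell the team what work remains before
    the event should be held.
    """
    open_items = [name for name, satisfied in entry_criteria.items() if not satisfied]
    return (len(open_items) == 0, open_items)

# Hypothetical checklist for an upcoming design review:
criteria = {
    "exit-criteria data distributed and analyzed": True,
    "trade studies complete": True,
    "risk analysis available": False,
}
ready, open_items = ready_for_review(criteria)
# Not ready: one criterion is still open, so the review should be
# rescheduled rather than forced by the calendar.
```

This mirrors the event-driven philosophy of the chapter: the checklist, not the calendar, decides when the review is held.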
11.2 TECHNICAL REVIEWS
Technical reviews are conducted at both the system level and at lower levels (e.g., subsystem). This discussion will focus on the primary system-level reviews. Lower-level reviews may be thought of as events that support and prepare for the system-level events. The names used in reference to reviews are unimportant; however, it is important that reviews be held at appropriate points in program development and that both the contractor and government have common expectations regarding the content and outcomes.

Conducting Reviews

Reviews are event-driven, meaning that they are to be conducted when the progress of the product under development merits review. Forcing a review simply because a schedule developed earlier projected it at a point in time will jeopardize the review's legitimacy. Do the work ahead of the review event. Use the review event as a confirmation of completed effort. The data necessary to determine if the exit criteria are satisfied should be distributed, analyzed, and the analysis coordinated prior to the review. The types of information needed for a technical review include specifications, drawings, manuals, schedules, design and test data, trade studies, risk analyses, effectiveness analyses, mock-ups, breadboards, in-process and finished hardware, test methods, technical plans (manufacturing, test, support, training), and trend (metrics) data. Reviews should be brief and follow a prepared agenda based on the pre-review analysis and assessment of where attention is needed.

Only designated participants should personally attend. These individuals should be those who were involved in the preparatory work for the review and members of the IPTs responsible for meeting the event exit criteria. Participants should include representation from all appropriate government activities, the contractor, subcontractors, vendors, and suppliers.
A review is the confirmation of a process. New items should not come up at the review. If significant items do emerge, it is a clear sign the review is being held prematurely, and project risk has just increased significantly. A poorly orchestrated and performed technical review is a significant indicator of management problems.

Figure 11-2. Phasing of Technical Reviews. (The figure arrays the technical reviews ASR, SRR, SFR, PDR, CDR, SVR/FCA, and PCA against the maturing documents, from draft system performance specification through item performance specifications to item detail specifications/TDP, and the baselines they establish (functional, allocated, and product), across system definition, design, demonstration, LRIP, and rate production, with control of the baselines passing between contractor and government.)
Action items resulting from the review are documented and tracked. These items, identified by specific nomenclature and due dates, are prepared and distributed as soon as possible after the review. The action taken is tracked and results are distributed as items are completed.
Phasing of Technical Reviews
As a system progresses through design and development, it typically passes from a given level of development to another, more advanced level of development. For example, a typical system will pass from a stage where only the requirements are known to a stage where a conceptual solution has been defined. Or it may pass from a stage where the design requirements for the primary subsystems are formalized to a stage where the physical design solutions for those requirements are defined. (See Figure 11-2.)

These stages are the "levels of development" referred to in this chapter. System-level technical reviews are generally timed to correspond to the transition from one level of development to another. The technical review is the event at which the technical manager verifies that the technical maturity of the system or item under review is sufficient to justify passage into the subsequent phase of development, with the concomitant commitment of resources required.
As the system or product progresses through development, the focus of technical assessment takes different forms. Early in the process, the primary focus is on defining the requirements on which subsequent design and development activities will be based. Similarly, technical reviews conducted during the early stages of development are almost always focused on ensuring that the top-level concepts and system definitions reflect the requirements of the user. Once system-level definition is complete, the focus turns to design at subsystem levels and below. Technical reviews during these stages are typically design reviews that establish design requirements and then verify that physical solutions are consistent with those requirements. In the final stages of development, technical reviews and audits are conducted to verify that the products produced meet the requirements on which the development is based. Figure 11-3 summarizes the typical schedule of system-level reviews by type and focus.

Figure 11-3. Typical System-Level Technical Reviews

• Requirements Reviews: Alternative Systems Review, System Requirements Review, System Functional Review

• Design Reviews: Preliminary Design Review (includes System Software Specification Review), Critical Design Review

• Verification Reviews: Test Readiness Review, Production Readiness Review, Functional Configuration Audit, System Verification Review, Physical Configuration Audit
Another issue associated with technical reviews, as well as other key events normally associated with executing the systems engineering process, is when those events generally occur relative to the phases of the DoD acquisition life cycle. The timing of these events will vary somewhat from program to program, based upon the explicit and unique needs of the situation; however, Figure 11-4 shows a generalized concept of how the technical reviews normal to systems engineering might occur relative to the acquisition life-cycle phases.

Specific system-level technical reviews are known by many different names, and different engineering standards and documents often use different nomenclature when referring to the same review. The names used to refer to technical reviews are unimportant; however, it is important to have a grasp of the schedule of reviews that is normal to system development and an understanding of the focus and purpose of those reviews. The following paragraphs outline a schedule of reviews that is complete in terms of assessing technical progress from concept through production. The names used were chosen because they seem descriptive of the focus of the activity. Of course, the array of reviews and the focus of individual reviews are to be tailored to the specific needs of the program under development, so not all programs should plan on conducting all of the following reviews.
Figure 11-4. Relationship of Systems Engineering Events to Acquisition Life Cycle Phases. (The figure overlays the systems engineering activities of requirements review, design, and fabricate/integrate/test, and the technical reviews ASR, SRR, SFR, PDR, CDR, TRR, SVR/FCA, and PCA, on the acquisition life cycle: pre-systems acquisition (concept exploration and component advanced development), systems acquisition (engineering development, demonstration, LRIP, and production, supported by prototype demos and EDMs), and sustainment and maintenance. The MNS and ORD requirements documents, all validated by the JROC, drive the process.)
Alternative Systems Review (ASR)
After the concept studies are complete a preferredsystem concept is identified. The associated draftSystem Work Breakdown Structure, preliminaryfunctional baseline, and draft system specificationare reviewed to determine feasibility and risk.Technology dependencies are reviewed to ascer-tain the level of technology risk associated withthe proposed concepts. This review is conductedlate during the Concept Exploration stage of theConcept and Technology Development Phase ofthe acquisition process to verify that the preferredsystem concept:
• Provides a cost-effective, operationally effective, and suitable solution to identified needs,
• Meets established affordability criteria, and
• Can be developed to provide a timely solution to the need at an acceptable level of risk.
The findings of this review are a significant input to the decision review conducted after Concept Exploration to determine where the system should enter the life-cycle process to continue development. This determination is largely based on technology and system development maturity.
It is important to understand that the path of the system through the life-cycle process will be different for systems of different maturities. Consequently, the decision as to whether or not to conduct the technical reviews that are briefly described in the following paragraphs depends on the extent of design and development required to bring the system to a level of maturity that justifies producing and fielding it.
System Requirements Review (SRR)
If a system architecture must be developed and a top-down design elaborated, the system will pass through a number of well-defined levels of development, and that being the case, a well-planned schedule of technical reviews is imperative. The Component Advanced Development stage (the second stage of Concept and Technology Development in the revised acquisition life-cycle process) is the stage during which system-level architectures are defined and any necessary advanced development required to assess and control technical risk is conducted. As the system passes into the acquisition process, i.e., passes Milestone B and enters System Development and Demonstration, it is appropriate to conduct an SRR. The SRR is intended to confirm that the user’s requirements have been translated into system-specific technical requirements, that critical technologies are identified and required technology demonstrations are planned, and that risks are well understood and mitigation plans are in place. The draft system specification is verified to reflect the operational requirements.
All relevant documentation should be reviewed, including:

• System Operational Requirements,

• Draft System Specification and any initial draft Performance Item Specifications,

• Functional Analysis (top-level block diagrams),

• Feasibility Analysis (results of technology assessments and trade studies to justify the system design approach),

• System Maintenance Concept,

• Significant system design criteria (reliability, maintainability, logistics requirements, etc.),

• System Engineering Planning,

• Test and Evaluation Master Plan,

• Draft top-level Technical Performance Measurement, and

• System design documentation (layout drawings, conceptual design drawings, selected supplier components data, etc.).
The SRR confirms that the system-level requirements are sufficiently well understood to permit the developer (contractor) to establish an initial system-level functional baseline. Once that baseline is established, the effort begins to define the functional, performance, and physical attributes of the items below system level and to allocate them to the physical elements that will perform the functions.
System Functional Review (SFR)
The process of defining the items or elements below system level involves substantial engineering effort. This design activity is accompanied by analysis, trade studies, modeling and simulation, as well as continuous developmental testing to achieve an optimum definition of the major elements that make up the system, with associated functionality and performance requirements. This activity results in two major systems engineering products: the final version of the system performance specification and draft versions of the performance specifications that describe the items below system level (item performance specifications). These documents, in turn, define the system functional baseline and the draft allocated baseline. As this activity is completed, the system has passed from the level of a concept to a well-defined system design, and, as such, it is appropriate to conduct another in the series of technical reviews.
The SFR will typically include the tasks listed below. Most importantly, the system technical description (Functional Baseline) must be approved as the governing technical requirement before proceeding to further technical development. This sets the stage for engineering design and development at the lower levels of the system architecture. The government, as the customer, will normally take control of and manage the system functional baseline following successful completion of the SFR.
The review should include assessment of the following items. More complete lists are found in standards and texts on the subject.
• Verification that the system specification reflects requirements that will meet user expectations,

• Functional Analysis and Allocation of requirements to items below system level,

• Draft Item Performance and some Item Detail Specifications,

• Design data defining the overall system,

• Verification that the risks associated with the system design are at acceptable levels for engineering development,

• Verification that the design selections have been optimized through appropriate trade study analyses,

• Supporting analyses (e.g., logistics, human systems integration) and plans that are identified and complete where appropriate,

• Technical Performance Measurement data and analysis, and

• Verification that plans for evolutionary design and development are in place and that the system design is modular and open.
Following the SFR, work proceeds to complete the definition of the design of the items below system level in terms of function, performance, and interface requirements for each item. These definitions are typically captured in item performance specifications, sometimes referred to as prime item development specifications. As these documents are finalized, reviews will normally be held to verify that the design requirements at the item level reflect the set of requirements that will result in an acceptable detailed design, because all design work from the item level to the lowest level in the system will be based on the requirements agreed upon at the item level. The establishment of a set of final item-level design requirements represents the definition of the allocated baseline for the system. There are two primary reviews normally associated with this event: the Software Specification Review (SSR) and the Preliminary Design Review (PDR).
Software Specification Review (SSR)
As system design decisions are made, typically some functions are allocated to hardware items, while others are allocated to software. A separate specification is developed for software items to describe the functions, performance, interfaces, and other information that will guide the design and development of software items. In preparation for the system-level PDR, the system software specification is reviewed prior to establishing the Allocated Baseline. The review includes:
• Review and evaluation of the maturity of software requirements,

• Validation that the software requirements specification and the interface requirements specification reflect the system-level requirements allocated to software,

• Evaluation of computer hardware and software compatibility,

• Evaluation of human interfaces, controls, and displays,

• Assurance that software-related risks have been identified and mitigation plans established,

• Validation that software designs are consistent with the Operations Concept Document,

• Review of plans for testing, and

• Review of preliminary manuals.
Preliminary Design Review (PDR)
Using the Functional Baseline, especially the System Specification, as a governing requirement, a preliminary design is expressed in terms of design requirements for subsystems and configuration items. This preliminary design sets forth the functions, performance, and interface requirements that will govern design of the items below system level. Following the PDR, this preliminary design (Allocated Baseline) will be put under formal configuration control, usually by the contractor. The Item Performance Specifications, including the system software specification, which form the core of the Allocated Baseline, will be confirmed to represent a design that meets the System Specification.
This review is performed during the System Development and Demonstration phase. Reviews are held for configuration items (CIs), or groups of related CIs, prior to a system-level PDR. Item Performance Specifications are put under configuration control. (Current DoD practice is for contractors to maintain configuration control over Item Performance Specifications, while the government exercises requirements control at the system level.) At a minimum, the review should include assessment of the following items:
• Item Performance Specifications,
• Draft Item Detail, Process, and Material Specifications,

• Design data defining major subsystems, equipment, software, and other system elements,

• Analyses, reports, “ility” analyses, trade studies, logistics support analysis data, and design documentation,

• Technical Performance Measurement data and analysis,

• Engineering breadboards, laboratory models, test models, mockups, and prototypes used to support the design, and
• Supplier data describing specific components.
[Rough Rule of Thumb: ~15% of production drawings are released by PDR. This rule is anecdotal and only guidance relating to an “average” defense hardware program.]
Critical Design Review (CDR)
Before starting to build the production line, there needs to be verification and formalization of the mutual understanding of the details of the item being produced. Performed during the System Development and Demonstration phase, this review evaluates the draft Production Baseline (“Build To” documentation) to determine if the system design documentation (Product Baseline, including Item Detail Specs, Material Specs, and Process Specs) is satisfactory to start initial manufacturing. This review includes the evaluation of all CIs. It includes a series of reviews conducted for each hardware CI before release of the design to fabrication, and for each computer software CI before final coding and testing. Additionally, test plans are reviewed to assess whether test efforts are developing sufficiently to indicate the Test Readiness Review will be successful. The approved detail design serves as the basis for final production planning and initiates the development of final software code.
[Rough Rule of Thumb: At CDR the design should be at least 85% complete. Many programs use drawing release as a metric for measuring design completion. This rule is anecdotal and only guidance relating to an “average” defense hardware program.]
Test Readiness Review (TRR)
Typically performed during the System Demonstration stage of the System Development and Demonstration phase (after CDR), the TRR assesses test objectives, procedures, resources, and test coordination. Originally developed as a software CI review, this review is increasingly applied to both hardware and software items. The TRR determines the completeness of test procedures and their compliance with test plans and descriptions. Completion coincides with the initiation of formal CI testing.
Production Readiness Reviews (PRR)
Performed incrementally during System Development and Demonstration and during the Production Readiness stage of the Production and Deployment phase, this series of reviews is held to determine if production preparation for the system, subsystems, and configuration items is complete, comprehensive, and coordinated. PRRs are necessary to determine readiness for production prior to executing a production go-ahead decision. They formally examine the producibility of the production design, the control over the projected production processes, and the adequacy of resources necessary to execute production. Manufacturing risk is evaluated in relation to product and manufacturing process performance, cost, and schedule. These reviews support acquisition decisions to proceed to Low-Rate Initial Production (LRIP) or Full-Rate Production.
Functional Configuration Audit/System Verification Review (FCA/SVR)
This series of audits, and the consolidating SVR, re-examines and verifies the customer’s needs and the relationship of these needs to the system and subsystem technical performance descriptions (Functional and Allocated Baselines). They determine if the system produced (including production-representative prototypes or LRIP units) is capable of meeting the technical performance requirements established in the specifications, test plans, etc. The FCA verifies that all requirements established in the specifications, associated test plans, and related documents have been tested and that the item has passed the tests, or that corrective action has been initiated. The technical assessments and decisions made in the SVR will be presented to support the full-rate production go-ahead decision. Among the issues addressed:
• Readiness issues for continuing design, continuing verifications, production, training, deployment, operations, support, and disposal have been resolved,

• Verification is comprehensive and complete,

• Configuration audits, including completion of all change actions, have been completed for all CIs,
• Risk management planning has been updatedfor production,
• Systems Engineering planning is updated forproduction, and
• Critical achievements, success criteria, and metrics have been established for production.
Physical Configuration Audit (PCA)
After full-rate production has been approved, follow-on independent verification (FOT&E) has identified the changes the user requires, and those changes have been corrected on the baseline documents and the production line, it is time to assure that the product and the product baseline documentation are consistent. The PCA will formalize the Product Baseline, including specifications and the technical data package, so that future changes can only be made through full configuration management procedures. Fundamentally, the PCA verifies that the product (as built) is consistent with the Technical Data Package, which describes the Product Baseline. The final PCA confirms:
• The subsystem and CI PCAs have been successfully completed,

• The integrated decision database is valid and represents the product,

• All items have been baselined,

• Changes to previous baselines have been completed,

• Testing deficiencies have been resolved and appropriate changes implemented, and

• System processes are current and can be executed.
The PCA is a configuration management activity and is conducted following procedures established in the Configuration Management Plan.
11.3 TAILORING
The reviews described above are based on a complex system development project requiring significant technical evaluation. There are also cases where system technical maturity is more advanced than normal for the phase, for example, where a previous program or an Advanced Concept Technology Demonstration (ACTD) has provided a significant level of technical development applicable to the current program. In some cases this will precipitate the merging or even elimination of acquisition phases. This does not justify elimination of the technical management activities grouped under the general heading of systems analysis and control, nor does it relieve the government program manager of the responsibility to see that these disciplines are enforced. It does, however, highlight the need for flexibility and tailoring to the specific needs of the program under development.
For example, a DoD acquisition strategy that proposes that a system proceed directly into the demonstration stage may skip a stage of the complete acquisition process, but it must not skip the formulation of an appropriate Functional Baseline and the equivalent of an SFR to support the development. Nor should it skip the formulation of the Allocated Baseline and the equivalent of a PDR, or the formulation of the Product Baseline and the equivalent of a CDR. Baselines must be developed sequentially because they document different levels of design requirements and must build on each other. However, the assessment of design and development maturity can be tailored as appropriate for the particular system. Tailored efforts still have to deal with the problem of determining when design maturity should be assessed and how these assessments will support the formulation and control of baselines, which document the design requirements as the system matures.
In tailoring efforts, be extremely careful when determining the level of system complexity. The system integration effort, the development of a single advanced technology or complex sub-component, or the need for intensive software development may be sufficient to establish the total system as a complex project, even though it appears simple because most subsystems are simple or off-the-shelf.
11.4 SUMMARY POINTS
• Each level of product development is evaluated and progress is controlled by specification development (System, Item Performance, Item Detail, Process, and Material specifications) and technical reviews and audits (ASR, SRR, SFR, SSR, PDR, CDR, TRR, PRR, FCA, SVR, and PCA).

• Technical reviews assess development maturity, risk, and cost/schedule effectiveness to determine readiness to proceed.

• Reviews must be planned, managed, and followed up to be effective as an analysis and control tool.

• As the system progresses through the development effort, the nature of design reviews and audits will parallel the technical effort. Initially they focus on requirements and functions; later they become very product focused.

• After system-level reviews establish the Functional Baseline, technical reviews tend to be subsystem- and CI-focused until late in development, when the focus again turns to the system level to determine the system’s readiness for production.
CHAPTER 12
TRADE STUDIES
12.1 MAKING CHOICES

Trade Studies are a formal decision-making methodology used by integrated teams to make choices and resolve conflicts during the systems engineering process. Good trade study analyses demand the participation of the integrated team; otherwise, the solution reached may be based on unwarranted assumptions or may reflect the omission of important data.

Trade studies identify desirable and practical alternatives among requirements, technical objectives, design, program schedule, functional and performance requirements, and life-cycle costs. Choices are then made using a defined set of criteria. Trade studies are defined, conducted, and documented at the various levels of the functional or physical architecture in enough detail to support decision making and lead to a balanced system solution. The level of detail of any trade study needs to be commensurate with cost, schedule, performance, and risk impacts.

Both formal and informal trade studies are conducted in any systems engineering activity. Formal trade studies tend to be those that will be used in formal decision forums, e.g., milestone decisions. These are typically well documented and become part of the decision database normal to systems development. On the other hand, engineering choices at every level involve trade-offs and decisions that parallel the trade study process. Most of these less-formal studies are documented in summary detail only, but they are important in that they define the design as it evolves.

Systems Engineering Process and Trade Studies

Trade studies are required to support decisions throughout the systems engineering process. During requirements analysis, requirements are balanced against other requirements or constraints, including cost. Requirements analysis trade studies examine and analyze alternative performance and functional requirements to resolve conflicts and satisfy customer needs.

During functional analysis and allocation, functions are balanced with interface requirements, dictated equipment, functional partitioning, requirements flowdown, and configuration item designation considerations. Trade studies are conducted within and across functions to:

• Support functional analyses and allocation of performance requirements and design constraints,

• Define a preferred set of performance requirements satisfying identified functional interfaces,

• Determine performance requirements for lower-level functions when higher-level performance and functional requirements cannot be readily resolved to the lower level, and

• Evaluate alternative functional architectures.

During design synthesis, trade studies are used to evaluate alternative solutions to optimize cost, schedule, performance, and risk. Trade studies are conducted during synthesis to:
• Support decisions for new product and process developments versus non-developmental products and processes;

• Establish system, subsystem, and component configurations;

• Assist in selecting system concepts, designs, and solutions (including people, parts, and materials availability);

• Support materials selection and make-or-buy, process, rate, and location decisions;

• Examine proposed changes;

• Examine alternative technologies to satisfy functional or design requirements, including alternatives for moderate- to high-risk technologies;

• Evaluate environmental and cost impacts of materials and processes;

• Evaluate alternative physical architectures to select preferred products and processes; and

• Select standard components, techniques, services, and facilities that reduce system life-cycle cost and meet system effectiveness requirements.
During early program phases, for example, during Concept Exploration and functional baseline development, trade studies are used to examine alternative system-level concepts and scenarios to help establish the system configuration. During later phases, trade studies are used to examine lower-level system segments, subsystems, and end items to assist in selecting component part designs. Performance, cost, safety, reliability, risk, and other effectiveness measures must be traded against each other and against physical characteristics.
12.2 TRADE STUDY BASICS
Trade studies (trade-off analyses) are processes that examine viable alternatives to determine which is preferred. It is important that criteria be established that are acceptable to all members of the integrated team as a basis for a decision. In addition, there must be an agreed-upon approach to measuring alternatives against the criteria. If these principles are followed, the trade study should produce decisions that are rational, objective, and repeatable. Finally, trade study results must be such that they can be easily communicated to customers and decision makers. If the results of a trade study are too complex to communicate with ease, it is unlikely that the process will result in timely decisions.
Trade Study Process
As shown by Figure 12-1, the process of trade-off analysis consists of defining the problem, bounding the problem, establishing a trade-off methodology (to include the establishment of decision criteria), selecting alternative solutions, determining the key characteristics of each alternative, evaluating the alternatives, and choosing a solution:
• Defining the problem entails developing a problem statement, including any constraints. Problem definition should be done with extreme care. After all, if you don’t have the right problem, you won’t get the right answer.
• Bounding and understanding the problem requires identification of the system requirements that apply to the study, conflicts between desired characteristics of the product or process being studied, and the limitations of available data. Available databases should be identified that can provide relevant, historical “actual” information to support evaluation decisions.
Figure 12-1. Trade Study Process (establish the study problem; review inputs; identify and select alternatives; select and set up methodology; measure performance; analyze results; document process and results)

• Establishing the methodology includes choosing the mathematical method of comparison, developing and quantifying the criteria used for comparison, and determining weighting factors (if any). Use of appropriate models and methodology will dictate the rationality, objectivity, and repeatability of the study. Experience has shown that this step can be easily abused through both ignorance and design. To the extent possible, the chosen methodology should compare alternatives based on true value to the customer and developer. Trade-off relationships should be relevant and rational. The choice of utility values or weights should answer the question, “What is the actual value of the increased performance, based on what rationale?”
• Selecting alternative solutions requires identification of all the potential ways of solving the problem and selection of those that appear viable. The number of alternatives can drive the cost of analysis, so alternatives should normally be limited to clearly viable choices.
• Determining the key characteristics entails deriving the data required by the study methodology for each alternative.
• Evaluating the alternatives is the analysis part of the study. It includes the development of a trade-off matrix to compare the alternatives, performance of a sensitivity analysis, selection of a preferred alternative, and a re-evaluation (sanity check) of the alternatives and the study process. Since weighting factors and some “quantified” data can have arbitrary aspects, the sensitivity analysis is crucial. If the solution can be changed with relatively minor changes in data input, the study is probably invalid, and the methodology should be reviewed and revised. After the above tasks are complete, a solution is chosen, documented, and recorded in the database.
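The evaluation steps above can be sketched as a weighted trade-off matrix computation. The following Python sketch is illustrative only; the criteria names, weights, and utility values are hypothetical, and the closeness check at the end reflects the sensitivity caution in the text.

```python
# Illustrative trade-off matrix evaluation (hypothetical data).
# Each alternative carries a utility score (0.0 to 1.0) per decision
# criterion; criteria carry weighting factors reflecting relative value.

CRITERIA_WEIGHTS = {"performance": 2.0, "cost": 1.5, "risk": 1.0}

ALTERNATIVES = {
    "Concept A": {"performance": 0.9, "cost": 0.4, "risk": 0.6},
    "Concept B": {"performance": 0.6, "cost": 0.8, "risk": 0.7},
    "Concept C": {"performance": 0.7, "cost": 0.7, "risk": 0.5},
}

def weighted_total(utilities, weights):
    """Sum of utility x weight across all decision criteria."""
    return sum(utilities[c] * w for c, w in weights.items())

def rank_alternatives(alternatives, weights):
    """Return (name, score) pairs sorted best-first."""
    scores = {name: weighted_total(u, weights) for name, u in alternatives.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_alternatives(ALTERNATIVES, CRITERIA_WEIGHTS)
best, runner_up = ranking[0], ranking[1]
print(ranking)

# Sanity check suggested by the text: if the top two scores are nearly
# equal, the result is sensitive to judgment in weights and utilities,
# and the methodology should be re-examined.
if best[1] - runner_up[1] < 0.2:
    print("Close result; perform a sensitivity analysis.")
```

Here the informal re-evaluation step is reduced to a simple closeness threshold; a real study would vary the weights and utility values systematically rather than rely on a single cutoff.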
Cost Effectiveness Analyses
Cost effectiveness analyses are a special case of trade study that compares system or component performance to its cost. These analyses help determine affordability and the relative values of alternate solutions. Specifically, they are used to:

• Support identification of affordable, cost-optimized mission and performance requirements,

• Support the allocation of performance to an optimum functional structure,

• Provide criteria for the selection of alternative solutions,

• Provide analytic confirmation that designs satisfy customer requirements within cost constraints, and

• Support product and process verification.
12.3 SUMMARY POINTS
• The purpose of trade studies is to make better and more informed decisions in selecting the best alternative solutions.

• Initial trade studies focus on alternative system concepts and requirements. Later studies assist in selecting component part designs.

• Cost effectiveness analyses provide assessments of alternative solution performance relative to cost.
SUPPLEMENT 12-A

UTILITY CURVE METHODOLOGY

The utility curve is a common methodology used in DoD and industry to perform trade-off analysis. In DoD it is widely used for cost effectiveness analysis and proposal evaluation.

Utility Curve

The method uses a utility curve (Figure 12-2) for each of the decision factors to normalize them and ease comparison. This method establishes the relative value of the factor as it increases from the minimum value of the range. The curve can show a constant value relationship (straight line), increasing value (concave curve), decreasing value (convex curve), or a stepped value.

Figure 12-2. Utility Curve (utility from 0.0 to 1.0 plotted against a decision factor, e.g., speed, cost, or reliability, between threshold and goal values; the relationship may be a continuous curve or a step function)
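As an illustration, the threshold-to-goal normalization described above can be written as a pair of small functions, one for a continuous (straight-line) relationship and one for a step function. This is a sketch under assumed conventions (utility 0.0 at the threshold, 1.0 at the goal); the function names and example values are hypothetical, not from the original text.

```python
# Sketch of utility curves after Figure 12-2 (hypothetical conventions):
# a decision factor is normalized to a 0.0-1.0 utility between a
# threshold value (minimum acceptable) and a goal value (beyond which
# added performance yields no added utility).

def linear_utility(value, threshold, goal):
    """Continuous straight-line utility: 0.0 at threshold, 1.0 at goal."""
    if value <= threshold:
        return 0.0
    if value >= goal:
        return 1.0
    return (value - threshold) / (goal - threshold)

def step_utility(value, steps):
    """Stepped utility: `steps` is a list of (breakpoint, utility) pairs
    in ascending order; utility jumps up at each breakpoint reached."""
    utility = 0.0
    for breakpoint, u in steps:
        if value >= breakpoint:
            utility = u
    return utility

# Example: speed in knots, threshold 300, goal 500.
print(linear_utility(400, 300, 500))   # halfway between threshold and goal -> 0.5
print(step_utility(400, [(300, 0.4), (450, 0.8), (500, 1.0)]))  # -> 0.4
```

Concave (increasing-value) or convex (decreasing-value) curves would simply replace the linear interpolation with an appropriate nonlinear function of the same normalized argument.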
Decision Matrix
Each of the decision factors will also have a relative value with respect to the others. These relative values are used to establish weighting factors for each decision factor. The weighting factors prioritize the decision factors and allow direct comparison between them. A decision matrix, similar to Figure 12-3, is generated to evaluate the relative value of the alternative solutions. In the case of Figure 12-3, range is given a weight of 2.0, speed a weight of 1.0, and payload a weight of 2.5. The utility values for each of the decision factors are multiplied by the appropriate weight. The weighted values for each alternative solution are added to obtain a total score for each solution. The solution with the highest score becomes the preferred solution. For the transport analysis of Figure 12-3, the apparent preferred solution is System 3.
Sensitivity
Figure 12-3 also illustrates a problem with the utility curve method. Both the utility curve and the weighting factors contain a degree of judgment that can vary between evaluators. Figure 12-3 shows three systems clustered around 3.8, indicating that a small variation in the utility curve or weighting factors could change the results. In such a case, a sensitivity analysis should be performed to determine how the solutions change as utility and weighting change. This will guide the evaluator in determining how to adjust evaluation criteria to eliminate the problem’s sensitivity to small changes. In the case of Figure 12-3, the solution could be as simple as re-evaluating the weighting factors to better express the true value to the customer. For example, if the value of range is considered to be less and payload worth more than originally stated, then System 4 may become a clear winner.
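The weighting arithmetic of Figure 12-3, and the re-weighting experiment described above, can be reproduced in a few lines. The utility values and original weights below are taken from the figure; the revised weights are hypothetical, chosen only to illustrate how valuing range less and payload more can make System 4 the preferred solution.

```python
# Decision matrix data from Figure 12-3: utility values per decision factor.
UTILITIES = {
    "Transport System 1": {"range": 0.8, "speed": 0.7, "payload": 0.6},
    "Transport System 2": {"range": 0.7, "speed": 0.9, "payload": 0.4},
    "Transport System 3": {"range": 0.6, "speed": 0.7, "payload": 0.8},
    "Transport System 4": {"range": 0.5, "speed": 0.5, "payload": 0.9},
}

def total_score(utilities, weights):
    """Weighted total: sum of utility x weight over the decision factors."""
    return sum(u * weights[f] for f, u in utilities.items())

def preferred(weights):
    """Name of the alternative with the highest weighted total."""
    return max(UTILITIES, key=lambda name: total_score(UTILITIES[name], weights))

# Original weights from Figure 12-3: System 3 wins with a total of 3.9.
original = {"range": 2.0, "speed": 1.0, "payload": 2.5}
print(preferred(original))                                               # Transport System 3
print(round(total_score(UTILITIES["Transport System 3"], original), 2))  # 3.9

# Hypothetical revised weights (range valued less, payload valued more):
# the clustered scores shift and System 4 becomes the preferred solution.
revised = {"range": 1.0, "speed": 1.0, "payload": 3.5}
print(preferred(revised))                                                # Transport System 4
```

Sweeping each weight over a plausible interval and recording where the preferred alternative flips is a simple, mechanical way to perform the sensitivity analysis the text calls for.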
Notes
When developing or adjusting utility curves and weighting factors, communication with the customers and decision makers is essential. Most sensitivity problems are not as obvious as Figure 12-3; sensitivity need not be apparent in the alternatives’ total scores. To ensure study viability, sensitivity analysis should always be done to examine the consequences of methodology choice. (Most decision support software provides a sensitivity analysis feature.)
Figure 12-3. Sample Decision Matrix

                        Range        Speed        Payload      Weighted
  Decision Factors      Wt. = 2.0    Wt. = 1.0    Wt. = 2.5    Total
  Alternatives          U     W      U     W      U     W
  Transport System 1    .8    1.6    .7    .7     .6    1.5      3.8
  Transport System 2    .7    1.4    .9    .9     .4    1.0      3.3
  Transport System 3    .6    1.2    .7    .7     .8    2.0      3.9
  Transport System 4    .5    1.0    .5    .5     .9    2.25     3.75

  Key: U = Utility value; W = Weighted value
CHAPTER 13

MODELING AND SIMULATION

13.1 INTRODUCTION

A model is a physical, mathematical, or logical representation of a system entity, phenomenon, or process. A simulation is the implementation of a model over time. A simulation brings a model to life and shows how a particular object or phenomenon will behave. It is useful for testing, analysis, or training where real-world systems or concepts can be represented by a model.

Modeling and simulation (M&S) provides virtual duplication of products and processes, and represents those products or processes in readily available and operationally valid environments. Use of models and simulations can reduce the cost and risk of life-cycle activities. As shown by Figure 13-1, the advantages are significant throughout the life cycle.

Figure 13-1. Advantages of Modeling and Simulation (across the life cycle, from need and concepts through preliminary design, detail design, production and deployment, and O&S, M&S can help prove system need, refine requirements by involving the user and preventing gold-plating, reduce program risks in design, integration, transition to production, and testing, and smooth the transition to operation, improving IPPD, shortening schedules, and saving time and money)

Modeling, Simulation, and Acquisition

Modeling and simulation has become a very important tool across all acquisition-cycle phases and all applications: requirements definition; program management; design and engineering;
Systems Engineering Fundamentals Chapter 13
118
efficient test planning; result prediction; supple-ment to actual test and evaluation; manufacturing;and logistics support. With so many opportunitiesto use M&S, its four major benefits; cost savings,accelerated schedule, improved product quality andcost avoidance can be achieved in any systemdevelopment when appropriately applied. DoD andindustry around the world have recognized theseopportunities, and many are taking advantage ofthe increasing capabilities of computer and infor-mation technology. M&S is now capable ofprototyping full systems, networks, interconnect-ing multiple systems and their simulators so thatsimulation technology is moving in every directionconceivable.
13.2 CLASSES OF SIMULATIONS
The three classes of models and simulations are virtual, constructive, and live:

• Virtual simulations represent systems both physically and electronically. Examples are aircraft trainers, the Navy's Battle Force Tactical Trainer, the Close Combat Tactical Trainer, and built-in training.

• Constructive simulations represent a system and its employment. They include computer models, analytic tools, mockups, IDEF, flow diagrams, and Computer-Aided Design/Manufacturing (CAD/CAM).

• Live simulations are simulated operations with real operators and real equipment. Examples are fire drills, operational tests, and an initial production run with soft tooling.
Virtual Simulation
Virtual simulations put the human in the loop. The operator's physical interface with the system is duplicated, and the simulated system is made to perform as if it were the real system. The operator is subjected to an environment that looks, feels, and behaves like the real thing. The more advanced version of this is the virtual prototype, which allows the individual to interface with a virtual mockup operating in a realistic computer-generated environment. A virtual prototype is a computer-based simulation of a system or subsystem with a degree of functional realism comparable to that of a physical prototype.
Constructive Simulations
The purpose of systems engineering is to develop descriptions of system solutions. Accordingly, constructive simulations are important products in all key systems engineering tasks and activities. Of special interest to the systems engineer are Computer-Aided Engineering (CAE) tools. Computer-aided tools can allow more in-depth and complete analysis of system requirements early in design. They can provide improved communication, because data can be disseminated rapidly to several individuals concurrently and because design changes can be incorporated and distributed expeditiously. Key computer-aided engineering tools are CAD, CAE, CAM, Continuous Acquisition and Life Cycle Support, and Computer-Aided Systems Engineering:
Computer-Aided Design (CAD). CAD tools are used to describe the product electronically to facilitate and support design decisions. CAD can model diverse aspects of the system, such as how components can be laid out on electrical/electronic circuit boards, how piping or conduit is routed, or how diagnostics will be performed. It is used to lay out systems or components for sizing, positioning, and space allocation using two- or three-dimensional displays. It uses three-dimensional "solid" models to ensure that assemblies, surfaces, intersections, interfaces, etc., are clearly defined. Most CAD tools automatically generate isometric and exploded views of detailed dimensional and assembly drawings, and determine component surface areas, volumes, weights, moments of inertia, centers of gravity, etc. Additionally, many CAD tools can develop three-dimensional models of facilities, operator consoles, maintenance workstations, etc., for evaluating man-machine interfaces. CAD tools are available in numerous varieties, reflecting different degrees of capability, fidelity, and cost. The commercial CAD/CAM product Computer-Aided Three-Dimensional Interactive Application (CATIA) was used to develop the Boeing 777 and is a good example of current state-of-the-art CAD.
Computer-Aided Engineering (CAE). CAE provides automation of requirements and performance analyses in support of trade studies. It normally automates technical analyses such as stress, thermodynamic, acoustic, vibration, or heat transfer analysis. Additionally, it can provide automated processes for functional analyses such as fault isolation and testing, failure mode, and safety analyses. CAE can also automate the life-cycle-oriented analyses necessary to support the design: maintainability, producibility, human factors, logistics support, and value/cost analyses are available with CAE tools.
Computer-Aided Manufacturing (CAM). CAM tools are generally designed to provide automated support both to production process planning and to the project management process. Process planning attributes of CAM include establishing numerical control parameters, controlling machine tools using pre-coded instructions, programming robotic machinery, handling material, and ordering replacement parts. The production management aspect of CAM provides management control over production-relevant data, uses historical actual costs to predict cost and plan activities, identifies schedule slips or slack on a daily basis, and tracks metrics relative to procurement, inventory, forecasting, scheduling, cost reporting, support, quality, maintenance, capacity, etc. A common example of a computer-based project planning and control tool is Manufacturing Resource Planning II (MRP II). Some CAM programs can accept data directly from a CAD program. With this type of tool, generally referred to as CAD/CAM, substantial CAM data is generated automatically by importing the CAD data directly into the CAM software.
Computer-Aided Systems Engineering (CASE). CASE tools provide automated support for the systems engineering process and associated processes. CASE tools can provide automated support for integrating systems engineering activities, performing the systems engineering tasks outlined in previous chapters, and performing the systems analysis and control activities. CASE provides technical management support and has a broader capability than either CAD or CAE. An increasing variety of CASE tools is available as competition brings more products to market, and many of these support the commercial "best systems engineering practices."
Continuous Acquisition and Life Cycle Support (CALS). CALS relates to the application of computerized technology to plan and implement support functions. The emphasis is on information relating to maintenance, supply support, and associated functions. An important aspect of CALS is the importation of information developed during design and production. A key CALS function is to support the maintenance of the system configuration during the operation and support phase. In DoD, CALS supports activities of the logistics community rather than the specific program office, and transfer of data between the CAD or CAM programs and CALS has been problematic. As a result, there is current emphasis on development of standards for compatible data exchange. Formats of import include two- and three-dimensional models (CAD), ASCII formats (technical manuals), two-dimensional illustrations (technical manuals), and engineering drawing formats (raster, aperture cards). These formats will be employed in the Integrated Data Environment (IDE) that is mandated for use in DoD program offices.
Live Simulation
Live simulations are simulated operations of real systems using real people in realistic situations. The intent is to put the system, including its operators, through an operational scenario in which some conditions and environments are mimicked to provide a realistic operating situation. Examples of live simulations range from fleet exercises to fire drills.

Eventually, live simulations must be performed to validate constructive and virtual simulations. However, live simulations are usually costly, and trade studies should be performed to support the balance of simulation types chosen for the program.
Figure 13-2. Verification, Validation, and Accreditation. (The figure pairs each activity with its agent and characteristic statement: verification by the developer ("It works as I thought it would."), validation by the functional expert ("It looks just like the real thing."), and accreditation by the requester/user ("It suits my needs."). As design matures, re-examine basic assumptions.)
13.3 HARDWARE VERSUS SOFTWARE
Though current emphasis is on software M&S, the decision whether to use hardware, software, or a combined approach depends on the complexity of the system, the flexibility needed for the simulation, the level of fidelity required, and the potential for reuse. Software capabilities are increasing, making software solutions cost effective for large, complex projects and repeated processes. Hardware methods are particularly useful for validation of software M&S, simple or one-time projects, and quick checks on changes to production systems. M&S methods vary widely in cost, and analysis of the cost versus the benefits of potential M&S methods should be performed to support planning decisions.
13.4 VERIFICATION, VALIDATION, AND ACCREDITATION
How can you trust the model or simulation? Establish confidence in your model or simulation through formal verification, validation, and accreditation (VV&A). VV&A is usually identified with software, but the basic concept applies to hardware as well. Figure 13-2 shows the basic differences between the terms.
More specifically:
• Verification is the process of determining that a model implementation accurately represents the developer's conceptual description and the specifications to which the model was designed.

• Validation is the process of determining the manner and degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model, and of establishing the level of confidence that should be placed on this assessment.

• Accreditation is the formal certification that a model or simulation is acceptable for use for a specific purpose. Accreditation is conferred by the organization best positioned to judge that the model or simulation in question is acceptable. That organization may be an operational user, the program office, or a contractor, depending upon the purposes intended.
Chapter 13 Modeling and Simulation
121
VV&A is particularly necessary in cases where:
• Complex and critical interoperability is being represented,
• Reuse is intended,
• Safety of life is involved, and
• Significant resources are involved.
VV&A Currency
VV&A is applied at initial development and use. The VV&A process is required for all DoD simulations and should be redone whenever existing models and simulations undergo a major upgrade or modification. Additionally, whenever a different use of the model or simulation violates the documented methodology or the inherent boundaries under which it was verified and validated, VV&A must be redone. Accreditation, however, may remain valid for the specific application, as long as its use or what it simulates does not change, unless revoked by the accreditation agent.
13.5 CONSIDERATIONS
A number of considerations should enter into decisions regarding the acquisition and employment of modeling and simulation in defense acquisition management. Among these are such concerns as cost, fidelity, planning, balance, and integration.
Cost Versus Fidelity
Fidelity is the degree to which aspects of the real world are represented in M&S. It is the foundation for development of the model and for subsequent VV&A. Cost effectiveness is a serious issue with simulation fidelity, because fidelity can be an aggressive cost driver. The correct balance between cost and fidelity should be the result of simulation needs analysis; M&S designers and VV&A agents must decide when enough is enough. Fidelity needs can vary throughout the simulation. This variance should be identified by analysis and planned for.
Note of caution: don't confuse the quality of the display with the quality of meeting simulation needs! An example of the fidelity range is a well-known flight simulator using a PC and a simple joystick versus a fully instrumented, six-degree-of-freedom aircraft cockpit. Both have value at different stages of flight training, but they vary significantly in cost, from thousands of dollars to millions. This cost difference is based on fidelity, or degree of real-world accuracy.
Planning
Planning should be an inherent part of M&S, and, therefore, it must be proactive, early, continuous, and regular. Early planning will help achieve balance and beneficial reuse and integration. With computer and simulation technologies evolving so rapidly, planning is a dynamic process. It must be a continuing process, and it is important that the appropriate simulation experts be involved to maximize the use of new capabilities. M&S activities should be part of the integrated teaming and involve all responsible organizations. Integrated teams must develop their M&S plans and insert them into the overall planning process, including the TEMP, the acquisition strategy, and any other program planning activity.
M&S planning should include:
• Identification of the activities responsible for each VV&A element of each model or simulation, and

• Thorough VV&A estimates, formally agreed to by all activities involved in M&S, including T&E commitments from the developmental testers, operational testers, and separate VV&A agents.
Those responsible for the VV&A activities must be identified as a normal part of planning. Figure 13-2 shows the developer as the verification agent, the functional expert as the validation agent, and the user as the accreditation agent. In general this is appropriate for virtual simulations; however, the manufacturer of a constructive simulation would usually be expected to justify or warrant its program's use for a particular application. The question of who should actually accomplish VV&A is one that is answered in planning. VV&A requirements should be specifically called out in tasking documents and contracts. When appropriate, VV&A should be part of the contractor's proposal and negotiated prior to contract award.
Balance
Balance refers to the use of M&S across the phases of the product life cycle and across the spectrum of functional disciplines involved. The term may further refer to the use of hardware versus software, fidelity level, VV&A level, and even use versus non-use. Balance should always be based on cost effectiveness analysis. Cost effectiveness analyses should be comprehensive; that is, M&S should be properly considered for use in all parallel applications and across the complete life cycle of the system's development and use.
Integration
Integration is obtained by designing a model or simulation to interoperate with other models or simulations for the purpose of increased performance, cost benefit, or synergism. Multiple benefits or savings can be gained from increased synergism and use over time and across activities. Integration is achieved through reuse or upgrade of legacy programs used by the system, or through proactive planning of the integrated development of new simulations. In the latter case, integration is accomplished through the planned use of models, simulations, or data multiple times or for multiple applications over the system life cycle. The planned upgrade of M&S for evolving or parallel uses supports the application of open systems architecture to the system design. M&S efforts that are established to perform a specific function by a specific contractor, subcontractor, or government activity will tend to be sub-optimized. To achieve integration, M&S should be managed at least at the program office level.

Figure 13-3. A Robust Integrated Use of Simulation Technology. (The figure shows a distributed framework and distributed repository linking requirements, concept development, functional design, physical and HW/SW design, engineering development and manufacturing, testing, program management, operations/logistics/training, and other systems.)
The Future Direction
DoD, the Services, and their commands have strongly endorsed the use of M&S throughout the acquisition life cycle. The supporting simulation technology is also evolving as fast as computer technology changes, providing greater fidelity and flexibility. As more simulations are interconnected, the opportunities for further integration expand, and M&S successes to date also accelerate its use. The current focus is to achieve open systems of simulations, so they can be plug-and-play across the spectrum of applications. From concept analysis through disposal analysis, programs may use hundreds of different simulations, simulators, and model analysis tools. Figure 13-3 shows conceptually how an integrated program M&S would affect the functions of the acquisition process.
A formal DoD initiative, Simulation Based Acquisition (SBA), is currently underway. The SBA vision is to advance the implementation of M&S in the DoD acquisition process toward a robust, collaborative use of simulation technology that is integrated across acquisition phases and programs. The result will be programs that are much better integrated in an IPPD sense, and which are much more efficient in the use of the time and dollars expended to meet the needs of operational users.
13.6 SUMMARY
• M&S provides virtual duplication of products and processes, and represents those products or processes in readily available and operationally valid environments.

• M&S should be applied throughout the system life cycle in support of systems engineering activities.

• The three classes of models and simulations are virtual, constructive, and live.

• Establish confidence in your model or simulation through formal VV&A.

• M&S planning should be an inherent part of systems engineering planning, and, therefore, proactive, early, continuous, and regular.

• A more detailed discussion of the use and management of M&S in DoD acquisition is available in the DSMC publication Systems Acquisition Manager's Guide for the Use of Models and Simulations.

• An excellent second source is the DSMC publication Simulation Based Acquisition – A New Approach, which surveys applications of increasing integration of simulation in current DoD programs and the resulting benefits of greater integration.
CHAPTER 14
METRICS
14.1 METRICS IN MANAGEMENT

Metrics are measurements collected for the purpose of determining project progress and overall condition by observing the change of the measured quantity over time. Management of technical activities requires use of three basic types of metrics:

• Product metrics that track the development of the product,

• Earned value, which tracks conformance to the planned schedule and cost, and

• Management process metrics that track management activities.

Measurement, evaluation, and control of metrics is accomplished through a system of periodic reporting that must be planned, established, and monitored to assure metrics are properly measured and evaluated and the resulting data disseminated.

Product Metrics

Product metrics are those that track key attributes of the design to observe progress toward meeting customer requirements. Product metrics reflect three basic types of requirements: operational performance, life-cycle suitability, and affordability. The key set of systems engineering metrics are the Technical Performance Measurements (TPMs). TPMs are product metrics that track design progress toward meeting customer performance requirements. They are closely associated with the systems engineering process because they directly support traceability of operational needs to the design effort. TPMs are derived from Measures of Performance (MOPs), which reflect system requirements. MOPs are derived from Measures of Effectiveness (MOEs), which reflect operational performance requirements.

The term "metric" implies quantitatively measurable data. In design, the usefulness of metric data is greater if it can be measured at the configuration item level. For example, weight can be estimated at all levels of the WBS. Speed, though an extremely important operational parameter, cannot be allocated down through the WBS; it cannot be measured, except through analysis and simulation, until an integrated product is available. Since weight is an important factor in achieving speed objectives, and weight can be measured at various levels as the system is being developed, weight may be the better choice as a metric. It has a direct impact on speed, so it traces to the operational requirement; but, most importantly, it can be allocated throughout the WBS, and progress toward achieving weight goals may then be tracked through development to production.

Measures of Effectiveness and Suitability

Measures of Effectiveness (MOEs) and Measures of Suitability (MOSs) are measures of operational effectiveness and suitability in terms of operational outcomes. They identify the most critical performance requirements needed to meet system-level mission objectives, and they reflect key operational needs in the operational requirements document.

Operational effectiveness is the overall degree of a system's capability to achieve mission success considering the total operational environment. For example, weapon system effectiveness would consider environmental factors such as operator organization, doctrine, and tactics; survivability; vulnerability; and threat characteristics. MOSs, on the other hand, measure the extent to which the system integrates well into the operational environment, and would consider such issues as supportability, human interface compatibility, and maintainability.
Measures of Performance
MOPs characterize physical or functional attributes relating to the execution of the mission or function. They quantify a technical or performance requirement directly derived from MOEs and MOSs. MOPs should relate to these measures such that a change in an MOP can be related to a change in an MOE or MOS. MOPs should also reflect key performance requirements in the system specification. MOPs are used to derive, develop, support, and document the performance requirements that will be the basis for design activities and process development. They also identify the critical technical parameters that will be tracked through TPMs.
Technical Performance Measurements
TPMs are derived directly from MOPs and are selected as being critical from a periodic review and control standpoint. TPMs help assess design progress, assess compliance to requirements throughout the WBS, and assist in monitoring and tracking technical risk. They can identify the need for deficiency recovery, and they provide information to support cost-performance sensitivity assessments. TPMs can include range, accuracy, weight, size, availability, power output, power required, process time, and other product characteristics that relate directly to the system operational requirements.
TPMs traceable to WBS elements are preferred, so that elements within the system can be monitored as well as the system as a whole. However, some necessary TPMs will be limited to the system or subsystem level. For example, the specific fuel consumption of an engine would be a TPM necessary to track during the engine development, but it is not allocated throughout the WBS; it is reported as a single data item reflecting the performance of the engine as a whole. In this case the metric will indicate whether the design approach is consistent with the required performance, but it may not be useful as an early warning device to indicate progress toward meeting the design goal. A more detailed discussion of TPMs is available as Supplement A to this chapter.
Example of Measures
MOE: The vehicle must be able to drive fully loaded from Washington, DC, to Tampa on one tank of fuel.

MOP: Vehicle range must be equal to or greater than 1,000 miles.

TPM: Fuel consumption, vehicle weight, tank size, drag, power train friction, etc.
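The traceability from TPMs up to the MOP in this example can be illustrated with a small calculation: estimating range from two of the measurable parameters (tank size and fuel consumption) and checking the estimate against the 1,000-mile MOP. All numeric values and names below are invented for the illustration.

```python
# The MOP from the example above: range equal to or greater than 1,000 miles.
MOP_RANGE_MILES = 1000.0

def estimated_range(tank_size_gal, fuel_consumption_gal_per_mile):
    """Range achievable on one tank, computed from two TPM-level estimates."""
    return tank_size_gal / fuel_consumption_gal_per_mile

def mop_satisfied(tank_size_gal, fuel_consumption_gal_per_mile):
    """Does the current design estimate meet the range MOP?"""
    return estimated_range(tank_size_gal, fuel_consumption_gal_per_mile) >= MOP_RANGE_MILES

# Hypothetical current design estimates: 50-gallon tank, 0.04 gal/mile.
current = estimated_range(50.0, 0.04)  # 1,250 miles, so the MOP is met
```

Because tank size and fuel consumption can be estimated at lower WBS levels well before an integrated vehicle exists, tracking them gives the early warning that tracking range directly cannot.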
Suitability Metrics
Tracking metrics relating to operational suitability and other life cycle concerns may be appropriate to monitor progress toward an integrated design. Operational suitability is the degree to which a system can be placed satisfactorily in field use considering availability, compatibility, transportability, interoperability, reliability, usage rates, maintainability, safety, human factors, documentation, training, manpower, supportability, logistics, and environmental impacts. These suitability parameters can generate product metrics that indicate progress toward an operationally suitable system. For example, factors that indicate the level of automation in the design would reflect progress toward achieving manpower quantity and quality requirements. TPMs and suitability product metrics commonly overlap; for example, Mean Time Between Failure (MTBF) can reflect both effectiveness and suitability requirements.

Suitability metrics would also include measurements that indicate improvement in producibility, testability, degree of design simplicity, and design robustness. For example, tracking the number of parts, number of like parts, and number of wearing parts provides indicators of producibility, maintainability, and design simplicity.
Product Affordability Metrics
Estimated unit production cost can be tracked during the design effort in a manner similar to the TPM approach, with each CI element reporting an estimate based on the current design. These estimates are combined at higher WBS levels to provide subsystem and system cost estimates. This provides a running engineering estimate of unit production cost, tracking of conformance to Design-to-Cost (DTC) goals, and a method to isolate design problems relating to production costs.
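The roll-up of CI-level estimates to a system unit cost might be sketched as follows. The WBS structure, element names, dollar figures, and DTC goal are all hypothetical.

```python
# Hypothetical WBS: nested dict where leaves are CI-level unit cost estimates.
WBS = {
    "1 Air Vehicle": {
        "1.1 Airframe": 410_000,
        "1.2 Propulsion": 275_000,
        "1.3 Avionics": {
            "1.3.1 Radar": 120_000,
            "1.3.2 Mission Computer": 95_000,
        },
    },
}

def rollup(node):
    """Sum leaf CI estimates up through each WBS level."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

DTC_GOAL = 950_000                 # hypothetical design-to-cost goal
unit_cost = rollup(WBS)            # running system-level engineering estimate
within_goal = unit_cost <= DTC_GOAL
```

Because each subtree can be rolled up independently, a variance against the DTC goal can be isolated to the subsystem or CI that caused it, which is the point of tracking the estimate throughout the WBS rather than only at the system level.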
Life cycle affordability can be tracked through factors that are significant in parametric life cycle cost calculations for the particular system. For example, two factors that reflect life cycle cost for most transport systems are fuel consumption and weight, both of which can be tracked as metrics.
Timing
Product metrics are tied directly to the design process. Planning for metric identification, reporting, and analysis begins with initial planning in the concept exploration phase. The earliest systems engineering planning should define the management approach; identify the performances or characteristics to be measured and tracked; forecast values for those performances or characteristics; determine when assessments will be done; and establish the objectives of assessment.

Implementation begins with the development of the functional baseline. During this period, systems engineering planning will identify critical technical parameters; time-phased planned profiles with tolerance bands and thresholds; reviews, audits, or events dependent on or critical for achievement of planned profiles; and the method of estimation. During the design effort, from functional to product baseline, the plan will be implemented and continually updated by the systems engineering process. To support implementation, contracts should include provision for contractors to provide measurement, analysis, and reporting. The need to track product metrics ends in the production phase, usually concurrent with the establishment of the product (as-built) baseline.
DoD and Industry Policy on Product Metrics
Analysis and control activities shall include performance metrics to measure technical development and design, actual versus planned, and to measure [the extent to which systems meet requirements]. – DoD 5000.2-R

The performing activity establishes and implements TPM to evaluate the adequacy of evolving solutions to identify deficiencies impacting the ability of the system to satisfy a designated value for a technical parameter. – EIA IS-632, Section 3

The performing activity identifies the technical performance measures which are key indicators of system performance... [and which] should be limited to critical MOPs which, if not met, put the project at cost, schedule, or performance risk. – IEEE 1220, Section 6
14.2 EARNED VALUE
Earned value is a metric reporting system that uses cost-performance metrics to track the cost and schedule progress of system development against a projected baseline. It is a "big picture" approach that integrates concerns related to performance, cost, and schedule. Referring to Figure 14-1, if we think of the line labeled BCWP (budgeted cost of work performed) as the value that the contractor has "earned," then deviations from this baseline indicate problems in either cost or schedule. For example, if actual costs vary from budgeted costs, we have a cost variance; if work performed varies from work planned, we have a schedule variance. The projected performance is based on estimates of the appropriate cost and schedule to perform the work required by each WBS element. When a variance occurs, the system engineer can pinpoint WBS elements that have potential technical development problems. Combined with product metrics, earned value is a powerful technical management tool for detecting and understanding development problems.
Relationships exist between product metrics, the event schedule, the calendar schedule, and earned value:
• The event schedule includes tasks for each event/exit criterion that must be performed to meet key system requirements, which are directly related to product metrics.

• The calendar (detail) schedule includes the time frames established to meet those same product metric-related objectives (schedules).

• Earned value includes the cost/schedule impacts of not meeting those objectives and, when correlated with product metrics, can identify emerging program and technical risk.
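The two variances described in the earned value discussion follow the standard definitions (cost variance = BCWP − ACWP; schedule variance = BCWP − BCWS). A minimal sketch, with invented numbers:

```python
def cost_variance(bcwp, acwp):
    """Budgeted cost of work performed minus actual cost of work performed.
    Negative means the work cost more than budgeted."""
    return bcwp - acwp

def schedule_variance(bcwp, bcws):
    """Budgeted cost of work performed minus budgeted cost of work scheduled.
    Negative means less work was performed than planned."""
    return bcwp - bcws

# A hypothetical WBS element at time now: $120K of work was scheduled,
# $100K worth was performed, and $130K was actually spent performing it.
cv = cost_variance(100_000, 130_000)      # -30,000: over cost
sv = schedule_variance(100_000, 120_000)  # -20,000: behind schedule
```

Computed per WBS element, as the text describes, a negative variance points the systems engineer at the specific element with a potential technical development problem rather than at the program as a whole.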
14.3 PROCESS METRICS
Management process metrics are measurements taken to track the process of developing, building, and introducing the system. They include a wide range of potential factors, and selection is program unique. They measure such factors as availability of resources, activity time rates, items completed, completion rates, and customer or team satisfaction.
Examples of these factors are: number of trained personnel onboard, average time to approve/disapprove ECPs, lines of code or drawings released, ECPs resolved per month, and team risk identification or feedback assessments. Appropriate metrics should be selected to track key management activities; selection of these metrics is part of the systems engineering planning process.
How Much Metrics?
The choice of the amount and depth of metrics is a planning function that seeks a balance between risk and cost. It depends on many considerations, including system complexity, organizational complexity, reporting frequency, the number of contractors, program office size and makeup, contractor past performance, political visibility, and contract type.
14.4 SUMMARY POINTS
• Management of technical activities requires use of three basic types of metrics: product metrics that track the development of the product; earned value, which tracks conformance to the planned schedule and cost; and management process metrics that track management activities.

Figure 14-1. Earned Value Concept. (The figure plots dollars against time: the performance measurement baseline (PMB) of budgeted cost of work scheduled (BCWS), the budgeted cost of work performed (BCWP), and the actual cost of work performed (ACWP). At time now, the gap between BCWP and BCWS is the schedule variance and the gap between BCWP and ACWP is the cost variance. Also shown are the management reserve, total allocated budget, estimate at completion (EAC), projected slippage of the completion date, and the over-budget condition.)
• Measurement, evaluation, and control of metrics is accomplished through a system of periodic reporting that must be planned, established, and monitored to assure metrics are measured properly, evaluated, and the resulting data disseminated.

• TPMs are performance-based product metrics that track progress through measurement of key technical parameters. They are important to the systems engineering process because they connect operational requirements to measurable design characteristics and help assess how well the effort is meeting those requirements. TPMs are required for all programs covered by DoD 5000.2-R.
SUPPLEMENT 14-A
TECHNICAL PERFORMANCE MEASUREMENT
Technical Performance Measurement (TPM) is an analysis and control technique that is used to: (1) project the probable performance of a selected technical parameter over a period of time, (2) record the actual performance observed of the selected parameter, and (3) through comparison of actual versus projected performance, assist the manager in decision making. A well thought out program of technical performance measures provides an early warning of technical problems and supports assessments of the extent to which operational requirements will be met, as well as assessments of the impacts of proposed changes in system performance.

TPMs generally take the form of both graphic displays and narrative explanations. The graphic, an example of which is shown in Figure 14-2, shows the projected behavior of the selected parameter as a function of time, and further shows actual observations, so that deviations from the planned profile can be assessed. The narrative portion of the report should explain the graphic, addressing the reasons for deviations from the planned profile, assessing the seriousness of those deviations, explaining actions underway to correct the situation if required, and projecting future performance, given the current situation.
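As an illustration of the planned-profile comparison a TPM graphic supports, the sketch below checks achieved-to-date values against planned values, a tolerance band, and a not-to-exceed threshold. The function name, data, and tolerance scheme are hypothetical assumptions, not from the text:

```python
# Hypothetical TPM status check of achieved-to-date values against a
# planned profile. Parameter, values, and tolerance scheme are illustrative.
def tpm_status(planned, achieved, tolerance, threshold):
    """Compare achieved values with planned values at each milestone (month)."""
    report = []
    for month, plan in sorted(planned.items()):
        if month not in achieved:
            continue  # no observation yet at this milestone
        actual = achieved[month]
        variation = actual - plan  # e.g., pounds of weight growth
        report.append({
            "milestone": month,
            "variation": variation,
            "within_tolerance": abs(variation) <= tolerance,
            "breaches_threshold": actual > threshold,  # not-to-exceed parameter
        })
    return report

# Example: aircraft weight (lower is better); threshold is the contractual limit.
planned = {6: 10_000, 12: 10_400, 18: 10_600}
achieved = {6: 10_150, 12: 10_700}
for entry in tpm_status(planned, achieved, tolerance=200, threshold=10_650):
    print(entry)
```

In this illustrative run the month-12 observation both exceeds the tolerance band and breaches the threshold, which is precisely the condition the narrative portion of a TPM report would have to explain.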
Figure 14-2. Technical Performance Measurement – The Concept [figure: the planned value profile of a technical parameter value (e.g., weight) plotted against time and program milestones, with a tolerance band around the planned profile, achieved-to-date values, the variation between achieved and planned values, and threshold and goal limits]
Chapter 14 Metrics
Parameters to be tracked are typically based on the combined needs of the government and the contractor. The government program office will need a set of TPMs which provide visibility into the technical performance of key elements of the WBS, especially those which are cost drivers on the program, lie on the critical path, or represent high-risk items.

The TPMs selected for delivery to the government are expected to be traceable to the needs of the operational user. The contractor will generally track more items than are reported to the government, as the contractor needs information at a more detailed level than does the government program office.

TPM reporting to the government is a contractual issue, and those TPMs on which the government receives reports are defined as contract deliverables in the contract data requirements list. Which parameters are selected for reporting depends on a number of issues, among which are resources to purchase TPMs, the availability of people to review and follow the items, the complexity of the system involved, the phase of development, and the contractor's past experience with similar systems.

A typical TPM graphic will take a form somewhat like that previously shown. The actual form of the projected performance profile, and whether or not tolerance bands are employed, will be a function of the parameter selected and the needs of the program office.
Another important consideration is the relationship between the TPM program and risk management. Generally, the parameters selected for tracking should be related to the risk areas on the program. If a particular element of the design has been identified as a risk area, then parameters should be selected which will enable the manager to track progress in that area. For example, if achieving a required aircraft range is considered to be critical and a risk area, then tracking parameters that provide insight into range would be selected, such as aircraft weight, specific fuel consumption, drag, etc. Furthermore, there should be consistency between TPMs and the Critical Technical Parameters associated with formal testing, although the TPM program will not normally be limited to just those parameters identified as critical for test purposes.
Government review and follow-up of TPMs are appropriate on a periodic basis when submitted by the contractor, and at other major technical events such as technical reviews, test events, and program management reviews.

While TPMs are expected to be traceable to the needs of the user, they must be concrete technical parameters that can be projected and tracked. For example, an operational user may have a requirement for survivability under combat conditions. Survivability is not, in and of itself, a measurable parameter, but there are important technical parameters that determine survivability, such as radar cross section (RCS) and speed. Therefore, the technical manager might select and track RCS and speed as elements for TPM reporting. The decision on selection of parameters for TPM tracking must also take into consideration the extent to which the parameter behavior can be projected (profiled over a time period) and whether or not it can actually be measured. If the parameter cannot be profiled or measured, or is not critical to program success, then the government, in general, should not select it for TPM tracking. The WBS structure makes an excellent starting point for consideration of parameters for TPM tracking (see Figure 14-3).

A substantial effort has taken place in recent years to link TPMs with Earned Value Management in a way that would result in earned value calculations that reflect the risks associated with achieving technical performance. The approach used establishes the statistical probability of achieving a projected level of performance on the TPM profile, based on a statistical analysis of actual versus planned performance. Further information is available on the Internet at http://www.acq.osd.mil/api/tpm/.

In summary, TPMs are an important tool in the program manager's systems analysis and control toolkit. They provide an early warning about deviations in key technical parameters, which, if not controlled, can impact system success in meeting user needs. TPMs should be an integral part of both periodic program reporting and management follow-up, as well as elements for discussion in technical reviews and program management reviews. By thoughtful use of a good program of TPM, the manager, whether technically grounded or not, can make perceptive judgments about system technical performance and can follow up on contractor plans and progress when deviations occur.
Figure 14-3. Shipboard Fire Control System (Partial) [figure: a partial WBS for a fire control system, showing candidate TPM parameters at the subsystem level (e.g., detection range, power density, slew time, CWI and TI antenna side lobes, TI track accuracy, pointing accuracy, AM/FM noise, power, weight, MTBF, MTTR, angular and range resolution) and at the component level for the CW transmitter (AM/FM noise, radiated power, MTBF), data processor (MTBF, memory, processor speed, MTTR), and antenna (slew time, beam width, side lobes, MTTR)]
Relevant Terms
Achievement to date – Measured or estimated progress plotted and compared with planned progress by designated milestone date.

Current estimate – Expected value of a technical parameter at contract completion.

Planned value – Predicted value of the parameter at a given point in time.

Planned profile – Time-phased projected planned values.

Tolerance band – Management alert limits representing the projected level of estimating error.

Threshold – Limiting acceptable value, usually contractual.

Variance – Difference between the planned value and the achievement-to-date, derived from analysis, test, or demonstration.
Chapter 15 Risk Management
CHAPTER 15

RISK MANAGEMENT

15.1 RISK AS REALITY

Risk is inherent in all activities. It is a normal condition of existence. Risk is the potential for a negative future reality that may or may not happen. Risk is defined by two characteristics of a possible negative future event: probability of occurrence (whether something will happen) and consequences of occurrence (how catastrophic if it happens). If the probability of occurrence is not known, then one has uncertainty, and the risk is undefined.

Risk is not a problem. It is an understanding of the level of threat due to potential problems. A problem is a consequence that has already occurred.

In fact, knowledge of a risk is an opportunity to avoid a problem. Risk occurs whether there is an attempt to manage it or not. Risk exists whether you acknowledge it, whether you believe it, whether it is written down, or whether you understand it. Risk does not change because you hope it will, because you ignore it, or because your boss's expectations do not reflect it. Nor will it change just because it is contrary to policy, procedure, or regulation. Risk is neither good nor bad. It is just how things are. Progress and opportunity are companions of risk. In order to make progress, risks must be understood, managed, and reduced to acceptable levels.

Types of Risk in a Systems Engineering Environment

Systems engineering management related risks could be related to the system products or to the process of developing the system. Figure 15-1 shows the decomposition of system development risks.

Figure 15-1. Risk Hierarchy [figure: development risk decomposes into product risks (the prime mission product and supporting products) and management-of-development risks (internal processes and external influences)]
Systems Engineering Fundamentals Chapter 15
Figure 15-2. Four Elements of Risk Management [figure: a continuous interlocked process, not an event: Plan (what, when, how), Assess (identify and analyze), Handle (mitigate the risk), and Monitor and Report (know what's happening)]
Risks related to the system development generally are traceable to achieving life cycle customer requirements. Product risks include both end product risks, which relate to the basic performance and cost of the system, and enabling product risks, which relate to the products that produce, maintain, support, test, train, and dispose of the system.

Risks relating to the management of the development effort can be technical management risks or risks caused by external influences. Risks dealing with internal technical management include those associated with schedules, resources, work flow, on-time deliverables, availability of appropriate personnel, potential bottlenecks, critical path operations, and the like. Risks dealing with external influences include resource availability, higher authority delegation, level of program visibility, regulatory requirements, and the like.
15.2 RISK MANAGEMENT
Risk management is an organized method for identifying and measuring risk and for selecting, developing, and implementing options for the handling of risk. It is a process, not a series of events. Risk management depends on risk management planning, early identification and analysis of risks, continuous risk tracking and reassessment, early implementation of corrective actions, communication, documentation, and coordination. Though there are many ways to structure risk management, this book will structure it as having four parts: Planning, Assessment, Handling, and Monitoring. As depicted in Figure 15-2, all of the parts are interlocked to demonstrate that after initial planning the parts become dependent on each other. Illustrating this, Figure 15-3 shows the key control and feedback relationships in the process.
Risk Planning
Risk Planning is the continuing process of developing an organized, comprehensive approach to risk management. The initial planning includes establishing a strategy; establishing goals and objectives; planning assessment, handling, and monitoring activities; identifying resources, tasks, and responsibilities; organizing and training risk management IPT members; establishing a method to track risk items; and establishing a method to document and disseminate information on a continuous basis.

Figure 15-3. Risk Management Control and Feedback [figure: planning tells assessment how to assess, handling how to handle, and monitoring/reporting how to monitor and report; assessment tells handling what to handle and monitoring what to monitor and report; monitoring returns continuous feedback for planning adjustment, for management, and for reassessment as risks change]
In a systems engineering environment risk planning should be:

• Inherent (embedded) in systems engineering planning and other related planning, such as producibility, supportability, and configuration management;

• A documented, continuous effort;

• Integrated among all activities;

• Integrated with other planning, such as systems engineering planning, supportability analysis, production planning, configuration and data management, etc.;

• Integrated with previous and future phases; and

• Selective for each Configuration Baseline.
Risk is altered by time. As we try to control or alter risk, its probability and/or consequence will change. Judgment of the risk impact and the method of handling the risk must be reassessed and potentially altered as events unfold. Since these events are continually changing, the planning process is a continuous one.
Risk Assessment
Risk assessment consists of identifying and analyzing the risks associated with the life cycle of the system.
Risk Identification Activities
Risk identification activities establish what risks are of concern. These activities include:
• Identifying risk/uncertainty sources and drivers,
• Transforming uncertainty into risk,
• Quantifying risk,
• Establishing probability, and
• Establishing the priority of risk items.
Figure 15-4. Initial Risk Identification [figure: identify and list all risks (product, supporting products, internal management processes, external influences); establish a risk definition matrix of probability versus consequence, with definitions established early in the program life cycle, and assign risks to a risk area; then establish a risk priority list based on the matrix, including a critical "high risk" list and a "moderate risk" list]
As shown by Figure 15-4, the initial identification process starts with an identification of potential risk items in each of the four risk areas. Risks related to system performance and supporting products are generally organized by WBS and initially determined by expert assessment of teams and individuals in the development enterprise. These risks tend to be those that require follow-up quantitative assessment. Internal process and external influence risks are also determined by expert assessment within the enterprise, as well as through the use of risk area templates similar to those found in DoD 4245.7-M. The DoD 4245.7-M templates describe the risk areas associated with system acquisition management processes, and provide methods for reducing traditional risks in each area. These templates should be tailored for specific program use based on expert feedback.

After identifying the risk items, the risk level should be established. One common method is through the use of a matrix such as that shown in Figure 15-5. Each item is associated with a block in the matrix to establish relative risk among the items. On such a graph risk increases along the diagonal, which provides a method for assessing relative risk. Once the relative risk is known, a priority list can be established and risk analysis can begin.
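The matrix-based prioritization described above can be sketched as follows. The numeric probability/consequence bands and cut-offs are illustrative assumptions; a real program would establish its own definitions early in the life cycle, as the figures note:

```python
# Hypothetical sketch of the probability/consequence matrix of Figures 15-4
# and 15-5. Numeric cut-offs and risk names are illustrative only.
def risk_level(probability, consequence):
    """Map probability and consequence ratings (0.0-1.0) to a risk level."""
    def band(x):
        return "low" if x < 0.4 else "moderate" if x < 0.7 else "high"
    p, c = band(probability), band(consequence)
    if "high" in (p, c) and "low" not in (p, c):
        return "high"
    if (p, c) == ("low", "low"):
        return "low"
    return "moderate"

# Assign each identified risk item a block in the matrix, then prioritize.
risks = {
    "radar range": (0.8, 0.9),
    "software staffing": (0.5, 0.6),
    "paint spec": (0.2, 0.1),
}
priority_list = sorted(risks, key=lambda r: risks[r], reverse=True)
for name in priority_list:
    print(name, risk_level(*risks[name]))
```

The sorted list corresponds to the risk priority list of Figure 15-4, from which the critical "high risk" and "moderate risk" lists would be drawn.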
Risk identification efforts can also include activities that help define the probability or consequences of a risk item, such as:

• Testing and analyzing uncertainty away,

• Testing to understand probability and consequences, and

• Activities that quantify risk where qualitative high, moderate, and low estimates are insufficient for adequate understanding.
Risk Analysis Activities
Risk analysis activities continue the assessment process by refining the description of identified risk events through isolation of the cause of risk, determination of the full impact of risk, and the determination and choice of alternative courses of action. They are used to determine what risk should be tracked, what data are used to track risk, and what methods are used to handle the risk.

Figure 15-5. Simple Risk Matrix [figure: a probability (low to high) versus consequence (low to high) matrix whose cells are rated low, moderate, or high risk; definitions are established early in the program life cycle]
Risk analysis explores the options, opportunities, and alternatives associated with the risk. It addresses the questions of how many legitimate ways the risk could be dealt with and the best way to do so. It examines sensitivity and risk interrelationships by analyzing the impacts and sensitivity of related risks and performance variation. It further analyzes the impact of potential and accomplished changes, both external and internal.

Risk analysis activities that help define the scope and sensitivity of the risk item include finding answers to the following questions:

• If something changes, will risk change faster, slower, or at the same pace?

• If a given risk item occurs, what collateral effects happen?

• How does it affect other risks?

• How does it affect the overall situation?

Risk analysis activities also include:

• Development of a watch list (a prioritized list of risk items that demand constant attention by management) and a set of metrics to determine whether risks are steady, increasing, or decreasing.

• Development of a feedback system to track metrics and other risk management data.

• Development of quantified risk assessment.
Quantified risk assessment is a formal quantification of probabilities of occurrence and consequences using a top-down structured process following the WBS. For each element, risks are assessed through analysis, simulation, and test to determine the statistical probability and the specific conditions caused by the occurrence of the consequence.
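A minimal sketch of such a WBS-ordered quantification follows, assuming the common convention that exposure equals probability times consequence; the text does not prescribe a formula, and the WBS elements and numbers are hypothetical:

```python
# Illustrative WBS-ordered risk roll-up. The exposure = probability x
# consequence convention and all data below are assumptions, not from the text.
wbs_risks = {
    "1.1 radar antenna":    {"probability": 0.7, "consequence_cost": 2.5e6},
    "1.2 data processor":   {"probability": 0.3, "consequence_cost": 1.0e6},
    "2.1 training system":  {"probability": 0.1, "consequence_cost": 0.4e6},
}

def expected_exposure(risks):
    """Roll expected cost exposure up from the WBS elements."""
    return sum(r["probability"] * r["consequence_cost"] for r in risks.values())

print(f"Expected cost exposure: ${expected_exposure(wbs_risks):,.0f}")
```

Per the cautions that follow, a number like this is information, not truth; it should always be accompanied by a sensitivity analysis of the underlying probabilities.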
Cautions in Risk Assessments
Reliance solely on numerical values from simulations and analysis should be avoided. Do not lose sight of the actual sources and consequences of the risks. Testing does not eliminate risk; it only provides data to assess and analyze risk. Most of all, beware of manipulating relative numbers, such as "risk indices" or "risk scales," as if they were quantified data, even when they are based on expert opinion. They are important information, but they are largely subjective and relative; they do not necessarily define risk accurately. Numbers such as these should always be the subject of a sensitivity analysis.
Risk Handling
Once the risks have been categorized and analyzed, the process of handling those risks is initiated. The prime purpose of risk handling activities is to mitigate risk. Methods for doing this are numerous, but all fall into four basic categories:
• Risk Avoidance,
• Risk Control,
• Risk Assumption (Acceptance), and
• Risk Transfer.
Avoidance

To avoid risk, remove requirements that represent uncertainty and high risk (probability or consequence). Avoidance includes trading off risk for performance or other capability, and it is a key activity during requirements analysis. Avoidance requires understanding of priorities in requirements and constraints. Are they mission critical, mission enhancing, nice to have, or "bells and whistles"?
Control

Control is the deliberate use of the design process to lower the risk to acceptable levels. It requires the disciplined application of the systems engineering process and detailed knowledge of the technical area associated with the design. Control techniques are plentiful and include:

• Multiple concurrent design to provide more than one design path to a solution,

• Alternative low-risk design to minimize the risk of a design solution by using the lowest-risk design option,

• Incremental development, such as preplanned product improvement, to dissociate the design from high-risk components that can be developed separately,

• Technology maturation that allows high-risk components to be developed separately while the basic development uses a less risky and lower-performance temporary substitute,

• Test, analyze, and fix that allows understanding to lead to lower-risk design changes. (Test can be replaced by demonstration, inspection, early prototyping, reviews, metric tracking, experimentation, models and mock-ups, simulation, or any other input or set of inputs that gives a better understanding of the risk),

• Robust design that produces a design with substantial margin such that risk is reduced, and

• The open system approach that emphasizes use of generally accepted interface standards that provide proven solutions to component design problems.
Acceptance

Acceptance is the deliberate acceptance of the risk because it is low enough in probability and/or consequence to be reasonably assumed without impacting the development effort. Key techniques for handling accepted risk are budget and schedule reserves for unplanned activities and continuous assessment (to assure accepted risks are maintained at the acceptance level). The basic objective of risk management in systems engineering is to reduce all risk to an acceptable level.

The strong budgetary strain and tight schedules on DoD programs tend to reduce the program manager's and systems engineer's capability to provide reserve. By identifying a risk as acceptable, the worst-case outcome is being declared acceptable. Accordingly, the level of risk considered acceptable should be chosen very carefully in a DoD acquisition program.
Transfer

Transfer can be used to reduce risk by moving the risk from one area of design to another where a design solution is less risky. Examples of this include:

• Assignment to hardware (versus software) or vice versa; and

• Use of functional partitioning to allocate performance based on risk factors.
Transfer is most associated with the act of assigning, delegating, or paying someone to assume the risk. To some extent, transfer always occurs when contracting or tasking another activity. The contract or tasking document sets up agreements that can transfer risk from the government to the contractor, from the program office to an agency, and vice versa. Typical methods include insurance, warranties, and incentive clauses. Risk is never truly transferred; if the risk isn't mitigated by the delegated activity, it still affects your project or program.
Key areas to review before using transfer are:

• How well can the delegated activity handle the risk? Transfer is effective only to the level the risk taker can handle it.

• How well will the delegated activity's solution integrate into your project or program? Transfer is effective only if the method is integrated with the overall effort. For example, is the warranty action coordinated with operators and maintainers?

• Was the method of tasking the delegated activity proper? Transfer is effective only if the transfer mechanism is valid. For example, can incentives be "gamed"?

• Who has the most control over the risk? If the project or program has little or no control over the risk item, then transfer should be considered to delegate the risk to those most likely to be able to control it.
Monitoring and Reporting
Risk monitoring is the continuous process of tracking and evaluating the risk management process by metric reporting, enterprise feedback on watch list items, and regular enterprise input on potential developing risks. (The metrics, watch lists, and feedback system are developed and maintained as an assessment activity.) The output of this process is then distributed throughout the enterprise, so that all those involved with the program are aware of the risks that affect their efforts and the system development as a whole.
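The steady/increasing/decreasing determination that watch-list metrics support might be sketched as below; the risk names, scores, and threshold are hypothetical assumptions, not from the text:

```python
# Hypothetical watch-list trend check: given periodic assessed risk scores
# for each watched item, report whether the risk appears steady, increasing,
# or decreasing. The movement threshold is illustrative.
def trend(history, tolerance=0.05):
    """Classify the latest movement of a risk metric time series."""
    if len(history) < 2:
        return "steady"  # not enough data to detect movement
    delta = history[-1] - history[-2]
    if delta > tolerance:
        return "increasing"
    if delta < -tolerance:
        return "decreasing"
    return "steady"

watch_list = {
    "radar detection range": [0.6, 0.7, 0.8],  # assessed score per period
    "data processor MTBF":   [0.5, 0.5, 0.5],
    "antenna slew time":     [0.4, 0.3, 0.2],
}
for item, history in watch_list.items():
    print(item, trend(history))
```

An "increasing" result on a watch-list item is exactly the kind of output that should be distributed throughout the enterprise for management attention.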
Special Case – Integration as Risk
Integration of technologies in a complex system is a technology in itself! Technology integration during design may be a high-risk item, yet it is not normally assessed or analyzed as a separately identified risk item. If integration risks are not properly identified during development of the functional baseline, they will demonstrate themselves as serious problems in the development of the product baseline.
Special Case – Software Risk
Based on past history, software development is often a high-risk area. Among the causes of performance, schedule, and cost deficiencies have been:

• Imperfect understanding of operational requirements and their translation into source instructions,

• Inadequate risk tracking and handling,

• Insufficient comprehension of interface constraints, and

• Lack of sufficient qualified personnel.
Risk Awareness
All members of the enterprise developing the system must understand the need to pay attention to the existence and changing nature of risk.
Consequences that are unanticipated can seriously disrupt a development effort. The uneasy feeling that something is wrong, despite assurances that all is fine, may be valid. These kinds of intuitions have allowed humanity to survive the slings and arrows of outrageous fortune throughout history. Though generally viewed as non-analytical, these apprehensions should not be ignored. Experience indicates those non-specific warnings have validity and should be quantified as soon as possible.
15.3 SUMMARY POINTS
• Risk is inherent in all activities.
• Risk is composed of knowledge of two characteristics of a possible negative future event: probability of occurrence and consequences of occurrence.

• Risk management is associated with a clear understanding of probability.

• Risk management is an essential and integral part of technical program management (systems engineering).

• Risks and uncertainties must be identified, analyzed, handled, and tracked.

• There are four basic ways of handling risk: avoidance, transfer, acceptance, and control.

• Program risks are classified as low, moderate, or high depending on consequences and probability of occurrence. Risk classification should be based on quantified data to the extent possible.
SUPPLEMENT 15-A
RISK MANAGEMENT IN DOD ACQUISITION
Policy

DoD policy is quite clear in regard to risk management: it must be done.

"The PM shall identify the risk areas in the program and integrate risk management within overall program management." (DoD 5000.2-R.)

In addition, DoDD 5000.4 identifies risk and cost analysis as a responsibility of the program manager.

Risk Management View

A DSMC study indicates that major programs which declared moderate risk at Milestone B have been more successful in terms of meeting cost and schedule goals than those which declared low risk (DSMC TR 2-95). This strongly implies that program offices that understand and respect risk management will be more successful. For this reason, the program office needs to adopt a systems-level view of risk. The systems engineer provides this view. Systems engineering is the cornerstone of the program office risk management program because it is the connection to realistic assessment of product maturity and development, and the product is, in the final analysis, what system acquisition is really about.

However, the program office has external risks to deal with as well as the internal risks prevalent in the development process. The systems engineer has to provide the program manager internal risk data in a manner that aids the handling of the external risks. In short, the systems engineer must present bad news such that it is reasonable and compelling to higher levels of authority. See Chapter 20 for further discussion of this topic.

Factoring Risk Management into the Process

Risk management, as an integral part of the overall program planning and management process, is enhanced by applying a controlled, consistent approach to systems engineering and using integrated teams for both product development and management control. Programs should be transitioned to the next phase only if risk is at the appropriate level. Know the risk drivers behind the estimates. By its nature, there are always subjective aspects to assessing and analyzing risk at the system level, even though they tend to be represented as quantitative and/or analytically objective.

Risk and Phases

Risk management begins in the Concept and Technology Development phase. During Concept Exploration, initial system-level risk assessments are made. Unknown-unknowns, uncertainty, and some high-risk elements are normal and expected. When substantial technical risk exists, the Component Advanced Development stage is appropriate, and is included in the life-cycle process specifically as an opportunity to address and reduce risks to a level consistent with movement into systems acquisition.

The S&T community has a number of vehicles available that are appropriate for examining technology in application and for undertaking risk reduction activities. These include Advanced Technology Demonstrations, Advanced Concept Technology Demonstrations, and Joint Warfighting Experiments. The focus of the activities undertaken during these risk reduction stages includes:
• Testing, analyzing, or mitigating system and subsystem uncertainty and high risk out of the program.

• Demonstrating technology sufficient to uncover system and subsystem unknown-unknowns (especially for integration).

• Planning for risk management during the transition to and continuation of systems acquisition during the System Development and Demonstration phase, especially handling and tracking of moderate risk.
System Development and Demonstration requires the application of product and manufacturing engineering, which can be disrupted if technology development is not sufficient to support engineering development. Risk management during this phase emphasizes:

• Reduction and control of moderate risks,

• Keeping all risks under management, including emerging ones, and

• Maintenance of risk levels and reaction to problems.
Objective Assessment of Technology
The revised acquisition process has been deliberately structured to encourage and allow programs to progress through appropriate risk reduction stages and phases, based on an objective assessment of the maturity levels associated with the products and systems under development. It is, therefore, particularly important that program managers and their staffs ensure that decisions regarding recommendations to proceed, and the paths to be taken, are based on opinions as impartial and objective as possible. The temptation is always to move ahead and not to delay to improve the robustness of a given product or system. When systems are hurried into engineering development and production, in spite of the fact that the underlying technologies require further development, history indicates that the results will eventually show the fallacy of speed over common sense. And to fix the problem in later stages of development, or even after deployment, can be hugely expensive in terms of both monetary cost and human lives.

The prevailing presumption at Milestone B is that the system is ready for engineering development. After this, the acquisition community generally assumes that risk is moderate to low, and that the technology is "available." There is evidence to support the assertion that programs often progress into engineering development with risks that actually require substantial exploratory and applied research and development to bring them to moderate levels of risk or lower. One approach that has proven successful in making objective risk assessments is the use of independent evaluation teams. Groups that have no predetermined interest to protect or axe to grind are often capable of providing excellent advice regarding the extent to which a system is ready to proceed to the next level of development and subsequent phases.
Risk Classification on the System (Program) Level
Classification definitions should be established early and remain consistent throughout the program. The program office should assess the risks of achieving performance, schedule, and cost in clear and accurate terms of both probability and consequence. Where there is disagreement about the risk, assessment efforts should be immediately increased. Confusion over risk is the worst program risk, because it puts in doubt the validity of the risk management process, and therefore whether program reality is truly understood.
The system-level risk assessment requires integration and interpretation of the quantified risk assessments of the parts. This requires reasonable judgement. Because integration increases the potential for risk, it is reasonable to assume the overall risk is no better than the sum of the objective data for the parts.
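The probability-and-consequence framing above lends itself to a simple risk matrix. The following sketch is illustrative only; the 1-to-5 scales, the score thresholds, and the "worst of the parts" integration rule are assumptions, not values drawn from this text:

```python
# Illustrative risk matrix: rate each risk by probability and consequence
# on assumed 1-5 scales, then classify the product of the two.

def classify_risk(probability: int, consequence: int) -> str:
    """Map a probability/consequence pair to a qualitative risk level."""
    if not (1 <= probability <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be on the 1-5 scale")
    score = probability * consequence
    if score >= 15:        # upper corner of the matrix (assumed threshold)
        return "high"
    if score >= 6:         # middle band (assumed threshold)
        return "moderate"
    return "low"

def system_level_risk(part_risks: list[tuple[int, int]]) -> str:
    """Overall risk is taken as no better than the worst of the parts,
    reflecting the integration caution noted in the text."""
    order = {"low": 0, "moderate": 1, "high": 2}
    return max((classify_risk(p, c) for p, c in part_risks), key=order.get)
```

A part rated (5, 4), for instance, drives the system-level judgment to "high" regardless of how benign the other parts are.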
Chapter 15 Risk Management
Reality Versus Expectations
Program managers are burdened with the expectations of superiors and others that have control over the program office’s environment. Pressure to accommodate these expectations is high. If the systems engineer cannot communicate the reality of risk in terms that are understandable, acceptable, or sufficiently verifiable to management, then these pressures may override vertical communication of actual risk.
Formal systems engineering with risk management incorporated can provide the verifiable information. However, the systems engineer also has the responsibility to adequately explain probability and consequences such that the program manager can accept the reality of the risk and override higher-level expectations.
Uncertainty is a special case, and very dangerous in an atmosphere of high-level expectations. Presentation of uncertainty issues should strongly emphasize consequences, show probability trends, and develop “most likely” alternatives for probability.
SUPPLEMENT 15-B
MODEL FOR SYSTEM LEVEL
RISK ASSESSMENT
The following may be used to assist in making preliminary judgments regarding risk classifications:
Consequences
  Low Risk:      Insignificant cost, schedule, or technical impact.
  Moderate Risk: Affects program objectives, cost, or schedule; however, cost, schedule, and performance are achievable.
  High Risk:     Significant impact, requiring reserve or alternate courses of action to recover.

Probability of Occurrence
  Low Risk:      Little or no estimated likelihood.
  Moderate Risk: Probability sufficiently high to be of concern to management.
  High Risk:     High likelihood of occurrence.

Extent of Demonstration
  Low Risk:      Full-scale, integrated technology has been demonstrated previously.
  Moderate Risk: Has been demonstrated, but design changes and tests in relevant environments are required.
  High Risk:     Significant design changes required in order to achieve required/desired results.

Existence of Capability
  Low Risk:      Capability exists in known products; requires integration into new system.
  Moderate Risk: Capability exists, but not at performance levels required for new system.
  High Risk:     Capability does not currently exist.
Also see Technology Readiness Levels matrix in Chapter 2
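As a rough illustration of how the model above might be applied in preliminary judgments, the sketch below rates a risk item against the four criteria and conservatively reports the worst rating. The "worst rating wins" rule is an assumption for illustration, not a rule stated in this text:

```python
# Sketch: classify a risk item against the four criteria in the model above.
# An assessor rates each criterion "low", "moderate", or "high"; the overall
# preliminary judgment here conservatively takes the worst of the four.

LEVELS = ("low", "moderate", "high")

def preliminary_classification(consequences: str,
                               probability: str,
                               demonstration: str,
                               capability: str) -> str:
    ratings = (consequences, probability, demonstration, capability)
    if any(r not in LEVELS for r in ratings):
        raise ValueError(f"ratings must be one of {LEVELS}")
    # Tuple index doubles as severity order: low < moderate < high.
    return max(ratings, key=LEVELS.index)
```

For example, a technology already demonstrated but needing design changes (demonstration "moderate") rates the item moderate overall even when every other criterion is low.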
PART 4
PLANNING, ORGANIZING,
AND MANAGING
CHAPTER 16
SYSTEMS ENGINEERING PLANNING
16.1 WHY ENGINEERING PLANS?

Systems engineering planning is an activity that has direct impact on acquisition planning decisions and establishes the feasible methods to achieve the acquisition objectives. Management uses it to:

• Assure that all technical activities are identified and managed,

• Communicate the technical approach to the broad development team,

• Document decisions and technical implementation, and

• Establish the criteria to judge how well the system development effort is meeting customer and management needs.

Systems engineering planning addresses the scope of the technical effort required to develop the system. The basic questions of “who will do what” and “when” are addressed. As a minimum, a technical plan describes what must be accomplished, how systems engineering will be done, how the effort will be scheduled, what resources are needed, and how the systems engineering effort will be monitored and controlled. The planning effort results in a management-oriented document covering the implementation of program requirements for systems engineering, including technical management approaches for subsequent phases of the life cycle. In DoD it is an exercise done on a systems level by the government, and on a more detailed level by contractors.

Technical/Systems Engineering Planning

Technical planning may be documented in a separate engineering management plan or incorporated into a broad, integrated program management plan. This plan is first drafted at project or program inception during the early requirements analysis effort. Requirements analysis and technical planning are inherently linked, because requirements analysis establishes an understanding of what must be provided. This understanding is fundamental to the development of detailed plans.

To be of utility, systems engineering plans must be regularly updated. To support management decision making, major updates will usually occur at least just before major management milestone decisions. However, updates must be performed as necessary between management milestones to keep the plan sufficiently current to achieve its purpose of information, communication, and documentation.

16.2 ELEMENTS OF TECHNICAL PLANS

Technical plans should include sufficient information to document the purpose and method of the systems engineering effort. Plans should include the following:

• An introduction that states the purpose of the engineering effort and a description of the system being developed,

• A technical strategy description that ties the engineering effort to the higher-level management planning,
• A description of how the systems engineering process will be tailored and structured to complete the objectives stated in the strategy,

• An organization plan that describes the organizational structure that will achieve the engineering objectives, and

• A resource plan that identifies the estimated funding and schedule necessary to achieve the strategy.
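The five elements above can be thought of as the skeleton of the plan document. The following sketch captures that skeleton as a data structure; the field names and the completeness check are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalPlan:
    """Skeleton of a systems engineering plan per the elements listed above."""
    introduction: str        # purpose of the effort and system description
    technical_strategy: str  # ties engineering to higher-level management planning
    process_tailoring: str   # how the SE process is tailored to the strategy
    organization: str        # structure that will achieve the objectives
    resources: dict = field(default_factory=dict)  # estimated funding/schedule

    def is_complete(self) -> bool:
        # A plan should document every element, even if briefly.
        text_fields = (self.introduction, self.technical_strategy,
                       self.process_tailoring, self.organization)
        return all(text_fields) and bool(self.resources)
```

A plan instance missing any element (for example, with no resource estimate) would fail the completeness check, mirroring the guidance that each element be documented.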
Introduction
The introduction should include:
Scope: The scope of the plan should provide information concerning what part of the big picture the plan covers. For example, if the plan were a DoD program office plan, it would emphasize control of the higher-level requirements, the system definition (functional baseline), and all activities necessary for system development. On the other hand, a contractor’s plan would emphasize control of lower-level requirements, preliminary and detail designs (allocated and product baselines), and activities required and limited by the contractual agreement.
Description: The description of the system should:
• Be limited to an executive summary describing those features that make the system unique,

• Include a general discussion of the system’s operational functions, and

• Answer the question “What is it and what will it do?”
Focus: A guiding focus for the effort should be provided to clarify the management vision for the development approach. For example, the focus may be lowest cost to obtain threshold requirements, superior performance within budget, superior standardization for reduced logistics, maximum use of the open systems approach to reduce cost, or the like. A focus statement should:
• Be a single objective to avoid confusion,
• Be stated simply to avoid misinterpretation, and
• Have high-level support.
Purpose: The purpose of the engineering effort should be described in general terms of the outputs, both end products and life-cycle enabling products, that are required. The stated purpose should answer the question, “What does the engineering effort have to produce?”
Technical Strategy
The basic purpose of a technical strategy is to link the development process with the acquisition or contract management process. It should include:
• Development phasing and associated baselining,
• Key engineering milestones to support risk management and business management milestones,

• Associated parallel developments or product improvement considerations, and

• Other management-generated constraints or high-visibility activities that could affect the engineering development.
Phasing and Milestones: The development phasing and baseline section should describe the approach to phasing the engineering effort, including tailoring of the basic process described in this book and a rationale for the tailoring. The key milestones should be in general keeping with the technical review process, but tailored as appropriate to support business management milestones and the project/program’s development phasing. Strategy considerations should also include discussion of how design and verification will phase into production and fielding. This area should identify how production will be phased in (including use of limited-rate initial production and long lead-time purchases), and recognize that initial support considerations require significant coordination between the user and acquisition community.
Parallel Developments and Product Improvement: Parallel development programs necessary for the system to achieve its objectives should be identified and the relationship between the efforts explained. Any product improvement strategies should also be identified. Considerations such as evolutionary development and preplanned product improvement should be described in sufficient detail to show how they would phase into the overall effort.
Impacts on Strategy
All conditions or constraints that impact the strategy should be identified and the impact assessed. Key points to consider are:
• Critical technologies development,
• Cost As an Independent Variable (CAIV), and
• Any business management-directed constraint or activity that will have a significant influence on the strategy.
Critical Technologies: Discussion of critical technology should include:
• Risk associated with critical technology development and its impact on the strategy,

• Relationship to baseline development, and

• Potential impact on the overall development effort.
Cost As an Independent Variable: Strategy considerations should include discussion of how CAIV will be implemented and how it will impact the strategy. It should discuss how unit cost, development cost, life cycle cost, total ownership cost, and their interrelationships apply to the system development. This area should focus on how these costs will be balanced, how they will be controlled, and what impact they have on the strategy and design approach.
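The cost categories named above are interrelated, and a simplified rollup can make the trade space concrete. The breakdown and figures below are simplifying assumptions for illustration, not a prescribed CAIV model:

```python
# Sketch: a simplified cost rollup for CAIV trade discussions.
# Real CAIV analyses are far richer; this only shows how unit cost,
# development cost, and ownership costs interrelate.

def life_cycle_cost(development: float, unit_cost: float, quantity: int,
                    operating_support: float, disposal: float) -> float:
    procurement = unit_cost * quantity
    return development + procurement + operating_support + disposal

# Illustrative trade at fixed capability (all figures invented):
baseline = life_cycle_cost(development=50.0, unit_cost=2.0, quantity=100,
                           operating_support=300.0, disposal=10.0)
# A costlier development effort that lowers unit cost can still reduce
# the life cycle total across a production run:
alternative = life_cycle_cost(development=70.0, unit_cost=1.5, quantity=100,
                              operating_support=300.0, disposal=10.0)
```

With these invented numbers the alternative's higher development cost is more than recovered in procurement, which is exactly the kind of balance the CAIV discussion in the plan should expose.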
Management Issues: Management issues that pose special concerns for the development strategy could cover a wide range of possible issues. In general, management issues identified as engineering strategy issues are those that impact the ability to support the management strategy. Examples would include:
• Need to combine developmental phases to accommodate management-driven schedule or resource limitations,

• Risk associated with a tight schedule or limited budget,

• Contractual approach that increases technical risk, and
• Others of a similar nature.
Management-dictated technical activities, such as use of M&S, open systems, IPPD, and others, should not be included as strategy issues unless they impact the overall systems engineering strategy to meet management expectations. The strategy discussion should lay out the plan, how it dovetails with the management strategy, and how management directives impact it.
Systems Engineering Processes
This area of the planning should focus on how the systems engineering processes will be designed to support the strategy. It should include:
• Specific methods and techniques used to perform the steps and loops of the systems engineering process,

• Specific system analysis and control tools and how they will be used to support step and loop activities, and

• Special design considerations that must be integrated into the engineering effort.
Steps and Loops: The discussion of how the systems engineering process will be done should show the specific procedures and products that will ensure:
• Requirements are understood prior to the flow-down and allocation of requirements,

• Functional descriptions are established before designs are formulated,

• Designs are formulated that are traceable to requirements,

• Methods exist to reconsider previous steps, and

• Verification processes are in place to ensure that design solutions meet needs and requirements.
This planning area should address each step and loop for each development phase, include identification of the step-specific tools (Functional Flow Block Diagrams, Timeline Analysis, etc.) that will be used, and establish the verification approach. The verification discussion should identify all verification activities, the relationship to formal developmental T&E activities, and independent testing activities (such as operational testing).
Norms of the particular technical area and the engineering processes of the command, agency, or company doing the tasks will greatly influence this area of planning. However, whatever procedures, techniques, and analysis products or models are used, they should be compatible with the basic principles of systems engineering management as described earlier in this book.
An example of the type of issue this area would address is the requirements analysis during the system definition phase. Requirements analysis is more critical and a more central focus during system definition than in later phases. The establishment of the correct set of customer requirements at the beginning of the development effort is essential to proper development. Accordingly, the system definition phase requirements analysis demands tight control and an early review to verify the requirements are established well enough to begin the design effort. This process of control and verification necessary for the system definition phase should be specifically described as part of the overall requirements analysis process and procedures.
Analysis and Control: Planning should identify those analysis tools that will be used to evaluate alternative approaches, analyze or assess effectiveness, and provide a rigorous quantitative basis for selecting performance, functional, and design requirements. These processes can include trade studies, market surveys, M&S, effectiveness analyses, design analyses, QFD, design of experiments, and others.
Planning must identify the method by which control and feedback will be established and maintained. The key to control is performance-based measurement guided by an event-based schedule. Entrance and exit criteria for the event-driven milestones should be established sufficient to demonstrate that proper development progress has been achieved. Event-based schedules and exit criteria are further discussed later in this chapter. Methods to maintain feedback and control are developed to monitor progress toward meeting the exit criteria. Common methods were discussed earlier in this book in the chapters on metrics, risk management, configuration management, and technical reviews.
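The entrance/exit-criteria control described above amounts to gating each event on its criteria rather than on calendar dates. A minimal sketch (the function and the criterion names are illustrative assumptions):

```python
# Sketch: gate an event-driven milestone on its exit criteria.
# Progress is measured against the criteria, not against the calendar.

def ready_to_exit(event: str, criteria: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return whether the event may close and which criteria remain open."""
    open_items = [name for name, done in criteria.items() if not done]
    return (not open_items, open_items)

# Hypothetical PDR gate, loosely echoing the sample criteria in this chapter:
pdr_criteria = {
    "preliminary design analyses completed": True,
    "make/buy decisions finalized": True,
    "preliminary FMECA completed": False,   # still open
}
ok, remaining = ready_to_exit("PDR", pdr_criteria)
# ok stays False until every criterion is satisfied
```

The open-items list is the feedback channel: it tells management exactly which work must finish before the milestone can be declared complete.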
Design Considerations: In every system develop-ment there are usually technical activities thatrequire special attention. These may come frommanagement concerns, legal or regulatory direc-tives, social issues, or organizational initiatives. Forexample, a DoD program office will have to con-form to DoDD 5000.2-R, which lists several tech-nical activities that must be incorporated into thedevelopment effort. DoD plans should specificallyaddress each issue presented in the Program Designsection of DoD 5000.2-R.
In the case of a contractor there may be issues delineated in the contract, promised in the proposal, or established by management that the technical effort must address. The systems engineering planning must describe how each of these issues will be integrated into the development effort.
Organization
Systems engineering management planning should identify the basic structure that will develop the system. Organizational planning should address how the integration of the different technical disciplines, primary function managers, and other stakeholders will be achieved to develop the system. This planning area should describe how multi-disciplinary teaming will be implemented, that is, how the teams will be organized, tasked, and trained. A systems-level team should be established early to support this effort. Roles, authority, and basic responsibilities of the system-level design team should be specifically described. Establishing the design organization should be one of the initial tasks of the system-level design team. The team’s basic approach to organizing the effort should be described in the plan. Further information on organizing is contained in a later chapter.
Resources
The plan should identify the budget for the technical development. The funds required should be matrixed against a calendar schedule based on the event-based schedule and the strategy. This should establish the basic development timeline with an associated high-level estimated spending profile. Shortfalls in funding or schedule should be addressed and resolved by increasing funds, extending schedule, or reducing requirements prior to plan preparation. Remember that future analysis of development progress by management will tend to be based on this budget “promised” at plan inception.
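The funds-versus-schedule matrix described above implies a time-phased spending profile that can be checked against the promised budget. A minimal sketch (the per-period amounts and the budget figure are invented for illustration):

```python
# Sketch: roll a per-period funding estimate into a cumulative spending
# profile and flag shortfalls against the approved budget.

def spending_profile(phased_funds: list[float], budget: float):
    """Return cumulative spend per period and whether it stays within budget."""
    cumulative, running = [], 0.0
    for amount in phased_funds:
        running += amount
        cumulative.append(running)
    return cumulative, running <= budget

# e.g., four periods of estimates against a 10.0 budget (illustrative units):
profile, within_budget = spending_profile([1.5, 2.5, 3.0, 2.0], budget=10.0)
# profile -> [1.5, 4.0, 7.0, 9.0]; within_budget -> True
```

A profile that breaches the budget before the final period signals exactly the kind of shortfall the text says should be resolved before the plan is finalized.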
16.3 INTEGRATION OF PLANS – PROGRAM PLAN INTERFACES
Systems engineering management planning must be coordinated with interfacing activities such as these:
• Acquisition Strategy: assures that technical plans take into account decisions reflected in the Acquisition Strategy. Conflicts must be identified early and resolved.

• Financial plan: assures resources match the needs in the technical plan. Conflicts should be identified early and resolved.

• Test and Evaluation Master Plan (TEMP): assures it complements the verification approach. It should provide an integrated approach to verify that the design configuration will meet customer requirements. This approach should be compatible with the verification approach delineated in the systems engineering plan.

• Configuration management plan: assures that the development process will maintain the system baselines and control changes to them.

• Design plans (e.g., electrical, mechanical, structural): coordinate identification of IPT team composition.

• Integrated logistics support planning and support analysis: coordinate total system support.

• Production/Manufacturing plan: coordinates activities concerning design producibility and follow-on production.

• Quality management planning: assures that quality engineering activities and quality management functions are included in systems engineering planning.

• Risk management planning: establishes and coordinates technical risk management to support total program risk management.

• Interoperability planning: assures interoperability and suitability issues are coordinated with systems engineering planning. (Where interoperability is an especially critical requirement, such as in communication or information systems, it should be addressed as a separate issue with separate integrated teams, monitoring, and controls.)

• Others, such as the M&S plan, software development plan, human integration plan, and environment, safety, and health planning, also interface.
Things to Watch
A well-developed technical management plan will include:
• The expected benefit to the user,
• How a total systems development will be achieved using a systems engineering approach,

• How the technical plan complements and supports the acquisition or management business plan,

• How incremental reviews will assure that the development stays on track,

• How costs will be reduced and controlled,

• What technical activities are required and who will perform them,

• How the technical activities relate to work accomplishment and calendar dates,

• How system configuration and risk will be controlled,

• How system integration will be achieved,

• How the concerns of the eight primary life cycle functions will be satisfied,

• How regulatory and contractual requirements will be achieved, and

• The feasibility of the plan, i.e., whether the plan is practical and executable from a technical, schedule, and cost perspective.
16.4 SUMMARY POINTS
• Systems engineering planning should establish the organizational structure that will achieve the engineering objectives.

• Planning must include event-based scheduling and establish feedback and control methods.

• It should result in important planning and control documents for carrying out the engineering effort.

• It should identify the estimated funding and detail schedule necessary to achieve the strategy.

• Systems engineering planning should establish the proper relationship between the acquisition and technical processes.
APPENDIX 16-A
SCHEDULES

Figure 16-1. Sample Event-Based Schedule Exit Criteria

System Requirements Review (SRR)
• Mission Analysis completed
• Support Strategy defined
• System options decisions completed
• Design usage defined
• Operational performance requirement defined
• Manpower sensitivities completed
• Operational architecture available and reviewed

System Functional Review/Software Specification Review (SFR/SSR)
• Installed environments defined
• Maintenance concept defined
• Preliminary design criteria established
• Preliminary design margins established
• Interfaces defined/preliminary interface specs completed
• Software and software support requirements completed
• Baseline support/resources requirements defined
• Support equipment capability defined
• Technical architecture prepared
• System defined and requirements shown to be achievable

Preliminary Design Review (PDR)
• Design analyses/definition completed
• Material/parts characterization completed
• Design maintainability analysis completed/support requirements defined
• Preliminary production plan completed
• Make/buy decisions finalized
• Breadboard investigations completed
• Coupon testing completed
• Design margins completed
• Preliminary FMECA completed
• Software functions, architecture, and support defined
• Maintenance tasks trade studies completed
• Support equipment development specs completed
The event-based schedule, sometimes referred to as the Systems Engineering Master Schedule (SEMS) or Integrated Master Schedule (IMS), is a technical event-driven (not time-driven) plan primarily concerned with product and process development. It forms the basis for schedule control and progress measurement, and relates engineering management events and accomplishments to the WBS. These events are identified either in the format of entry and exit events (e.g., initiate PDR, complete PDR) or by using entry and exit criteria for each event. Example exit criteria are shown in Figures 16-1 and 16-2.

The program office develops an event-based schedule that represents the overall development effort. This schedule is usually high-level and focused on the completion of events that support the acquisition milestone decision process. An event-based schedule is developed by the contractor to include significant accomplishments that must be completed in order to meet the progress required prior to contract-established events. The contractor also includes events, accomplishments, and associated success criteria specifically identified by the contract. DoD program offices can use the contractor’s event-based schedule and the
contractor’s conformance to it for several purposes: source selection, monitoring contractor progress, technical and other reviews, readiness for option award, incentives/awards determination, progress payments decisions, and similar activities.
The event-based schedule establishes the key parameters for determining the progress of a development program. To some extent it controls and interfaces with systems engineering management planning, integrated master schedules and integrated master plans, as well as risk management planning, system test planning, and other key plans which govern the details of program management.
The calendar or detail schedule is a time-based schedule that shows how work efforts will support tasks and events identified in the event-based schedule. It aligns the tasks and calendar dates to show when each significant accomplishment must be achieved. It is a key component for developing Earned Value metrics. The calendar schedule is commonly referred to as the detail schedule, systems engineering detail schedule, or SEDS. The contractor is usually required to maintain the relationship between the event and calendar schedules for contract-required activities. Figure 16-3 shows the relationship between the system requirements, the WBS, the contractual requirements, the event-based schedule, and the detail schedule.
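Because the detail schedule is the basis for Earned Value metrics, the standard indices can be sketched as follows. The task data and the simple percent-scheduled/percent-complete model are invented for illustration:

```python
# Sketch: compute the standard earned-value indices from tasks on the
# detail schedule. BCWS, BCWP, and ACWP are the classic planned value,
# earned value, and actual cost quantities.

def earned_value_indices(tasks):
    """tasks: iterable of (budgeted_cost, pct_scheduled, pct_complete, actual_cost)."""
    bcws = sum(b * sched for b, sched, _, _ in tasks)   # planned value to date
    bcwp = sum(b * comp for b, _, comp, _ in tasks)     # earned value to date
    acwp = sum(a for _, _, _, a in tasks)               # actual cost to date
    spi = bcwp / bcws if bcws else 0.0   # schedule performance index
    cpi = bcwp / acwp if acwp else 0.0   # cost performance index
    return spi, cpi

# Two illustrative tasks: one finished on plan, one lagging and overrunning.
spi, cpi = earned_value_indices([
    (100.0, 1.0, 1.0, 100.0),
    (200.0, 0.5, 0.25, 80.0),
])
# spi and cpi below 1.0 signal schedule slip and cost overrun respectively
```

Indices of 1.0 mean work is being earned exactly as planned and at planned cost; the relationship between the event-based and detail schedules is what makes these measurements traceable to the WBS.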
Figure 16-2. Sample Event-Driven Schedule Exit Criteria (continued)

Critical Design Review/Test Readiness Review (CDR/TRR)
• Parts, materials, processes selected
• Development tests completed
• Inspection points/criteria completed
• Component level FMECA completed
• Repair level analysis completed
• Facility requirements defined
• Software test descriptions completed
• Hardware and software hazard analysis completed
• Firmware support completed
• Software programmers manual completed
• Durability test completed
• Maintainability analyses completed
• Qualification test procedures approved
• Producibility analyses completed

System Verification Review/Functional Configuration Audit (SVR/FCA)
• All verification tasks completed
• Durability tests completed
• Long lead time items identified
• PME and operational training completed
• Tech manuals completed
• Flight test plan approved
• Support and training equipment developed
• Fielding analysis completed
• Provisioning data verified

Physical Configuration Audit (PCA)
• Qualification testing completed
• All QA provisions finalized
• All manufacturing process requirements and documentation finalized
• Product fabrication specifications finalized
• Support and training equipment qualification completed
• All acceptance test requirements completed
• Life management plan completed
• System support capability demonstrated
• Post production support analysis completed
• Final software description document and all user manuals complete
Figure 16-3. Event-Based—Detailed Schedule Interrelationships

(The figure traces a requirement from the system spec through WBS elements (e.g., 1600 Aircraft Subsystems, 1610 Landing Gear Systems) and the corresponding SOO/SOW task, to significant accomplishments and events (e.g., PDR, CDR) on the event-based schedule, with accomplishment criteria such as “duty cycle defined” and “preliminary drawings released,” and then to detailed tasks on the calendar schedule that feed Earned Value reports.)
Schedule Summary
The event-based schedule establishes the key tasks and results expected, and forms the basis for a valid calendar-based (detail) schedule.
CHAPTER 17
PRODUCT IMPROVEMENT STRATEGIES
17.1 INTRODUCTION

Complex systems do not usually have stagnant configurations. A need for a change during a system’s life cycle can come from many sources and affect the configuration in infinite ways. The problem with these changes is that, in most cases, it is difficult, if not impossible, to predict the nature and timing of these changes at the beginning of system development. Accordingly, strategies or design approaches have been developed to reduce the risk associated with predicted and unknown changes.

Well thought-out improvement strategies can help control difficult engineering problems related to:

• Requirements that are not completely understood at program start,

• Technology development that will take longer than the majority of the system development,

• Customer needs (such as the need to combat a new military threat) that have increased, been upgraded, are different, or are in flux,

• Requirements change due to modified policy, operational philosophy, logistics support philosophy, or other planning or practices from the eight primary life cycle function groups,

• Technology availability that allows the system to perform better and/or less expensively,

• Potential reliability and maintainability upgrades that make the system less expensive to use, maintain, or support, including development of new supply support sources,

• Safety issues requiring replacement of unsafe components, and

• Service life extension programs that refurbish and upgrade systems to increase their service life.

In DoD, the 21st-century challenge will be improving existing products and designing new ones that can be easily improved. With the average service life of a weapons system in the area of 40 or more years, it is necessary that systems be developed with an appreciation for future requirements, foreseen and unforeseen. These future requirements will present themselves as needed upgrades to safety, performance, supportability, interface compatibility, or interoperability; changes to reduce cost of ownership; or major rebuild. Providing these needed improvements or corrections forms the majority of the systems engineer’s post-production activities.

17.2 PRODUCT IMPROVEMENT STRATEGIES

As shown by Figure 17-1, these strategies vary based on where in the life cycle they are applied. The strategies or design approaches that reflect these improvement needs can be categorized as planned improvements, changes in design or production, and deployed system upgrades.

Planned Improvements

Planned improvement strategies include evolutionary acquisition, preplanned product improvement, and open systems. These strategies are not exclusive and can be combined synergistically in a program development.
Figure 17-1. Types of Product Improvement Strategies

(The figure arrays the improvement strategies across the life cycle milestones (MS A, MS B, MS C) and deployment: planned improvements, design changes, production modifications, and upgrades after deployment, all drawing on the integrated inputs of all functional disciplines.)

Figure 17-2. Evolutionary Acquisition

(The figure depicts requirements analysis, general for the system and specific for the core, feeding a concept of operations and preliminary system architecture. The core is defined, funded, developed, and operationally tested, followed by Block A and further increments “as required,” with customer feedback and flexible, incremental requirements documents (ORD, TEMP, etc.) used to refine and update requirements, “managed” by requirements analysis.)

“The lack of specificity and detail in identifying the final system capability is what distinguishes Evolutionary Acquisition from an acquisition strategy based on P3I.” – JLC EA Guide
Figure 17-3. Pre-Planned Product Improvement
Evolutionary Acquisition: Evolutionary acquisi-tion is the preferred approach to systems acquisi-tion in DoD. In an environment where technologyis a fast moving target and the key to military su-periority is a technically superior force, the require-ment is to transition useful capability from devel-opment to the user as quickly as possible, whilelaying the foundation for further changes to occurat later dates. Evolutionary acquisition is an ap-proach that defines requirements for a core capa-bility, with the understanding that the core is to beaugmented and built upon (evolved) until the sys-tem meets the full spectrum of user requirements.The core capability is defined as a function of userneed, technology maturity, threat, and budget. Thecore is then expanded as need evolves and the otherfactors mentioned permit.
A key to achieving evolutionary acquisition is the use of time-phased requirements and continuous communication with the eventual user, so that requirements are staged to be satisfied incrementally, rather than in the traditional single grand design approach. Planning for evolutionary acquisition also demands that engineering designs be based on open system, modular design concepts that permit additional increments to be added over time without having to completely redesign and redevelop those portions of the system already fielded. Open designs will facilitate access to recent changes in technologies and will also assist in controlling costs by taking advantage of commercial competition in the marketplace. This concept is not new; it has been employed for years in the C4ISR community, where systems are often in evolution over the entire span of their life cycles.
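The staged, interface-stable growth described above can be pictured in code. The following sketch is illustrative only (Python, with invented names such as FieldedSystem and BlockA that do not come from the text): the core capability is fielded first, and later increments are accepted only if they honor the common interface, so already-fielded portions need no redesign.

```python
# Hypothetical sketch of evolutionary acquisition: field a core first, then
# add increments ("blocks") behind a stable common interface. All class and
# method names here are invented for illustration.

class Capability:
    """The common, open contract every increment must satisfy."""
    name = "unnamed"

    def perform(self, tasking):
        raise NotImplementedError


class CoreCapability(Capability):
    name = "core"

    def perform(self, tasking):
        return f"core handles {tasking}"


class BlockA(Capability):
    name = "block-a"

    def perform(self, tasking):
        return f"block-a extends {tasking}"


class FieldedSystem:
    """Grows by adding increments over time, never by redesigning the core."""

    def __init__(self, core):
        self.increments = [core]

    def add_increment(self, increment):
        # An increment is accepted only if it honors the common interface.
        assert isinstance(increment, Capability)
        self.increments.append(increment)

    def capabilities(self):
        return [i.name for i in self.increments]


system = FieldedSystem(CoreCapability())  # field the core first
system.add_increment(BlockA())            # later block, no redesign needed
print(system.capabilities())              # ['core', 'block-a']
```

The point of the sketch is the stable Capability contract: as long as each block conforms to it, increments can be defined, funded, developed, and tested in stages, echoing the time-phased requirements described above.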
Preplanned Product Improvement (P3I): Often referred to as P3I, preplanned product improvement is an appropriate strategy when requirements are known and firm, but where constraints (typically either technology or budget) make some portion of the system unachievable within the schedule required. If it is concluded that a militarily
"The P3I acquisition management challenge is to acquire systems with interfaces and accessibility as an integral part of the design so that the deferred element(s) can be incorporated in a cost-effective manner when they become available."

[Figure 17-3 summarizes P3I considerations.
Acquisition issues: longer range planning; parallel efforts; standards and interface capacity; modular equipment/open systems.
Pros: responsive to threat changes; accommodates future technology; IOC can be earlier; reduced development risk; possible subsystem competition; increased effective operational life.
Cons: increased initial development cost; increased technical requirements complexity; more complex CM; sensitive to funding streams; parallel development management.]
useful capability can be fielded as an interim solution while the deferred portion proceeds through development, then P3I is appropriate. The approach generally is to handle the improvement as a separate, parallel development; initially test and deliver the system without the improvement; and prove and provide the enhanced capability as it becomes available. The key to a successful P3I is the establishment of well-defined interface requirements for the system and the improvement. Use of a P3I will tend to increase initial cost, configuration management activity, and technical complexity. Figure 17-3 shows some of the considerations in deciding when it is appropriate.
Open Systems Approach: The open system design approach uses interface management to build flexible design interfaces that accommodate use of competitive commercial products and provide enhanced capacity for future change. It can be used to prepare for future needs when technology is not yet available, whether the operational need is known or unknown. The open systems focus is to design the system such that it is easy to modify using standard interfaces, modularity, recognized interface standards, standard components with recognized common interfaces, commercial and nondevelopmental items, and compartmentalized design. Open system approaches to design are further discussed at the end of this chapter.
Changes in Design or Production
Engineering Change Proposals (ECPs): Changes that are to be implemented during the development and production of a given system are typically initiated through the use of ECPs. If the proposed change is approved (usually by a configuration control board), the changes to the documentation that describes the system are handled by formal configuration management, since, by definition, approved ECPs change an approved baseline. ECPs govern the scope and details of these changes. ECPs may address a variety of needs, including correction of deficiencies, cost reduction, and safety. Furthermore, ECPs may be assigned differing levels of priority, from routine to emergency. MIL-HDBK-61, Configuration Management Guidance, is an excellent source of advice on issues related to configuration changes.
Block Change before Deployment: Block changes represent an attempt to improve configuration management by grouping a number of changes and applying them such that they apply consistently to groups (or blocks) of production items. This substantially improves the management and configuration control of similar items in comparison to change that is implemented item by item and change order by change order. When block changes occur, the life cycle impact should be carefully addressed. Significant differences in block configurations can lead to different manuals, supply documentation, training, and restrictions as to the locations or activities where the system can be assigned.
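One plausible way to picture block changes is sketched below. This is a hedged illustration only (the block_configuration helper, the cut-in serial numbers, and the component names are all invented): approved changes are grouped by a production cut-in point, so every item at or beyond a block's cut-in shares one consistent configuration.

```python
# Hypothetical sketch: instead of applying each approved change item by item,
# changes are grouped into blocks that "cut in" at a production serial
# number, giving every item in a block one consistent configuration.

def block_configuration(baseline, blocks, serial):
    """Return the configuration of one production item.

    blocks: list of (cut_in_serial, {component: new_version}) pairs,
    sorted by cut-in serial number.
    """
    config = dict(baseline)
    for cut_in, changes in blocks:
        if serial >= cut_in:
            config.update(changes)  # a block applies from its cut-in onward
    return config


baseline = {"radio": "v1", "engine": "v1"}
blocks = [(100, {"radio": "v2"}), (250, {"engine": "v2", "radio": "v3"})]

print(block_configuration(baseline, blocks, 50))   # baseline configuration
print(block_configuration(baseline, blocks, 300))  # both blocks applied
```

Because every item in a block shares one configuration, each block can be matched to one set of manuals, supply documentation, and training, which is the life cycle concern the paragraph raises.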
Deployed Systems Upgrades
Major Rebuild: A major rebuild results from the need for a system that satisfies requirements significantly different from or increased over those of the existing system, or a need to extend the life of a system that is reaching the end of its usable life. In both cases the system will have upgraded requirements and should be treated as essentially a new system development. A new development process should be started to establish and control configuration baselines for the rebuilt system based on the updated requirements.

Major rebuilds include remanufacturing, service-life extension programs, and system developments where significant parts of a previous system will be reused. Though rebuilding existing systems can dramatically reduce the cost of a new system in some cases, the economies of rebuild can be deceiving, and the choice of whether to pursue a rebuild should be made after careful use of trade studies. The key to engineering such systems is to remember that they are new systems and require the full developmental considerations of baselining, the systems engineering process, and life cycle integration.

Post-Production Improvement: In general, product improvements become necessary to improve the system or to maintain the system as its components
reach obsolescence. These projects generally result in a capability improvement, but for all practical purposes the system still serves the same basic need. These improvements are usually characterized by an upgrade to a component or subsystem as opposed to a total system upgrade.
Block Upgrades: Post-production block upgrades are improvements to a specific group of the system population that provide a consistent configuration within that group. Block upgrades in post-production serve the same general purpose of controlling individual system configurations as production block upgrades, and they require the same level of life cycle integration.
Modifying an Existing System
Upgrading an existing system is a matter of following the systems engineering process, with an emphasis on configuration and interface management. The following activities should be included when upgrading a system:
• Benchmark the modified requirements, both for the upgrade and for the system as a whole,

• Perform functional analysis and allocation on the modified requirements,

• Assess the actual capability of the pre-upgrade system,

• Identify cost and risk factors and monitor them,

• Develop and evaluate modified system alternatives,

• Prototype the chosen improvement alternative, and

• Verify the improvement.
Product improvement requires special attention to configuration and interface management. It is not uncommon that the existing system's configuration will not be consistent with the existing configuration data. Form, fit, and especially function interfaces often represent design constraints that are not readily apparent at the outset of a system upgrade. Upgrade planning should ensure that the revised components will be compatible at the interfaces. Where interfaces are impacted, broad coordination and agreement are normally required.
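The kind of pre-upgrade audit this paragraph implies can be sketched as follows. This is a hypothetical illustration (the configuration_audit function and all sample data are invented): compare the documented configuration data against the as-built system, then flag which differences touch an interface and so demand broader coordination.

```python
# Illustrative sketch: before an upgrade, compare documented configuration
# data against the as-built system, and single out differences that affect
# form, fit, or function interfaces. Names and data are invented.

def configuration_audit(documented, actual, interface_items):
    """Return (all mismatched items, the subset that touches an interface)."""
    mismatches = {k for k in documented if actual.get(k) != documented[k]}
    mismatches |= {k for k in actual if k not in documented}
    return mismatches, mismatches & set(interface_items)


documented = {"antenna": "v1", "mount": "v2", "cable": "v1"}
actual     = {"antenna": "v1", "mount": "v3", "cable": "v1", "bracket": "v1"}

all_diffs, interface_diffs = configuration_audit(
    documented, actual, interface_items={"mount", "cable"})

print(sorted(all_diffs))        # ['bracket', 'mount']
print(sorted(interface_diffs))  # ['mount'] -> needs interface coordination
```

Items in the second set are exactly the ones where, in the terms of the text, revised components must be verified compatible at the interfaces before the upgrade proceeds.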
Traps in Upgrading Deployed Systems
When upgrading a deployed system, pay attention to the following significant traps:

Scheduling to minimize operational impacts: The user's operational commitments will dictate the availability of the system for modification. If the schedule conflicts with an existing or emerging operational need, the system will probably not become available for modification at the time agreed to. Planning and contractual arrangements must be flexible enough to accept unforeseen schedule changes that accommodate the user's unanticipated needs.
Configuration and interface management: Configuration management must address three configurations: the actual existing configuration, the modification configuration, and the final system configuration. The key to successful modification is the level of understanding and control associated with the interfaces.
Logistics compatibility problems: Modification will change the configuration, which in most cases will change the supply support and maintenance considerations. Coordination with the logistics community is essential to the long-term operational success of the modification.
Minimal resources available: Modifications tend to be viewed as simple changes. As this chapter has pointed out, they are not, and they should be carefully planned. That planning should include an estimate of needed resources. If the resources are not available, either the project should be abandoned, or a plan should be formulated to mitigate and control the risk of an initial minimal budget while additional resources are obtained.
Figure 17-4. Funding Rule for DoD System Upgrades
[Figure 17-4 is a decision chart: whether the modification increases performance, and whether the system is in production, determine whether development and test, mod kit fabrication, and installation are funded with RDT&E, Procurement, or O&M dollars.]
Funding restrictions ($ color) drive the need to separate performance increase from supportability changes. Product improvement planning must be driven by risk management, not by $ color or calendar!
Limited competitors: Older systems may have only a few suppliers with corporate knowledge of the particular system functions and design. This is especially problematic if the original system components were commercial or NDIs for which the designer does not have product baseline data. In cases such as these, there is a learning process that must take place before the designer or vendor can adequately support the modification effort. Depending on the specific system, this could be a major effort. This issue should be considered very early in the modification process because it has serious cost implications.
Government funding rules: As Figure 17-4 shows, the use of government funding to perform system upgrades has restrictions. The purpose of the upgrade must be clear and justified in the planning efforts.
17.3 ROLES AND RESPONSIBILITIES
Modification management is normally a joint government and contractor responsibility. Though any specific system upgrade will have relationships established by the conditions surrounding the particular program, government responsibilities would usually include:

• Providing a clear statement of system requirements,

• Planning related to government functions,

• Managing external interfaces,

• Managing the functional baseline configuration, and

• Verifying that requirements are satisfied.
Contractor responsibilities are established by the contract, but would normally include:
• Technical planning related to execution,
• Defining the new performance envelope,
• Designing and developing modifications, and
• Providing evidence that changes made have modified the system as required.
Systems Engineering Role
The systems engineering role in product improvement includes:
• Planning for system change,
• Applying the systems engineering process,
• Managing interface changes,
• Identifying and using interface standards which facilitate continuing change,

• Ensuring life cycle management is implemented,

• Monitoring the need for system modifications, and

• Ensuring operations, support activities, and early field results are considered in planning.
17.4 SUMMARY POINTS
• Complex systems do not usually have stagnant configurations.

• Planned improvement strategies include evolutionary acquisition, preplanned product improvement, and open systems.

• A major rebuild should be treated as a new system development.

• Upgrading an existing system is a matter of following the systems engineering process, with an emphasis on configuration and interface management.

• Pay attention to the traps. Upgrade projects have many.
Figure 17-5. C4I and IT Development
[Figure 17-5 depicts the development flow: an operational architecture is developed, a high-level system architecture is developed, a technical architecture is developed, the complete system architecture is developed, and implementation follows.]
SUPPLEMENT 17-A
OPEN SYSTEM APPROACH
The open system approach is a business and technical approach to system development that results in systems that are easier to change or upgrade by component replacement. It is a system development logic that emphasizes flexible interfaces and maximum interoperability, optimum use of commercial competitive products, and enhanced system capacity for future upgrade. The value of this approach is that open systems have flexibility, and that flexibility translates into benefits that can be recognized from business, management, and technical perspectives.

From a management and business view, the open system approach directs resources to a more intensive design effort with the expectation of a life cycle cost reduction. As a business approach it supports the DoD policy initiatives of CAIV, increased competition, and use of commercial products. It is a technical approach that emphasizes systems engineering, interface control, modular design, and design for upgrade. As a technical approach it supports the engineering goals of design flexibility, risk reduction, configuration control, long-term supportability, and enhanced utility.

Open Systems Initiative

In DoD the open system initiative was begun as a result of dramatic changes in the computer industry that afforded significant advantages to design of C4ISR and IT systems. The standardization achieved by the computer industry allows C4ISR and IT systems to be designed using interface standards to select off-the-shelf components to form the system. This is achieved by using commercially-supported specifications and standards for specifying system interfaces (external and internal, functional and physical), products, practices, and tools. An open system is one
Figure 17-6. Simplified Computer Resource Reference Model
[Figure 17-6 depicts a layered reference model: application software, operating system, processor, backplane, and module hardware, connected through interfaces such as the API and compiler, drivers and compiler, and module I/O.]

1 Open standards are non-proprietary, consensus-based standards widely accepted by industry. Examples include SAE, IEEE, and ISO standards.

2 This system architecture typically describes the end product but not the enabling products. It relies heavily on interface definitions to describe system components.
in which interfaces are fully described by open standards.1 An open system approach extends this concept further by using modular design and interface design to enhance the availability of multiple design solutions, especially those reflecting use of open standards, competitive commercial components, NDIs, and future upgrade capability.
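The idea of interfaces "fully described by open standards" can be sketched in code. This is a hedged illustration only (Python; ModuleIO is an invented stand-in for a real interface standard, and the vendor names are fictitious): because the system depends only on the interface, any conforming component from any competing vendor plugs in.

```python
# Illustrative sketch: an openly specified interface makes competing,
# conforming components interchangeable. All names here are invented.
from typing import Protocol


class ModuleIO(Protocol):
    """Stand-in for an open interface standard: any conforming module fits."""

    def read(self) -> bytes: ...
    def write(self, data: bytes) -> int: ...


class VendorAModule:
    def read(self) -> bytes:
        return b"a"

    def write(self, data: bytes) -> int:
        return len(data)


class VendorBModule:
    def read(self) -> bytes:
        return b"b"

    def write(self, data: bytes) -> int:
        return len(data)


def integrate(module: ModuleIO) -> int:
    # The system depends only on the interface, never on a vendor's design.
    return module.write(module.read())


print(integrate(VendorAModule()), integrate(VendorBModule()))  # 1 1
```

The design choice mirrored here is the one the text describes: the interface, not any component's internal design, is the controlled element, so components can be competed and replaced over the life cycle.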
As developed in the C4ISR and IT communities, the open system approach requires the design of three architectures: operational, technical, and system.
As shown in Figure 17-5, the first architecture prepared is the operational architecture, which defines the tasks, operational elements, and information flows required to accomplish or support an operational function. The user community generates the operational concepts that form an operational architecture. The operational architecture is elusive: it is not a specific document required to be developed by the user, such as the ORD, but because of their operational nature, the user must provide the components of the operational architecture. It is usually left to the developer to assemble and structure the information as part of the system definition requirements analysis. Once the operational architecture has clearly defined the operational need, development of a system architecture2 is begun.
The (open) system architecture is a set of descriptions, including graphics, of systems and interconnections supporting the operational functions described in the operational architecture. Early in the (open) system architecture development a technical architecture is prepared to establish a set of rules, derived from open consensus-based industry standards, to govern the arrangement, interaction, and interdependence of the elements of a reference model. Reference models are a common conceptual framework for the type of system being designed. (A simple version for computer resources is shown in Figure 17-6.)
The technical architecture identifies the services, interfaces, standards, and their relationships, and provides the technical guidelines upon which engineering specifications are based, common building blocks are built, and product lines are developed. In short, the technical architecture becomes a design requirement for developing the system. (The purpose, form, and function of the technical architecture are similar to building codes.)

The system architecture is then further developed to eventually specify component performance and interface requirements. These are then used to select the specific commercial components that form the system under development. This process, called an implementation, envisions the production process as consisting primarily of selecting components, conformance (to the interface and performance requirements) management, and assembly, with little or no need for detailed design fabrication.

The process described above has allowed significant achievements in computer-related developments. Other technical fields have also used the open system design approach extensively. (Common examples are the electrical outlets in your home and the tire-to-wheel interface on your car.) In most cases the process is not as well defined as it is in the current digital electronics area. Consistent, successful use of the open design concept, in and outside the electronics field, requires an understanding of how this process relates to the activities associated with systems engineering management.
Systems Engineering Management
The open system approach impacts all three essential elements of systems engineering management: systems engineering phasing, the systems engineering process, and life cycle considerations. It requires enhanced interface management in the systems engineering process, and requires that specific design products be developed prior to engineering-event milestones. The open systems approach is inherently life-cycle friendly. It favorably impacts production and support functions, but it also requires additional effort to assure life-cycle conformance to interface requirements.
Open Systems Products and SE Development Phasing
A system is developed with stepped phases that allow an understanding of the operational need to eventually evolve into a design solution. Though some tailoring of this concept is appropriate, the basic phasing (the operational concept preceding the system description, which precedes the preliminary design, which precedes the detailed design) is necessary to coordinate the overall design process and control the requirements flowdown. As shown by Figure 17-7, the open system approach blends well with these development phases.
Concept Studies Phase
The initial detailed operational concept, including operational architectures, should be a user-community output (with some acquisition engineering assistance) produced during the concept exploration phase that emphasizes operational concepts associated with various material solutions. The operational concept is then updated as necessary for each following phase. Analysis of the initial operational concept should be a key element of the operational view output of the system definition phase requirements analysis. An operational architecture developed for supporting the system description should be complete, comprehensive, and clear, and verified to be so at the Alternative Systems Review. If the operational architecture cannot be completed, then a core operational capability must be developed to establish the basis for further development. Where a core capability is used, core requirements should be complete and firm, and the process for adding expanded requirements should be clear and controlled.
System Definition Phase
System interface definitions, such as the technical architecture, and the high-level (open) system architecture should be complete in initial form at the
Figure 17-7. Phasing of Open System Development
[Figure 17-7 maps open system products to the development phases (Concept Studies; System Definition, functional baseline; Preliminary Design, allocated baseline; Detail Design, product baseline; Implementation): a detailed operational concept (operational architecture) and a high-level WBS or reference model are available at the Alternative Systems Review; the detailed operational concept is reviewed and verified prior to the System Requirements Review; the system definition, WBS or reference model, technical architecture, and a high-level system architecture are available at the System Functional Review; and subsystems/CI definitions, system-wide interface requirements, and the complete system architecture are available at the Preliminary Design Review.]
end of the system definition phase (along with other functional baseline documentation). Successful completion of these items is required to perform the preliminary design, and they should be available for the System Functional Review, also referred to as the System Definition Review or System Design Review. The open system documentation can be separate or incorporated in other functional baseline documentation. The criteria for acceptance should be established in the systems engineering management plan as phase-exit criteria.
Preliminary Design Phase
Along with other allocated baseline documentation, the interface definitions should be updated and the open system architecture completed by the end of the preliminary design effort. This documentation should also identify the proper level of openness (that is, the level of system decomposition at which the open interfaces are established) to obtain the maximum cost and logistic advantage available from industry practice.

The preliminary design establishes performance-based descriptions of the system components, as well as the interface and structure designs that integrate those components. It is in this phase that the open system approach has the most impact. Interface control should be enhanced and focused on developing modular designs that allow for maximum interchange of competitive commercial products. Review of the technical architecture (or interface definitions) becomes a key element of requirements analysis, open system focused functional partitioning becomes a key element of functional analysis and allocation, iterative analysis of modular designs becomes a key element of design synthesis, and conformance management becomes a key element of verification. Open system related products, such as the technical architecture, interface management documentation, and conformance management documentation, should be key data reviewed at the Preliminary Design Review. Again, the criteria for acceptance should be established in the systems engineering management plan as phase-exit criteria.
Figure 17-8. Open System Approach to the Systems Engineering Process
[Figure 17-8 overlays the open system focus on the systems engineering process: requirements analysis emphasizes use of standards and interface management; functional analysis and allocation emphasizes functional partitioning; design synthesis emphasizes open, flexible designs; and verification emphasizes test of interfaces and interface standards, all within IPPD across the develop, produce, deploy, support, operate, test, train, and dispose activities.]
Detail Design Phase
The detail design phase becomes the implementation for those parts of the system that have achieved open system status. Conformance management becomes a significant activity as commercial components are chosen to meet performance and interface requirements. Conformance and interface design testing becomes a driving activity during verification to assure that an open system or subsystem has been achieved and that components selected meet interface requirements and/or standards.
Systems Engineering Process
The systems engineering problem-solving process consists of process steps and loops supported by system analysis and control tools. The focus of the open systems engineering process is compartmentalized design, flexible interfaces, recognized interface standards, standard components with recognized common interfaces, use of commercial items and NDIs, and an increased emphasis on interface control. As shown by Figure 17-8, the open system approach complements the systems engineering process to provide an upgradeable design.

Requirements analysis includes the review and update of interface standards and other interface definitions generated as output from previous systems engineering processes. Functional analysis and allocation focuses on functional partitioning to identify functions that can be performed independently of each other in order to minimize functional interfaces. Design synthesis focuses on modular design with open interfaces, use of open-standards-compliant commercial products, and the development of performance and interface specifications. The verification processes include conformance testing to validate that the interface requirements are appropriate and to verify that components chosen to implement the design meet the interface requirements. Engineering open designs, then, does not alter the fundamental practices within systems engineering but, rather, provides a specific focus to the activities within that process.
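The conformance-testing emphasis above can be sketched as a toy check. This is illustrative only (the conforms helper, the interface specification, and the radio classes are all invented, and real conformance testing exercises behavior against the standard, not just the presence of methods): a candidate component is verified against the services an interface specification names.

```python
# Illustrative sketch of a conformance check: verify that a candidate
# component exposes every service named in an interface specification,
# with the required number of parameters. Names here are invented.
import inspect


def conforms(component, interface_spec):
    """interface_spec: mapping of required method name -> required arity."""
    for name, arity in interface_spec.items():
        fn = getattr(component, name, None)
        if not callable(fn):
            return False
        # Bound methods already exclude 'self' from the visible signature.
        if len(inspect.signature(fn).parameters) != arity:
            return False
    return True


SPEC = {"transmit": 1, "receive": 0}  # hypothetical interface requirement


class GoodRadio:
    def transmit(self, msg):
        return True

    def receive(self):
        return "ack"


class BadRadio:
    def transmit(self):  # wrong arity: fails the conformance check
        return True


print(conforms(GoodRadio(), SPEC), conforms(BadRadio(), SPEC))  # True False
```

In the terms of the text, this is the verification step that confirms a component chosen to implement the design meets the interface requirements before it is accepted into the baseline.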
Systems Engineering Control: Interface Management
The key to the open systems engineering process is interface management. Interface management should be done in a more formal and comprehensive manner to rigorously identify all interfaces and
control the flowdown and integration of interface requirements. The interfaces become controlled elements of the baseline, equal to (or considered part of) the configuration. Open system interface management emphasizes the correlation of interface requirements between interfacing systems. (Do those designing the interfacing systems understand the interface requirements in the same way?) Computer-Aided System Engineering (CASE) generated schematic block diagrams can be used to track interface design activity.
An open system is also characterized by multiple design solutions within the interfaces, with emphasis on leveraging best commercial practice. The interface management effort must control interface design such that interfaces specifically chosen for an open system approach are designed based on the following priority:

• Open standards that allow competitive products,

• Open interface design that allows installation of competitive products with minimal change,

• Open interface design that allows minimal-change installation of commercial or NDI products currently or planned to be in DoD use, and, last,

• Unique design with interfaces designed with upgrade issues considered.
Note that these are clear priorities, not options.
Level of Openness
The level at which the interface design should focus on openness is also a consideration. Each system may have several levels of openness depending on the complexity of the system and the differences in the technology within the system. The level chosen to define the open interfaces should be supported by industry and be consistent with program objectives. For example, for most digital electronics that level is the line replaceable unit (LRU) and shop replaceable unit (SRU) level. On the other hand, the Joint Strike Fighter intends to establish openness at a very high subsystem level to achieve a major program objective: development of different planes using common building blocks (which, in essence, serve as the reference model for the family of aircraft). Segments of a larger system designed with the open system approach could have additional openness at a lower level. For example, the Advanced Amphibious Assault Vehicle (AAAV) engine compartment is an open-approach design allowing for different engine installations and future upgrade capability. At a lower level within the compartment, the fuel filters, lines, and connectors are defined by open-standard-based interfaces. Other systems will define openness at other levels. Program objectives (such as interoperability, upgrade capability, cost-effective support, affordability, and risk reduction) and industry practice (based on market research) drive the choice of the level of openness that will best assure optimum utility and availability of the open system approach.
Life Cycle Considerations
Life cycle integration is established primarily through the use of integrated teaming that combines design and life cycle planning. The major impacts on life-cycle activity include:

• Time and cost to upgrade a system are reduced. It is common for defense systems, which have average life spans in excess of 40 years, to require upgrade during their life due to obsolescence of original components, threat increase, and technology push that increases economy or performance. (Most commercial products are designed for a significantly shorter life than military systems, and designs that rely on these commercial products must expect that original commercial components will not necessarily be available throughout the system's life cycle.) By using an open system approach, the ability to upgrade a system by changing a single component or a set of components is greatly enhanced. In addition, the open system approach eases the design problem of replacing the component, thereby reducing the cost and schedule of upgrade, which in turn reduces the operational impact.
• An open system approach enhances the use of competitive products to support the system. This flexibility tends to reduce the cost associated with supply support, but more importantly improves component and parts availability.

• Conformance management becomes a part of the life cycle configuration process. Replacement of components in an open system must be more controlled because the government has to control the system configuration without controlling the detail component configuration (which will come from multiple sources, all with different detail configurations). The government must expect that commercial suppliers will control the design of their components without regard to the government's systems. The government therefore must use performance- and interface-based specifications to assure the component will provide service equivalent to that approved through the acquisition process. Conformance management is the process that tracks the interface requirements through the life cycle and assures that the new product meets those requirements.
Summary Comments
Open system design is not only compatible with systems engineering; it represents an approach that enhances the overall systems engineering effort. It controls interfaces comprehensively, provides interface visibility, reduces risk through multiple design solutions, and insists on life cycle interface control. This emphasis on interface identification and control improves systems engineers' capability to integrate the system, probably one of the hardest jobs they have. It also improves the tracking of interface requirements flowdown, another key job of the systems engineer. Perhaps most importantly, this rigorous interface management improves systems engineers' ability to correctly determine where commercial items can be properly used.
CHAPTER 18
ORGANIZING AND INTEGRATING SYSTEM DEVELOPMENT
18.1 INTEGRATED DEVELOPMENT

DoD has, for years, required that system designs be integrated to balance the conflicting pressures of competing requirements such as performance, cost, supportability, producibility, and testability. The use of multi-disciplinary teams is the approach that both DoD and industry have increasingly taken to achieve integrated designs. Teams have been found to facilitate meeting cost, performance, and other objectives from product concept through disposal.

The use of multi-disciplinary teams in design is known as Integrated Product and Process Development, simultaneous engineering, concurrent engineering, Integrated Product Development, Design-Build, and other proprietary and non-proprietary names expressing the same concept. (The DoD use of the term Integrated Product and Process Development (IPPD) is a wider concept that includes the systems engineering effort as an element. The DoD policy is explained later in this chapter.) Whatever name is used, the fundamental idea involves multi-functional, integrated teams (preferably co-located) that jointly derive requirements and schedules and place equal emphasis on product and process development. The integration requires:

• Inclusion of the eight primary functions in the team(s) involved in the design process,

• Technical process specialties such as quality, risk management, safety, etc., and

• Business processes (usually in an advisory capacity) such as finance, legal, contracts, and other non-technical support.

Benefits

The expected benefits from team-based integration include:

• Reduced rework in design, manufacturing, planning, tooling, etc.,

• Improved first-time quality and reduction of product variability,

• Reduced cost and cycle time,

• Reduced risk,

• Improved operation and support, and

• General improvement in customer satisfaction and product quality throughout its life cycle.

Characteristics

The key attributes that characterize a well-integrated effort include:

• Customer focus,

• Concurrent development of products and processes,

• Early and continuous life cycle planning,

• Maximum flexibility for optimization,

• Robust design and improved process capability,

• Event-driven scheduling,

• Multi-disciplinary teamwork,

• Empowerment,

• Seamless management tools, and

• Proactive identification and management of risk.

Organizing for System Development

Most DoD program offices are part of a Program Executive Office (PEO) organization that is usually supported by a functional organization, such as a systems command. Contractors and other government activities provide additional necessary support. Establishing a system development organization requires a network of teams that draws from all these organizations. This network, sometimes referred to as the enterprise, represents the interests of all the stakeholders and provides vertical and horizontal communication.

These integrated teams are structured using the WBS and designed to provide the maximum vertical and horizontal communication during the development process. Figure 18-1 shows how team structuring is usually done. At the system level there is usually a management team and a design team. The management team would normally consist of the government and contractor program managers, the deputy program manager(s), possibly the contractor Chief Executive Officer, the contracting officer, major advisors picked by the program manager, the system design team leader, and other key members of the system design team. The design team usually consists of the first-level subsystem and life-cycle integrated team leaders.

Figure 18-1. Integrated Team Structure (shows a System Level Management Team and System Level Design Team above product teams such as Product A Team (WBS 1.0) and Product B Team (WBS 2.0) and process teams such as Process 1 Team (WBS 3.0) and Process 2 Team (WBS 4.0), with sub-tier teams oriented to sub-products and sub-processes, e.g., Sub-Product 2.2.1 and Sub-Process 4.2.1.)
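The WBS-keyed team hierarchy described above can be sketched as a simple tree in code. This is an illustrative aid only, not from the source; the team names, WBS codes, and function names are hypothetical, assuming the convention that a team's parent is found by dropping the last segment of its WBS code.

```python
# Illustrative sketch (assumption, not from the source): a team network keyed
# by WBS numbers, where each team's parent is found by dropping the last
# segment of its WBS code, mirroring the hierarchy of Figure 18-1.

def parent_wbs(code):
    """Return the parent WBS code, or None for a top-level team."""
    parts = code.split(".")
    return ".".join(parts[:-1]) if len(parts) > 1 else None

# Hypothetical roster of teams keyed by WBS number.
teams = {
    "2": "Product B Team",
    "2.2": "Sub-Product 2.2 Team",
    "2.2.1": "Sub-Product 2.2.1 Team",
    "4": "Process 2 Team",
    "4.2": "Sub-Process 4.2 Team",
}

def escalation_path(code):
    """Chain of teams from a sub-tier team up to its top-level team,
    i.e., the vertical communication path described in the text."""
    path = [teams[code]]
    p = parent_wbs(code)
    while p is not None:
        path.append(teams[p])
        p = parent_wbs(p)
    return path

print(escalation_path("2.2.1"))
# -> ['Sub-Product 2.2.1 Team', 'Sub-Product 2.2 Team', 'Product B Team']
```

Walking the WBS code upward like this reflects how decisions and issues flow vertically through the team hierarchy.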
The next level of teams is illustrated in Figure 18-1 as either product or process teams. These teams are responsible for designing system segments (product teams) or designing the supporting or enabling products (process teams). At this level the process teams are coordinating the system-level process development. For example, the support team will integrate the supportability analysis from the parts being generated in lower-level design and support process teams. Teams below this level continue the process at a lower level of decomposition. Teams are formed only to the lowest level necessary to control the integration. DoD team structures rarely extend lower than levels three or four on the WBS, while contractor teams may extend to lower levels, depending on the complexities of the project and the approach favored by management.

Figure 18-2. Cross Membership (shows a member shared between the Product A Team and the Integration and Test Team, and another shared between the Product A and Product B Teams.)
The team structure shown by Figure 18-1 is a hierarchy that allows continuous vertical communication. This is achieved primarily by having the team leaders, and, if appropriate, other key members of a team, be members of the next higher team. In this manner the decisions of the higher team are immediately distributed and explained to the next team level, and the decisions of the lower teams are presented to the higher team on a regular basis. Through this method the decisions of lower-level teams follow the decision making of higher teams, and the higher-level teams' decisions incorporate the concerns of lower-level teams.
The normal method to obtain horizontal communication is shown in Figure 18-2. At least one team member from the Product A Team is also a member of the Integration and Test Team. This member would have a good general knowledge of both testing and Product A. The member's job would be to assist the two teams in designing their end or enabling products, and in making each understand how its decisions would impact the other team. Similarly, the member that sits on both the Product A and B teams would have to understand both the technology and the interface issues associated with both items.
The above is an idealized case. Each type of system, each type of contractor organization, and each level of available resources requires a tailoring of this structure. With each phase the focus and the tasks change, and so should the structure. As phases are transited, the enterprise structure and team membership should be re-evaluated and updated.
18.2 INTEGRATED TEAMS
Integrated teams are composed of representatives from all appropriate primary functional disciplines working together with a team leader to:
• Design successful and balanced products,
• Develop the configuration for successful life-cycle control,
• Identify and resolve issues, and
• Make sound and timely decisions.
The teams follow the disciplined approach of the systems engineering process, starting with requirements analysis through to the development of configuration baselines as explained earlier in this book. The system-level design team should be responsible for systems engineering management planning and execution. The system-level management team, the highest-level program IPT, is responsible for acquisition planning, resource allocation, and management. Lower-level teams are responsible for planning and executing their own processes.
Team Organization
Good teams do not just happen; they are the result of calculated management decisions and actions. Concurrent with development of the enterprise organization discussed above, each team must also be developed. Basically, the following are key considerations in planning for a team within an enterprise network:
• The team must have appropriate representation from the primary functions, technical specialties, and business support,

• There must be links to establish vertical and horizontal communication in the enterprise,

• Cross membership should not be over-used; as a rough rule of thumb, working-level members should serve on no more than three or four teams, and

• Ensure appropriate representation of government, contractor, and vendors to assure integration across key organizations.
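The cross-membership rule of thumb above can be checked mechanically. The sketch below is a hypothetical illustration (the rosters, member names, and the exact threshold of four teams are assumptions for the example, not from the source):

```python
# Hypothetical sketch of the cross-membership rule of thumb: flag
# working-level members assigned to more than a few teams.

from collections import Counter

MAX_TEAMS = 4  # rough rule of thumb from the text: three or four teams

# Illustrative rosters; names and teams are made up for the example.
team_rosters = {
    "Product A Team": ["avery", "blake", "casey"],
    "Product B Team": ["avery", "drew"],
    "Integration and Test Team": ["avery", "casey"],
    "Support Team": ["avery", "drew"],
    "Risk Team": ["avery"],
}

def overloaded_members(rosters, max_teams=MAX_TEAMS):
    """Return members whose team count exceeds the rule-of-thumb limit."""
    counts = Counter(m for roster in rosters.values() for m in roster)
    return {m: n for m, n in counts.items() if n > max_teams}

print(overloaded_members(team_rosters))  # -> {'avery': 5}
```

A simple count like this would flag candidates for roster rebalancing before over-commitment degrades team performance.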
Team Development
When teams are formed they go through a series of phases before a synergistic, self-actuating team evolves. These phases are commonly referred to as forming, storming, norming, and performing. The timing and intensity of each phase will depend on the team size, membership personality, effectiveness of the team building methods employed, and team leadership. The team leaders and an enterprise-level facilitator provide leadership during the team development.
Forming is the phase where the members are introduced to their responsibilities and other members. During this period members will tend to need a structured situation with clarity of purpose and process. If members are directed during this initial phase, their uncertainty, and therefore their apprehension, is reduced. Facilitators controlling the team building should give the members rules and tasks, but gradually reduce the level of direction as the team members begin to relate to each other. As members become more familiar with the other members, the rules, and the tasks, they become more comfortable in their environment and begin to interact at a higher level.
This starts the storming phase. Storming is the conflict brought about by interaction relating to the individuals' manner of dealing with the team tasks and personalities. Its outcome is members who understand the way they have to act with other members to accomplish team objectives. The dynamics of storming can be very complex and intense, making it the critical phase. Some teams will go through it quickly without a visible ripple, others will be loud and hot, and some will never emerge from this phase. The team building facilitators must be alert to dysfunctional activity. Members may need to be removed or teams reorganized. Facilitators during this period must act as coaches, directing but in a personal, collaborative way. They should also be alert for members that are avoiding storming, because the team will not mature if there are members who are not personally committed to participate in it.
Once the team has learned to interact effectively, it begins to shape its own processes and become more effective in joint tasks. It is not unusual to see some recurrence of storming, but if the storming phase was properly transitioned these incidences should be minor and easily passed. In this phase, norming, the team building facilitators become facilitators to the team: not directing, but asking penetrating questions to focus the members. They also monitor the teams and correct emerging problems.
As the team continues to work together on their focused tasks, their performance improves until they reach a level of self-actuation and quality decision making. This phase, performing, can take a while to reach; 18 months to two years for a system-level design team would not be uncommon. During the performing stage, the team building facilitator monitors the teams and corrects emerging problems.
At the start of a project or program effort, team building is commonly done on an enterprise basis with all teams brought together in a team-building exercise. There are two general approaches to the exercise:
• A team-learning process where individuals are given short but focused tasks that emphasize group decisions, trust, and the advantages of diversity.

• A group work-related task that is important but achievable, such as a group determination of the enterprise processes, including identifying and removing non-value-added traditional processes.
Usually these exercises allow the enterprise to pass through most of the storming phase if done correctly. Three weeks to a month is reasonable for this process if the members are in the same location. Proximity does matter, and the team building and later team performance are typically better if the teams are co-located.
18.3 TEAM MAINTENANCE
Teams can be extremely effective, but they can be fragile. The maintenance of the team structure is related to empowerment, team membership issues, and leadership.
Empowerment
The term empowerment relates to how responsibilities and authority are distributed throughout the enterprise. Maintenance of empowerment is important to promote member ownership of the development process. If members do not have personal ownership of the process, the effectiveness of the team approach is reduced or even neutralized. The quickest way to destroy participant ownership is to direct, or even worse, overturn solutions that are properly the responsibility of the team. The team begins to see that the responsibility for decisions is at a higher level rather than at their level, and that their responsibility is to follow orders, not solve problems.
Empowerment requires:
• The flow of authority through the hierarchy of teams, not through personal direction (irrespective of organizational position). Teams should have clear tasking and boundaries established by the higher-level teams.

• Responsibility for decision making to be appropriate for the level of team activity. This requires management and higher-level teams to be specific, clear, complete, and comprehensive in establishing focus and tasking, and in specifying what decisions must be coordinated with higher levels. They should then avoid imposing or overturning decisions more properly in the realm of a lower level.
• Teams at each level to be given a clear understanding of their duties and constraints. Within the bounds of those constraints and assigned duties, members should have autonomy. Higher-level teams and management either accept their decisions, or renegotiate the understanding of the task.
Membership Issues
Another maintenance item of import is team member turnover. Rotation of members is a fact of life, and a necessary process to avoid teams becoming too closed. However, if the team has too fast a turnover, or new members are not fully assimilated, the team performance level will decline and possibly revert to storming. The induction process should be a team responsibility that includes the immediate use of the new team member in a jointly performed, short-term, easily achievable, but important task.
Teams are responsible for their own performance, and therefore should have significant say over the choice of new members. In addition, teams should have the power to remove a member; however, this should be preceded by identification of the problem and active intervention by the facilitator. Removal should be a last resort.
Awards for performance should, where possible, be given to the team rather than to individuals (or equally to all individuals on the team). This achieves several things: it establishes a team focus, shows recognition of the team as a cohesive force, recognizes that the quality of individual effort is at least in part due to team influence, reinforces the membership's dedication to team objectives, and avoids team member segregation due to uneven awards. Some variation on this theme is appropriate where different members belong to different organizations, and a common award system does not exist. The system-level management team should address this issue, and where possible assure equitable awards are given to team members. A very real constraint on cash awards in DoD arises in the case of teams that include both civilian and military members. Military members cannot be given cash awards, while civilians can. Consequently, managers must actively seek ways to reward all team members appropriately, leaving no group out at the expense of others.
Leadership
Leadership is provided primarily by the organizational authority responsible for the program, the enterprise facilitator, and the team leaders. In a DoD program, the organizational leaders are usually the program manager and the contractor senior manager. These leaders set the tone of the enterprise's adherence to empowerment, the focus of the technical effort, and the team leadership of the system management team. These leaders are responsible to see that the team environment is maintained. They should coordinate their actions closely with the facilitator.
Facilitators
Enterprises that have at least one facilitator find that team and enterprise performance is easier to maintain. The facilitator guides the enterprise through the team building process, monitors the team network through metrics and other feedback, and makes necessary corrections through facilitation. The facilitator position can be:
• A separate position in the contractor organization,

• Part of the responsibilities of the government systems engineer or contractor project manager, or

• Any responsible position in the first level below the above that is related to risk management.
Obviously the most effective position would be one that allows the facilitator to concentrate on the teams' performance. Enterprise-level facilitators should have advanced facilitator training and (recommended) at least a year of mentored experience. Facilitators should also have significant broad experience in the technical area related to the development.
Team Leaders
The team leaders are essential for providing and guiding the team focus, providing vertical communication to the next level, and monitoring the team's performance. Team leaders must have a clear picture of what constitutes good performance for their team. They are not supervisors, though in some organizations they may have supervisory administrative duties. The leader's primary purpose is to assure that the environment is present that allows the team to perform at its optimum level, not to direct or supervise.
The team leader's role includes several difficult responsibilities:
• Taking on the role of coach as the team forms,
• Facilitating as the team becomes self-sustaining,
• Sometimes serving as director (only when a team has failed or needs refocus or correction, and then in concert with the facilitator),
• Providing education and training for members,
• Facilitating team learning,
• Representing the team to upper management and the next higher-level team, and
• Facilitating team disputes.
Team leaders should be trained in basic facilitator principles. This training can be done in about a week, and there are numerous training facilities or companies that can offer it.
18.4 TEAM PROCESSES
Teams develop their processes from the principles of systems engineering management as presented earlier in the book. The output of the teams is the design documentation associated with products identified in the system architecture, including both end product components and enabling products.
Teams use several tools to enhance their productivity and improve communication among enterprise members. Some examples are:
• Constructive modeling (CAD/CAE/CAM/CASE) to enhance design understanding and control,
• Trade-off studies and prioritization,
• Event-driven schedules,
• Prototyping,
• Metrics, and most of all
• Integrated membership that represents the life cycle stakeholders.
Integrated Team Rules
The following is a set of general rules that should guide the activities and priorities of teams in a system design environment:
• Design results must be communicated clearly, effectively, and in a timely manner.

• Design results must be compatible with initially defined requirements.

• Continuous “up-the-line” communication must be institutionalized.

• Each member needs to be familiar with all system requirements.

• Everyone involved in the team must work from the same database.

• Only one member of the team has the authority to make changes to one set of master documentation.

• All members have the same level of authority (one person, one vote).
• Team participation is consistent, success-oriented, and proactive.
• Team discussions are open with no secrets.
• Team member disagreements must be reasoned disagreements (an alternative plan of action versus unyielding opposition).
• Trade studies and other analysis techniques are used to resolve issues.
• Issues are raised and resolved early.
• Complaints about the team are not voiced outside the team. Conflicts must be resolved internally.
Guidelines for Meeting Management
Even if a team is co-located as a work unit, regular meetings will be necessary. These meetings and their proper running become even more important if the team is not co-located and the meeting is the primary means of one-on-one contact. A well-run technical meeting should incorporate the following considerations:
• Meetings should be held only for a specific purpose, and a projected duration should be targeted.

• Advance notice of meetings should normally be at least two weeks to allow preparation and communication between members.

• Agendas, including time allocations for topics and supportive material, should be distributed no less than three business days before the team meeting. The objective of the meeting should be clearly defined.

• Stick to the agenda during the meeting; then cover new business, then review action items.

• Meeting summaries should record attendance, document any decisions or agreements reached, document action items and associated due dates, provide a draft agenda for the next meeting, and frame issues for higher-level resolution.

• Draft meeting summaries should be provided to members within one working day of the meeting. A final summary should be issued within two working days after the draft comments deadline.
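The agenda and summary deadlines above can be computed with a small business-day helper. This is a minimal sketch under stated assumptions: the dates are invented for illustration, and the business-day logic ignores holidays.

```python
# Illustrative helper for the meeting deadlines above. The business-day
# calculation is a simplification (weekends only; holidays are ignored).

from datetime import date, timedelta

def add_business_days(start, days):
    """Step forward (or back, for negative days) skipping weekends."""
    step = 1 if days >= 0 else -1
    remaining = abs(days)
    current = start
    while remaining:
        current += timedelta(days=step)
        if current.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return current

meeting = date(2001, 1, 17)  # a Wednesday, chosen for the example

# Agenda: no less than three business days before the meeting.
agenda_due = add_business_days(meeting, -3)

# Draft summary: within one working day after the meeting.
draft_summary_due = add_business_days(meeting, 1)

print(agenda_due, draft_summary_due)  # -> 2001-01-12 2001-01-18
```

The final-summary deadline (two working days after the draft comments deadline) could be computed the same way once that deadline is known.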
18.5 BARRIERS TO INTEGRATION
There are numerous barriers to building and maintaining a well-functioning team organization, and they are difficult to overcome. Any one of these barriers can negate the effectiveness of an integrated development approach. Common barriers include:

• Lack of top management support,

• Team members not empowered,

• Lack of access to a common database,

• Lack of commitment to a cultural change,

• Functional organization not fully integrated into a team process,

• Lack of planning for team effort,

• Staffing requirements conflict with teams,

• Team members not co-located,

• Insufficient team education and training,

• Lessons learned and successful practices not shared across teams,

• Inequality of team members,

• Lack of commitment based on perceived uncertainty,

• Inadequate resources, and

• Lack of required expertise on the part of either the contractor or the government.
Breaking Barriers
Common methods to combat barriers include:
• Education and training, and then more education and training: it breaks down the uncertainty of change, and provides a vision and method for success.

• Use a facilitator not only to build and maintain teams, but also to observe and advise management.

• Obtain management support up front. Management must show leadership by managing the teams' environment rather than trying to manage people.

• Use a common database open to all enterprise members.

• Establish a network of teams that integrates the design and provides horizontal and vertical communication.

• Establish a network that does not over-tax available resources. Where a competence is not available in the associated organizations, hire it through a support contractor.

• Where co-location is not possible, hold regular working sessions of several days' duration. Telecommunications, video conferencing, and other technology-based techniques can also go far to alleviate the problems of non-collocation.
Summary Comments
• Integrating system development is a systems engineering approach that integrates all essential primary function activities through the use of multi-disciplinary teams, to optimize the design, manufacturing, and supportability processes.

• Team building goes through four phases: forming, storming, norming, and performing.

• Key leadership positions in a program network of teams are the program manager, facilitator, and team leaders.

• A team organization is difficult to build and maintain. It requires management attention and commitment over the duration of the teams involved.
SUPPLEMENT 18-A
IPPD – A DOD MANAGEMENT PROCESS
The DoD policy of Integrated Product and Process Development (IPPD) is a broad view of integrated system development which includes not only systems engineering, but other areas involved in formal decision making related to system development. DoD policy emphasizes integrated management at and above the Program Manager (PM) level. It requires IPPD at the systems engineering level, but does not direct specific organizational structures or procedures, in recognition of the need to tailor the IPPD process to every individual situation.

Integrated Product Teams

One of the key IPPD tenets is multi-disciplinary integration and teamwork achieved through the use of Integrated Product Teams (IPTs). While IPTs may not be the best solution for every management situation, the requirement to produce integrated designs that give consideration to a wide array of technical and business concerns leads most organizations to conclude that IPTs are the best organizational approach to systems management. PMs should remember that the participation of a contractor or a prospective contractor on an IPT should be in accordance with statutory requirements, such as procurement integrity rules. The service component's legal advisor must review prospective contractor involvement on IPTs. To illustrate issues the government-contractor team arrangement raises, the text box at the end of this section lists nine rules developed for government members of the Advanced Amphibious Assault Vehicle (AAAV) design IPTs.

The Secretary of Defense has directed that DoD perform oversight and review by using IPTs. These IPTs function in a spirit of teamwork with participants empowered and authorized, to the maximum extent possible, to make commitments for the organization or the functional area they represent. IPTs are composed of representatives from all appropriate functional disciplines working together to build successful programs and enable decision makers to make the right decisions at the right time.

DoD IPT Structure

The DoD oversight function is accomplished through a hierarchy of teams that includes levels of management from DoD to the program level. There are three basic levels of IPTs: the Overarching IPT (OIPT), the Working IPTs (WIPTs), and Program IPTs, with the focus and responsibilities shown by Figure 18-3. For each ACAT I program, there will be an OIPT and at least one WIPT. WIPTs will be developed for particular functional topics, e.g., test, cost/performance, contracting, etc. An Integrating IPT (IIPT) will coordinate WIPT efforts and cover all topics not otherwise assigned to another IPT. These teams are structurally organized as shown on Figure 18-4.

Overarching IPT (OIPT)

The OIPT is a DoD-level team whose primary responsibility is to advise the Defense Acquisition Executive on issues related to programs managed at that level. The OIPT membership is made up of the principals that are charged with responsibility for the many functional offices at the Office of the Secretary of Defense (OSD).

The OIPT provides:

• Top-level strategic guidance,
Figure 18-3. Focus and Responsibilities of IPTs

Organization: OSD and Components. Teams: OIPT*
• Focus: strategic guidance; tailoring; program assessment; resolve issues elevated by WIPTs.
• Participant responsibilities: program success; functional area leadership; independent assessment; issue resolution.

Teams: WIPTs*
• Focus: planning for program success; opportunities for acquisition reform (e.g., innovation, streamlining); identify/resolve program issues; program status.
• Participant responsibilities: functional knowledge and experience; empowered contribution; recommendations for program success; communicate status and unresolved issues.

Organization: Program Teams and System Contractors. Teams: Program IPTs**
• Focus: program execution; identify and implement acquisition reform.
• Participant responsibilities: manage complete scope of program, resources, and risk; integrate government and contractor efforts; report program status and issues.

* Covered in “Rules of the Road”
** Covered in “Guide to Implementation and Management of IPPD in DoD Acquisition”

Extracted from Rules of the Road, A Guide for Leading Successful Integrated Product Teams.

Figure 18-4. IPT Structure (shows the MDA (DAB or MAISRC) and the Overarching IPT at the oversight and review level, linked through the WIPTs, with an Integrating IPT coordinating Cost/Performance and other WIPTs, to the Program IPTs (system management teams) that execute within the program management environment.)
• Functional area leadership,
• Forum for issue resolution,
• Independent assessment to the MDA,
• Determine decision information for the next milestone review, and

• Provide approval of the WIPT structures and resources.
Working-Level IPT (WIPT)
The WIPTs may be thought of as teams that link the PM to the OIPT. WIPTs are typically functionally specialized teams (test, cost-performance, etc.). The PM is the designated head of the WIPT, and membership typically includes representation from various levels, from the program to the OSD staff. The principal functions of the WIPT are to advise the PM in the area of specialization and to advise the OIPT of program status.
The duties of the WIPT include:
• Assisting the PM in developing strategies and in program planning, as requested by the PM,

• Establishing an IPT plan of action and milestones,

• Proposing tailored document and milestone requirements,

• Reviewing and providing early input to documents,

• Coordinating WIPT activities with the OIPT members,

• Resolving or elevating issues in a timely manner, and

• Obtaining principals' concurrence with applicable documents or portions of documents.
Program IPTs
Program IPTs are teams that perform the program tasks. The integration of contractors with the government on issues relative to a given program truly occurs at the program IPT level. The development teams (product and process teams) described earlier in this chapter would be considered program IPTs. Program IPTs would also include teams formed for business reasons, for example, teams established to prepare Planning, Programming, and Budgeting System (PPBS) documentation, to prepare for Milestone Approval, to develop the RFP, or the like.
SUPPLEMENT 18-B
GOVERNMENT ROLE ON IPTs
The following list was developed by the Advanced Amphibious Assault Vehicle (AAAV) program to inform its government personnel of their role on contractor/government integrated teams. It addresses government responsibilities and the realities imposed by contractual and legal constraints. Though it is specific to the AAAV case, it can be used as guidance in the development of team planning for other programs.
1. The IPTs are contractor-run entities. We do not lead or manage the IPTs.

2. We serve as “customer” representatives on the IPTs. We are there to REDUCE THE CYCLE TIME of contractor-Government (customer) communication. In other words, we facilitate contractor personnel getting Government input faster. Government IPT members also enable us to provide contractor IPT status and issue information up the Government chain on a daily basis (instead of monthly or quarterly).

3. WE DO NOT DO the contractor’s IPT WORK, or any portion of their work or tasks. The contractor has been contracted to perform the tasks outlined in the contract SOW; their personnel and their subcontractors’ personnel will perform those tasks, not us. But Government IPT members will be an active part of the deliberations during the development of, and participate in “on-the-fly” reviews of, deliverables called out in CDRLs.
4. When asked by contractor personnel for theGovernment’s position or interpretation, Gov-ernment IPT members can offer their personalopinion, as an IPT member, or offer expertopinion; you can provide guidance as to our
“customer” opinion and what might be acceptable to the Government, but you can only offer the “Government” position for items that have been agreed to by you and your Supervisor. IT IS UP TO YOUR SUPERVISORS TO EMPOWER EACH OF YOU TO AN APPROPRIATE LEVEL OF AUTHORITY. It is expected that this will start at a minimal level of authority and be expanded as each individual’s IPT experience and program knowledge grows. However… (see items 5 and 6).
5. Government IPT members CANNOT authorize any changes or deviations to/from the contract SOW or Specifications. Government IPT members can participate in the deliberations and discussions that would result in the suggestion of such changes. If/When an IPT concludes that the best course of action is not in accordance with the contract, and a contract change is in order, then the contractor must submit a Contract Change Request (CCR) through normal channels.
6. Government IPT members CANNOT authorize the contractor to perform work that is in addition to the SOW/contract requirements. The contractor IPTs can perform work that is not specifically required by the contract, at their discretion (provided they stay within the resources identified in the Team Operating Contract (TOC)).
7. Government IPT member participation in contractor IPT activities IS NOT Government consent that the work is approved by the Government or is chargeable to the contract. If an IPT is doing something questionable, identify it to your supervisor or Program Management Team (PMT) member.
Systems Engineering Fundamentals Chapter 18
8. Government members of IPTs do not approve or disapprove of IPT decisions, plans, or reports. You offer your opinion in their development, you vote as a member, and you coordinate issues with your Supervisor and bring the “Government” opinion (in the form of your opinion) back to the IPT, with the goal of improving the quality of the products; you don’t have veto power.
9. Government IPT members are still subject to all the Government laws and regulations regarding “directed changes,” ethics, and conduct. Your primary function is to perform those functions that are best done by Government employees, such as:
• Conveying to contractor personnel your knowledge/expertise on Marine Corps operations and maintenance techniques;
• Interfacing with all other Governmentorganizations (e.g., T&E);
• Controlling and facilitating government furnished equipment and materials (GFE and GFM);
• Ensuring timely payment of submitted vouchers; and
• Full participation in Risk Management.
Chapter 19 Contractual Considerations
Figure 19-1. Cooperative Systems Engineering Effort
(Figure: Government and Industry joined in a cooperative systems engineering effort through the contract and its elements: the WBS, SOO/SOW, CDRL, and performance-based specs and standards.)
CHAPTER 19
CONTRACTUAL CONSIDERATIONS
19.1 INTRODUCTION

This chapter describes how the systems engineer supports the development and maintenance of the agreement between the project office and the contractor that will perform or manage the detail work to achieve the program objectives. This agreement has to satisfy several stakeholders and requires coordination between responsible technical, managerial, financial, contractual, and legal personnel. It requires a document that conforms to the Federal Acquisition Regulations (and supplements), program PPBS documentation, and the System Architecture. As shown by Figure 19-1, it also has to result in a viable cooperative environment that allows necessary integrated teaming to take place.

The role of technical managers or systems engineers is crucial to satisfying these diverse concerns. The technical manager or systems engineer:

• Supports or initiates the planning effort. The technical risk drives the schedule and cost risks, which in turn should drive the type of contractual approach chosen,

• Prepares or supports the preparation of the source selection plan and solicitation clauses concerning proposal requirements and selection criteria,

• Prepares task statements,
Systems Engineering Fundamentals Chapter 19
(Figure: the contracting process flows from acquisition planning (requirement determination, requirement specification, procurement planning, and procurement requests (RFP)) through source selection (solicitation, evaluation, negotiation, selection of source, and award) to contract administration (assignment of system control, performance measurement, contract modifications, and completion/payment/closeout).)

Figure 19-2. Contracting Process
• Prepares the Contract Data Requirements List (CDRL),
• Supports negotiation and participates in source selection evaluations,
• Forms integrated teams and coordinates the government side of combined government and industry integrated teams,
• Monitors the contractor’s progress, and
• Coordinates government action in support of the contracting officer.
This chapter reflects the DoD approach to contracting for system development. It assumes that there is a government program or project office that is tasking a prime contractor in a competitive environment. However, in DoD there is variation on this theme. Some project activities are tasked directly to a government agency or facility, or are contracted sole source. The processes described in this chapter should be tailored as appropriate for these situations.
19.2 SOLICITATION DEVELOPMENT
As shown by Figure 19-2, the DoD contracting process begins with planning efforts. Planning includes development of a Request for Proposal (RFP), specifications, a Statement of Objectives (SOO) or Statement of Work (SOW), a source selection plan, and the Contract Data Requirements List (CDRL).
Request for Proposal (RFP)
The RFP is the solicitation for proposals. The government distributes it to potential contractors. It describes the government’s need and what the offeror must do to be considered for the contract. It establishes the basis for the contract to follow.
The key systems engineering documents included in a solicitation are:
• A statement of the work to be performed. In DoD this is a SOW. A SOO can be used to obtain a SOW or equivalent during the selection process.
Figure 19-3. Optional Approaches
(Figure: four solicitation options arrayed from maximum to minimum contractor flexibility. Option 1: the government develops the ORD, evaluation factors, and instructions to offerors; the contractor develops the proposed system concept(s), WBS, SOW, and spec before the contract is signed. Option 2: the government selects the concept(s) and develops a draft SOO, draft technical requirements and WBS, evaluation factors, and instructions to offerors; the contractor develops the SOW. Option 3: the government selects the concept(s) and develops a draft system spec, WBS, SOW, evaluation factors, and instructions to offerors. Option 4: the government provides a detail spec and drawings with instructions to bidders.)
• A definition of the system. Appropriate specifications and any additional baseline information necessary for clarification form this documentation. This is generated by the systems engineering process as explained earlier in this book.
• A definition of all data required by the customer. In DoD this is accomplished through use of the Contract Data Requirements List (CDRL).
The information required to be in the proposals responding to the solicitation is also key for the systems engineer. An engineering team will decide the technical and technical management merits of the proposals. If the directions to the offerors are not clearly and correctly stated, the proposals will not contain the information needed to evaluate the offerors. In DoD, Sections L and M of the RFP are those pivotal documents.
Task Statement
The task statement prepared for the solicitation will govern what is actually received by the government, and establish criteria for judging contractor performance. Task requirements are expressed in
the SOW. During the solicitation phase the tasks can be defined in a very general way by a SOO. Specific details concerning SOOs and SOWs are attached at the end of this chapter.
As shown by Figure 19-3, solicitation tasking approaches can be categorized into four basic options: use of a basic operational need, a SOO, a SOW, or a detail specification.
Option 1 maximizes contractor flexibility by submitting the Operational Requirements Document (ORD) to offerors as a requirements document (e.g., in place of a SOO/SOW), and the offerors are requested to propose a method of developing a solution to the ORD. The government identifies its areas of concern in Section M (evaluation factors) of the RFP to provide guidance. Section L (instructions to the offerors) should require that the bidders write a SOW based on the ORD as part of their proposal. The offeror proposes the type of system. The contractor develops the system specification and the Work Breakdown Structure (WBS). In general, this option is appropriate for early efforts where contractor input is necessary to expand the understanding of physical solutions and alternative system approaches.
Option 2 provides moderate contractor flexibility by submitting a SOO to the offerors as the Section C task document (e.g., in place of a SOW). The government identifies its areas of concern in Section M (evaluation factors) to provide guidance. Section L (instructions to the offerors) should require as part of the proposal that offerors write a SOW based on the SOO. In this case the government usually selects the type of system, writes a draft technical-requirements document or system specification, and writes a draft WBS. This option is most appropriate when previous efforts have not defined the system tightly. The effort should not have any significant design input from the previous phase. This method allows for innovative thinking by the bidders in the proposal stage. It is a preferred method for design contracts.
Option 3 lowers contractor flexibility and increases the clarity of contract requirements. In this option the SOW is provided to the contractor as the contractual task requirements document. The government provides instructions in Section L to the offerors to describe the information needed by the government to evaluate the contractor’s ability to accomplish the SOW tasks. The government identifies evaluation factors in Section M to provide guidance on the priority of the solicitation requirements. In most cases, the government selects the type of system and provides the draft system spec as well as the draft WBS. This option is most appropriate when previous efforts have defined the system to the lower WBS levels or where the product baseline defines the system. Specifically, when there is substantial input from the previous design phase and there is a potential for a different contractor on the new task, the SOW method is appropriate.
Option 4 minimizes contractor flexibility and requires maximum clarity and specificity of contract requirements. This option uses an Invitation for Bid (IFB) rather than an RFP. It provides bidders with specific detailed specifications or task statements describing the contract deliverables. They tell the contractor exactly what is required and how to do it. Because there is no flexibility in the contractual task, the contract is awarded based on the low bid. This option is appropriate when
the government has detailed specifications or other product baseline documentation that defines the deliverable item sufficiently for production. It is generally used for simple build-to-print reprocurement.
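The progression through the four options above can be sketched as a simple rule of thumb. The maturity categories and their mapping below are my paraphrase of the text, not an official decision rule:

```python
# Sketch: mapping how well prior efforts defined the system to one of the
# four solicitation tasking options described in the text. The category
# names are invented labels for the situations each option fits.

def solicitation_option(system_definition):
    """Return the tasking option (1-4) suggested for a given maturity level."""
    options = {
        "concept_exploration": 1,  # ORD only: contractor proposes concept(s)
        "loosely_defined": 2,      # SOO: contractor drafts the SOW
        "well_defined": 3,         # government-provided SOW
        "build_to_print": 4,       # IFB with detail specs and drawings
    }
    return options[system_definition]

print(solicitation_option("loosely_defined"))  # 2
print(solicitation_option("build_to_print"))   # 4
```

Contractor flexibility decreases, and clarity of contract requirements increases, as the option number rises.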
Data Requirements
As part of the development of an IFB or RFP, the program office typically issues a letter that describes the planned procurement and asks integrated team leaders and affected functional managers to identify and justify their data requirements for that contract. The data should be directly associated with a process or task the contractor is required to perform.
The affected teams or functional offices then develop a description of each data item needed. Data Item Descriptions (DIDs), located in the Acquisition Management Systems and Data Requirements Control List (AMSDL), can be used for guidance in developing these descriptions. Descriptions should be performance based, and format should be left to the contractor as long as all pertinent data is included. The descriptions are then assembled and submitted for inclusion in the solicitation. The listing of data requirements in the contract follows an explicit format and is referred to as the CDRL.
In some cases the government will relegate the data call to the contractor. In this case it is important that the data call be managed by a government/contractor team, and that any disagreements be resolved prior to the formal contract change incorporating data requirements. When a SOO approach is used, the contractor should be required by Section L to propose data requirements that correspond to their proposed SOW.
There is current emphasis on electronic submission of contractually required data. Electronic Data Interchange (EDI) sets the standards for compatible data communication formats.
Additional information on data management, types of data, contractual considerations, and sources of data is presented in Chapters 10 and
13. Additional information on CDRLs is provided at the end of this chapter.
Technical Data Package Controversy
Maintenance of a detailed baseline such as the “as built” description of the system, usually referred to as a Technical Data Package (TDP), can be very expensive and labor intensive. Because of this, some acquisition programs may elect not to purchase this product description. If the Government will not own the TDP, the following questions must be resolved prior to solicitation issue:
• What are the pros and cons associated with the TDP being owned by the contractor?
• What are the support and reprocurement impacts?
• What are the product improvement impacts?
• What are the open system impacts?
In general, the government should have sufficient data rights to address life cycle concerns, such as maintenance and product upgrade. The extent to which government control of configurations and data is necessary will depend on support and reprocurement strategies. This, in turn, demands that those strategic decisions be made as early as possible in the system development to avoid purchasing data rights as a hedge against the possibility that the data will be required later in the program life cycle.
Source Selection
Source selection determines which offeror will be the contractor, so this choice can have a profound impact on program risk. The systems engineer must approach the source selection with great care because, unlike many planning decisions made early in product life cycles, the decisions made relative to source selection generally cannot be easily changed once the process begins. Laws and regulations governing the fairness of the process require that changes be made very carefully, and often at the expense of considerable time and effort on the part of program office and contractor
personnel. In this environment, even minor mistakes can cause distortion of proper selection.
The process starts with the development of a Source Selection Plan (SSP) that relates the organizational and management structure, the evaluation factors, and the method of analyzing the offerors’ responses. The evaluation factors and their priority are transformed into information provided to the offerors in Sections L and M of the RFP. The offerors’ proposals are then evaluated with the procedures delineated in the SSP. These evaluations establish which offerors are conforming, guide negotiations, and are the major factor in contractor selection. The SSP is further described at the end of this chapter.
The systems engineering area of responsibility includes support of SSP development by:
• Preparing the technical and technical management parts of the evaluation factors,
• Organizing technical evaluation team(s), and
• Developing methods to evaluate offerors’ proposals (technical and technical management).
19.3 SUMMARY COMMENTS
• Solicitation process planning includes development of a Request for Proposal, specifications, a Statement of Objectives or Statement of Work, a source selection plan, and the Contract Data Requirements List.
• There are various options available to program offices regarding the guidance and constraints imposed on contractor flexibility. The government, in general, prefers that solicitations be performance-based.
• Data the contractor is required to provide the government is listed on the Contract Data Requirements List (CDRL).
• Source selection is based on the evaluation criteria outlined in the SSP and reflected in Sections L and M of the RFP.
SUPPLEMENT 19-A
STATEMENT OF OBJECTIVES (SOO)
The SOO is an alternative to a government-prepared SOW. A SOO provides the Government’s overall objectives and the offeror’s required support to achieve the contractual objectives. Offerors use the SOO as a basis for preparing a SOW, which is then included as an integral part of the proposal that the government evaluates during the source selection.

Purpose

The SOO expresses the basic, top-level objectives of the acquisition and is provided in the RFP in lieu of a government-written SOW. This approach gives the offerors the flexibility to develop cost effective solutions and the opportunity to propose innovative alternatives.

Approach

The government includes a brief (1- to 2-page) SOO in the RFP and requests that offerors provide a SOW in their proposal. The SOO is typically appended to Section J of the RFP and does not become part of the contract. Instructions for the contractor-prepared SOW would normally be included in or referenced by Section L.

SOO Development

Step 1: The RFP team develops a set of objectives compatible with the overall program direction, including the following:

• User(s) operational requirements,

• Programmatic direction,

• Draft technical requirements, and

• Draft WBS and dictionary.

Step 2: Once the program objectives are defined, the SOO is constructed so that it addresses product-oriented goals and performance-oriented requirements.

SOO and Proposal Evaluations

Section L (Instructions to Offerors) of the RFP must include instructions to the offeror that require using the SOO to construct and submit a SOW. In Section M (Evaluation Criteria) the program office should include the criteria by which the proposals, including the contractor’s draft SOW, will be evaluated. Because of its importance, the government’s intention to evaluate the proposed SOW should be stressed in Sections L and M.

Offeror Development of the Statement of Work

The offeror should establish and define in clear, understandable terms:

• Non-specification requirements (the tasks that the contractor must do),

• What has to be delivered or provided in order for the offeror to get paid,

• What data is necessary to support the effort, and

• Information that would show how the offerors would perform the work and that could differentiate between them in proposal evaluation and contractor selection.
SOO Example: Joint Air-to-Surface Standoff Missile (JASSM)
Statement of Objectives
The Air Force and Navy warfighters need a standoff missile that will destroy the enemies’ war-sustaining capabilities with a launch standoff range outside the range of enemy area defenses. Offerors shall use the following objectives for the pre-EMD and EMD acquisition phases of the JASSM program, along with other applicable portions of the RFP, when preparing proposals and program plans. IMP events shall be traceable to this statement of objectives:
Pre-EMD Objectives
a. Demonstrate, at the sub-system level as a minimum, end-to-end performance of the system concept. Performance will be at the contractor-developed System Performance Specification requirements level determined during this phase without violation of any key performance parameters.
b. Demonstrate the ability to deliver an affordable and producible system at or under the average unit procurement price (AUPP).
c. Provide a JASSM system review including final system design, technical accomplishments, remaining technical risks, and major tasks to be accomplished in EMD.
EMD Objectives
a. Demonstrate through test and/or analysis that all requirements as stated in the contractor-generated System Performance Specification, derived from Operational Requirements, are met, including military utility (operational effectiveness and suitability).
b. Demonstrate the ability to deliver an affordable and producible system at or under the AUPP requirement.
c. Demonstrate all production processes.
d. Produce production-representative systems for operational test and evaluation, including combined development/operational test and evaluation.
At contract award the SOW, as changed through negotiations, becomes part of the contract and the standard for measuring the contractor’s effectiveness.
Figure 19-4. Requirement-WBS-SOW Flow
SUPPLEMENT 19-B
STATEMENT OF WORK (SOW)
The SOW is a specific statement of the work to be performed by the contractor. It is derived from the Program WBS (System Architecture). It should contain, at a minimum, a statement of scope and intent, as well as a logical and clear definition of all tasks required. The SOW normally consists of three parts:

Section 1: Scope – Defines the overall purpose of the program and to what the SOW applies.

Section 2: Applicable Documents – Lists the specifications and standards referenced in Section 3.

Section 3: Requirements – States the tasks the contractor has to perform to provide the deliverables. Tasks should track with the WBS. The SOW describes tasks the contractor has to do; the specifications describe the products.

Statement of Work Preparation and Evaluation Strategies

SOWs should be written by an integrated team of competent and experienced members. The team should:

• Review and use the appropriate WBS for the SOW framework,
(Figure 19-4 traces a system specification requirement, the Air Vehicle’s 1600 Aircraft Subsystems, through the corresponding WBS elements (1600 Aircraft Subsystems, 1610 Landing Gear Systems) to the SOO/SOW task: “3.1 Aircraft Subsystems (WBS 1600). Conduct a development program to include detailed design, manufacture, assembly, and test of all aircraft subsystems.”)
• Set SOW objectives in accordance with the Acquisition Plan and systems engineering planning,
• Develop a SOW tasking outline and check list,
• Establish schedule and deadlines, and
• Develop a comprehensive SOW from the above.
Performance-based SOW
The term performance-based SOW has become a common expression for a SOW that tasks the contractor to perform the duties necessary to provide the required deliverables but is not specific as to the process details. Basically, all SOWs should be performance based; however, past DoD-generated SOWs have had the reputation of being overly directive. A properly developed SOW tasks the contractor without telling him how to accomplish the task.
Evaluating the SOW
The WBS facilitates a logical arrangement of the elements of the SOW and a tracing of the work effort expended under each of the WBS elements. It helps integrated teams ensure all requirements have been included, and provides a foundation for tracking program evolution and controlling the change process. As shown by Figure 19-4, the WBS serves as a link between the requirements and the SOW.
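The WBS-to-SOW tracing described above can be sketched as a two-way coverage check. The element numbers and task names below are hypothetical illustrations, not drawn from any real program:

```python
# Sketch: verifying that every SOW task cites a known WBS element and that
# every WBS element is covered by at least one SOW task.

# WBS dictionary: element number -> element title (illustrative)
wbs = {
    "1600": "Aircraft Subsystems",
    "1610": "Landing Gear Systems",
}

# SOW tasks, each citing the WBS element it implements (illustrative)
sow_tasks = [
    {"para": "3.1", "wbs": "1600", "task": "Design and test aircraft subsystems"},
    {"para": "3.1.1", "wbs": "1610", "task": "Develop the landing gear system"},
]

def untraced_tasks(tasks, wbs):
    """Return SOW tasks that cite no known WBS element."""
    return [t for t in tasks if t["wbs"] not in wbs]

def uncovered_elements(tasks, wbs):
    """Return WBS elements with no corresponding SOW task."""
    cited = {t["wbs"] for t in tasks}
    return sorted(e for e in wbs if e not in cited)

print(untraced_tasks(sow_tasks, wbs))      # []
print(uncovered_elements(sow_tasks, wbs))  # []
```

A nonempty result from either check signals a gap: work being tasked outside the WBS, or a WBS element with no work against it.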
In the past, DoD usually wrote the SOW and, over time, an informal set of rules was developed to assist in drafting them. While the government today generally does not write the SOW, but rather evaluates the contractor’s proposed SOW, those same rules can assist in the government’s role as evaluator.
Statement of Work Rules
In section 1. Scope:
DO NOT:
• Include directed work statements.
• Include data requirements or deliverable products.
In section 2. Applicable Documents:
DO NOT:
• Include guidance documents that apply only to Government PMOs (e.g., DoD 5000 series and service regulations).
In section 3. Requirements:
DO NOT:
• Define work tasks in terms of data to be delivered.
• Order, describe, or discuss CDRL data (OK to reference).
• Express work tasks in data terms.
• Invoke, cite, or discuss a DID.
• Invoke handbooks, service regulations, technical orders, or any other document not specifically written in accordance with MIL-STD-961/962.
• Specify how a task is to be accomplished.
• Use the SOW to amend contract specifications.
• Specify technical proposal or performance criteria or evaluation factors.
• Establish delivery schedules.
• Over specify.
In section 3. Requirements:
DO:
• Specify work requirements to be performed under contract.
• Set SOW objectives to reflect the acquisition plan and systems engineering planning.
• Provide a priceable set of tasks.
• Express work to be accomplished in work words.
• Use “shall” whenever a task is mandatory.
• Use “will” only to express a declaration of purpose or simple futurity.
• Use WBS as an outline.
• List tasks in chronological order.
• Limit paragraph numbering to the 3rd sub-level (3.3.1.1).

• Protect Government interests.
• Allow for contractor’s creative effort.
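The “shall”/“will” rule above lends itself to a crude automated screen. The sketch below only flags task sentences that use “will” or contain no “shall” at all; judging simple futurity versus tasking still takes a human reader, and the sample task wording is invented:

```python
# Sketch: screening SOW task statements for the "shall"/"will" rule.
import re

def screen_tasks(task_statements):
    """Return (paragraph, issue) pairs for suspect task wording."""
    findings = []
    for para, text in task_statements:
        if re.search(r"\bwill\b", text, re.IGNORECASE):
            findings.append((para, 'uses "will" in a task statement'))
        elif not re.search(r"\bshall\b", text, re.IGNORECASE):
            findings.append((para, 'mandatory task lacks "shall"'))
    return findings

tasks = [
    ("3.1", "The contractor shall conduct a development program."),
    ("3.2", "The contractor will deliver test reports."),       # suspect
    ("3.3", "The contractor performs subsystem integration."),  # suspect
]
for para, issue in screen_tasks(tasks):
    print(para, issue)
```

Such a screen catches only mechanical violations; it cannot tell whether a “will” expresses a legitimate declaration of purpose.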
Figure 19-5. CDRL Single Data Item Requirement Example
(Figure: a completed DD Form 1423 from the ATF Dem/Val phase, contract F33657-86-C-2085 with Lockheed. The data item cites SOW paragraph 3.1, is required by ASD/TASE, and is distributed to ASD/TASE, ASD/TASM, ASD/TASL, and the ACO. The Block 16 remarks tailor the DID: contractor format is acceptable; the program risk analysis section is rewritten to require a continuing assessment of technical, supportability, cost, and schedule risks, consistent with and not duplicating the System Integration Plan; revisions are to be submitted as required by changes resulting from the systems engineering process; and associated schedules are to be integrated with the master program planning schedule submitted on magnetic media.)
SUPPLEMENT 19-C
CONTRACT DATA REQUIREMENTS LIST
The Contract Data Requirements List (CDRL) is a list of authorized data requirements for a specific procurement that forms a part of the contract. It is comprised of a series of DD Forms 1423 (individual CDRL forms) containing data requirements and delivery instructions. CDRLs should be linked directly to SOW tasks and managed by the program office data manager. A sample CDRL data requirement is shown in Figure 19-5.

Data requirements can also be identified in the contract via special contract clauses (Federal Acquisition Clauses). Data required by the FAR clauses are usually required and managed by the Contracting Officer.
Data Requirement Sources
Standard Data Item Descriptions (DIDs) define data content, preparation instructions, format, intended use, and recommended distribution of data required of the contractor for delivery. The Acquisition Management Systems and Data Requirements Control List (AMSDL) identifies acquisition management systems, source documents, and standard DIDs. With acquisition reform, the use of DIDs has declined, and data item requirements now are either tailored DIDs or a set of requirements specifically written for the particular RFP in formats agreeable to the contractor and the government.
DD Form 1423 Road Map
Block 1: Data Item Number – represents the CDRL sequence number.
Block 2: Title of Data Item – same as the title entered in item 1 of the DID (DD Form 1664).
Block 4: Authority (Data Acquisition Document Number) – same as item 2 of the DID form and will include a “/t” to indicate the DID has been tailored.
Block 5: Contract Reference – identifies the DID authorized in Block 4 and the applicable document and paragraph numbers in the SOW from which the data flows.
Block 6: Requiring Office – the activity responsible for advising on the technical adequacy of the data.
Block 7: Specific Requirements – may be needed for inspection/acceptance of data.
Block 8: Approval Code – if “A,” it is a critical data item requiring specific, advance, written approval prior to distribution of the final data item.
Block 9: Distribution Statement Required:
Category A is unlimited release to the public.

Category B is limited release to government agencies.

Category C limits release to government agencies and their contractors.

Category D is limited release to DoD offices and their contractors.

Category E is for release to DoD components only.

Category F is released only as directed and is normally classified.
Block 12: Date of First Submission – indicates year/month/day of the first submission and identifies the specific event or milestone for which the data is required.
Block 13: Date of Subsequent Submission – if data is submitted more than once, subsequent dates will be identified.
Block 14: Distribution – identifies each addressee and the number of copies to be received by each. Use office symbols, format of data to be delivered, command initials, etc.
Block 16: Remarks – explains only tailored features of the DID, any additional information for Blocks 1-15, and any resubmittal schedule or special conditions for updating data submitted for government approval.
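The DD Form 1423 blocks described above can be modeled as a simple record for program-office data management. The field names, the DID number, and the sample values below are illustrative assumptions, not an official schema:

```python
# Sketch: a CDRL data item as a record mirroring key DD Form 1423 blocks.
from dataclasses import dataclass, field

@dataclass
class CdrlItem:
    seq_no: str                # Block 1: CDRL sequence number
    title: str                 # Block 2: title of data item
    authority: str             # Block 4: DID number; "/T" marks a tailored DID
    sow_ref: str               # Block 5: SOW paragraph the data flows from
    approval_code: str = ""    # Block 8: "A" = advance written approval needed
    dist_statement: str = "A"  # Block 9: distribution category A-F
    distribution: dict = field(default_factory=dict)  # Block 14: addressee -> copies

    def is_tailored(self) -> bool:
        return self.authority.upper().endswith("/T")

    def needs_advance_approval(self) -> bool:
        return self.approval_code == "A"

item = CdrlItem(
    seq_no="A001",
    title="Risk Management Plan",
    authority="DI-MGMT-81808/T",  # hypothetical tailored DID reference
    sow_ref="3.1",
    approval_code="A",
    distribution={"ASD/TASE": 2},
)
print(item.is_tailored(), item.needs_advance_approval())  # True True
```

Linking each record's `sow_ref` back to a SOW task would support the rule, stated earlier, that every CDRL item be directly associated with a task the contractor is required to perform.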
Figure 19-6. Source Selection Process
(Figure: the Source Selection Authority is advised by the Source Selection Advisory Council, which draws on the Source Selection Evaluation Board; the board is in turn supported by technical evaluation review panels and other review panels.)
SUPPLEMENT 19-D
THE SOURCE SELECTION PLAN
Prior to solicitation issuance, a source selection plan should be prepared by the Program Manager (PM), reviewed by the Contracting Officer, and approved by the Source Selection Authority (SSA). A Source Selection Plan (SSP) generally consists of three parts:

• The first part describes the organization, membership, and responsibilities of the source selection team,

• The second part identifies the evaluation factors, and

• The last part establishes detailed procedures for the evaluation of proposals.

Source Selection Organization

The SSA is responsible for selecting the source whose proposal is most advantageous to the government. The Source Selection Advisory Council (SSAC) provides advice to the SSA based on the Source Selection Evaluation Board’s (SSEB’s) findings and the collective experience of SSAC members. The SSEB generates the information the SSA needs by performing a comprehensive evaluation of each offeror’s proposal. A Technical Evaluation Review Team(s) evaluates the technical portion of the proposals to support the SSEB. The process flow is shown in Figure 19-6.

The PM is responsible for developing and implementing the acquisition strategy, preparing the SSP, and obtaining SSA approval of the plan before the formal solicitation is issued to industry. The systems engineer or technical manager supports the PM’s efforts. The Contracting Officer is responsible for preparation of solicitations and contracts, any communications with potential offerors or offerors, consistency of the SSP with requirements of the Federal Acquisition Regulation (FAR) and DoD FAR Supplement (DFARS), and award of the contract.
Figure 19-7. Evaluation Factors Example

Rating (Points)   Evaluation Criteria – Life Cycle Cost

9-10   Offeror has included a complete Life Cycle Cost analysis that supports their proposal.

7-8    Offeror did not include a complete Life Cycle Cost analysis but has supported their design approach on the basis of Life Cycle Cost.

5-6    Offeror plans to complete a Life Cycle Cost analysis as part of the contract effort and has described the process that will be used.

3-4    Offeror plans to complete a Life Cycle Cost analysis as part of the contract effort but did not describe the process that will be used.

0-2    Life Cycle Cost was not addressed in the Offeror’s proposal.
SSP Evaluation Factors
The evaluation factors are a list, in order of relative importance, of those aspects of a proposal that will be evaluated quantitatively and qualitatively to arrive at an integrated assessment of which proposal can best meet the Government’s need as described in the solicitation. Figure 19-7 shows an example of one evaluation category, life cycle cost. The purpose of the SSP evaluation is to inform offerors of the importance the Government attaches to various aspects of a proposal and to allow the government to make a fair and reasoned differentiation between proposals.
In general, the following guidance should be used in preparing evaluation factors:
• Limit the number of evaluation factors,
• Tailor the evaluation factors to the Government requirement (e.g., combined message of the SOO/SOW, specification, CDRL, etc.), and
• Cost is always an evaluation factor. The cost measure to be used and its relative importance in rating the proposal should be clearly identified.
Factors to Consider
There is not sufficient space here to attempt to exhaustively list all the factors that might influence the decision made in a source selection. The following, however, are indicative of some of the key considerations:
• Is the supplier’s proposal responsive to the government’s needs as specified in the RFP?

• Is the supplier’s proposal directly supportive of the system requirements specified in the system specification and SOO/SOW?

• Have the performance characteristics been adequately specified for the items proposed? Are they meaningful, measurable, and traceable from the system-level requirements?

• Have effectiveness factors been specified (e.g., reliability, maintainability, supportability, and availability)? Are they meaningful, measurable, and traceable from the system-level requirements?

• Has the supplier addressed the requirement for test and evaluation of the proposed system element?
• Have life cycle support requirements been identified (e.g., maintenance resource requirements, spare/repair parts, test and support equipment, personnel quantities and skills, etc.)? Have these requirements been minimized to the extent possible through design?

• Does the proposed design configuration reflect growth potential or change flexibility?

• Has the supplier developed a comprehensive manufacturing and construction plan? Are key manufacturing processes identified along with their characteristics?

• Does the supplier have adequate quality assurance and statistical process control programs?

• Does the supplier have a comprehensive planning effort (e.g., one that addresses program tasks, organizational structure and responsibilities, a WBS, task schedules, program monitoring and control procedures, etc.)?

• Does the supplier’s proposal address all aspects of total life cycle cost?

• Does the supplier have previous experience in the design, development, and production of system elements/components which are similar in nature to the item proposed?
Proposal Evaluation
Proposal evaluation factors can be analyzed with any reasonable trade study approach. Figure 19-8 shows a common approach. In this approach each factor is rated using the evaluation matrix established for each criterion, such as that shown in Figure 19-7. The rating is then multiplied by a weighting factor based on the perceived priority of each criterion. The weighted scores are then summed, and the proposal with the highest total wins.
As with trade studies, the process should be examined for sensitivity problems; in the case of source selection, however, the check must be done with anticipated values prior to release of the RFP.
Figure 19-8. Source Evaluation

                                       Wt.     Proposal A    Proposal B    Proposal C
  Evaluation Criteria                 Factor  Rating Score  Rating Score  Rating Score
                                       (%)
  A. Technical Requirements             25
     1. Performance Characteristics      6      4     24      5     30      5     30
     2. Effectiveness Factors            4      3     12      4     16      3     12
     3. Design Approach                  3      2      6      3      9      1      3
     4. Design Documentation             4      3     12      4     16      2      8
     5. Test and Evaluation Approach     2      2      4      1      2      2      4
     6. Product Support Requirements     4      2      8      3     12      2      8
  B. Production Capability              20
     1. Production Layout                8      5     40      6     48      6     48
     2. Manufacturing Process            5      2     10      3     15      4     20
     3. Quality Control Assurance        7      5     35      6     42      4     28
  C. Management                         20
     1. Planning (Plans/Schedules)       6      4     24      5     30      4     24
     2. Organization Structure           4      4     16      4     12      4     16
     3. Available Personnel Resources    5      3     15      3     20      3     15
     4. Management Controls              5      3     15      3     20      4     20
  D. Total Cost                         25
     1. Acquisition Price               10      7     70      5     50      6     60
     2. Life Cycle Cost                 15      9    135     10    150      8    120
  E. Additional Factors                 10
     1. Prior Experience                 4      4     16      3     12      3     12
     2. Past Performance                 6      5     30      5     30      3     18
  Grand Total                          100           476           516*          450

* Select Proposal B
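The weighted-scoring scheme of Figure 19-8, together with the pre-RFP sensitivity check, can be sketched in a few lines. The sketch below uses only the two Total Cost criteria from the figure; the shifted weights in the sensitivity check are illustrative assumptions, not values from the text.

```python
def total_score(weights: dict, ratings: dict) -> int:
    """Weighted evaluation: sum of weight x rating over all criteria."""
    return sum(weights[c] * ratings[c] for c in weights)

# The two criteria of category D (Total Cost) from Figure 19-8.
weights = {"acquisition_price": 10, "life_cycle_cost": 15}   # percent weights
proposals = {
    "A": {"acquisition_price": 7, "life_cycle_cost": 9},
    "B": {"acquisition_price": 5, "life_cycle_cost": 10},
}

scores = {name: total_score(weights, r) for name, r in proposals.items()}
best = max(scores, key=scores.get)   # A: 205, B: 200 -> "A" on these two criteria

# Sensitivity check: shift weight from price toward life cycle cost and see
# whether the ranking flips. Here it does, so the outcome is weight-sensitive.
shifted = {"acquisition_price": 8, "life_cycle_cost": 17}    # assumed perturbation
shifted_scores = {name: total_score(shifted, r) for name, r in proposals.items()}
```

Because a small shift in relative weights can reverse the ranking, the weights and rating matrices must be settled, and their sensitivity understood, before the RFP is released.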
CHAPTER 20
MANAGEMENT CONSIDERATIONSAND SUMMARY
20.1 MANAGEMENT CONSIDERATIONS

The Acquisition Reform Environment

No one involved in systems acquisition, either within the department or as a supplier, can avoid considering how to manage acquisition in the current reform environment. In many ways, rethinking the way we manage the systems engineering process is implicit in reforming acquisition management. Using performance specifications (instead of detailed design specifications), leaving design decisions in the hands of contractors, delaying government control of configuration baselines—all are reform measures related directly to systems engineering management. This text has already addressed and acknowledged managing the technical effort in a reform environment.

To a significant extent, the systems engineering processes—and systems engineers in general—are victims of their own successes in this environment. The systems engineering process was created and evolved to bring discipline to the business of producing very complex systems. It is intended to ensure that requirements are carefully analyzed and that they flow down to detailed designs. The process demands that details are understood and managed. And the process has been successful. Since the 1960s, manufacturers, in concert with government program offices, have produced a series of increasingly capable and reliable systems using the processes described in this text. The problem is that, in too many cases, we have overlaid the process with ever-increasing levels of controls, reports, and reviews. The result is that the cycle time required to produce systems has increased to unacceptable levels, even as technology life cycles have decreased precipitously. The fact is that, in too many cases, we are producing excellent systems, but systems that take too long to produce, cost too much, and are often outdated when they are finally produced. The demand for change has been sounded, and systems engineering management must respond if change is to take place. The question then becomes: how should one manage to be successful in this environment? We have a process that produces good systems; how should we change the process that has served us well so that it serves us better?

At the heart of acquisition reform is this idea: we can improve our ability to provide our users with highly capable systems at reasonable cost and schedule. We can if we manage design and development in a way that takes full advantage of the expertise resident with both the government and the contractor. This translates into the government stating its needs in terms of the performance outcomes desired, rather than in terms of specific design solutions required; and, likewise, into contractors selecting detailed design approaches that deliver the performance demanded, and then taking responsibility for the performance actually achieved.

This approach has been implemented in DoD, and in other government agencies as well. In its earlier implementations, several cases occurred where government managers, in an attempt to ensure that the government did not impose design solutions on contractors, chose to deliberately distance the government technical staff from contractors. This presumed that the contractor would step forward to ensure that necessary engineering disciplines and functions were covered. In more than one case, the evidence after the fact was that, as the government stepped back to a less directive role
in design and development, the contractor did not take a corresponding step forward to ensure that normal engineering management disciplines were included. In several cases where problems arose, after-the-fact investigation showed that important elements of the systems engineering process were either deliberately ignored or overlooked.

The problem in each case seems to have been a failure to communicate expectations between the government and the contractor, compounded by a failure on the part of the government to ensure that normal engineering management disciplines were exercised. One of the more important lessons learned has been that, while the systems engineering process can—and should—be tailored to the specific needs of the program, there is substantial risk in ignoring elements of the process. Before one decides to skip phases, eliminate reviews, or take other actions that appear to deliver shortened schedules and lower cost, one must ensure that those decisions are appropriate for the risks that characterize the program.

Arbitrary engineering management decisions yield poor technical results. One of the primary requirements inherent in systems engineering is to assess the engineering management program for its consistency with the technical realities and risks confronted, and to communicate the findings and recommendations to management. DoD policy is quite clear on this issue. The government is not, in most cases, expected to take the lead in the development of design solutions. That, however, does not relieve the government of its responsibility to the taxpayers to ensure that sound technical and management processes are in place. The systems engineer must take the lead role in establishing the technical management requirements for the program and in seeing that those requirements are communicated clearly to program managers and to the contractor.
Communication – Trust and Integrity
Clearly, one of the fundamental requirements for an effective systems engineer is the ability to communicate. Key to effective communication is the rudimentary understanding that communication involves two elements—a transmitter and a receiver. Even if we have a valid message and the capacity for expressing our positions in terms that enable others to understand what we are saying, true communication may not take place if the intended receiver chooses not to receive our message. What can we do, as engineering managers, to help our own cause in ensuring that our communications are received and understood?

Much can be done to condition others to listen and give serious consideration to what one says; the opposite is equally true—one can condition others to ignore what he/she says. It is primarily a matter of establishing credibility based on integrity and trust.

First, however, it is appropriate to discuss the systems engineer’s role as a member of the management team. Systems engineering, as practiced in DoD, is fundamentally the practice of engineering management. The systems engineer is expected not only to integrate the technical disciplines in reaching recommendations, but also to integrate traditional management concerns such as cost, schedule, and policy into the technical management equation. In this role, senior levels of management expect the systems engineer to understand the policies that govern the program and to appreciate the imperatives of cost and schedule. Furthermore, in the absence of compelling reasons to the contrary, they expect support of the policies enunciated, and they expect the senior engineer to balance technical performance objectives with cost and schedule constraints.

Does this mean that the engineer should place his obligation to be a supportive team member above his ethical obligation to provide honest engineering judgment? Absolutely not! But it does mean that, if one is to gain a fair hearing for expressions of reservations based on engineering judgment, one must be viewed as a member of the team. The individual who always fights the system, always objects to established policy, and, in general, refuses to try to see other points of view will eventually become isolated. When others cease listening, the
communication stops, and even valid points of view are lost because the intended audience is no longer receiving the message—valid or not.

1 Ethical Issues in Engineering, Johnson, Ch. 15.
In addition to being team players, engineering managers can further condition others to be receptive to their views by establishing a reputation for making reasoned judgments. A primary requirement for establishing such a reputation is that managers must have technical expertise. They must be able to make technical judgments grounded in a sound understanding of the principles that govern science and technology. Systems engineers must have the education and the experience that justify confidence in their technical judgments. In the absence of that kind of expertise, it is unlikely that engineering managers will be able to gain the respect of those with whom they must work. And yet systems engineers cannot be expert in all the areas that must be integrated in order to create a successful system. Consequently, systems engineers must recognize the limits of their expertise and seek advice when those limits are reached. And, of course, systems engineers must have built a reputation for integrity. They must have demonstrated a willingness to make the principled stand when that is required and to make the tough call, even when there are substantial pressures to do otherwise.
Another, perhaps small, way that engineers can improve communication with other members of their teams (especially those without an engineering background) is to have confidence in the position being articulated and to articulate it concisely. The natural tendency of many engineers is to put forward their position on a subject along with all the facts, figures, data, and proofs that led to the position being taken. This sometimes results in explaining how a watch works when all that was asked was “What time is it?” Unless demonstrated otherwise, team members will generally trust the engineer’s judgment and will assume that all the required rationale is in place, without having to see it. There are times when it is appropriate to describe how the watch works, but many times communication is enhanced and time saved by providing a confident and concise answer.
When systems engineers show themselves to be strong and knowledgeable, able to operate effectively in a team environment, then communication problems are unlikely to stand in the way of effective engineering management.
20.2 ETHICAL CONSIDERATIONS
The practice of engineering exists in an environment of many competing interests. Cost and schedule pressures; changes in operational threats, requirements, technology, laws, and policies; and changes in the emphasis on tailoring policies in a common-sense way are a few examples. These competing interests are exposed on a daily basis as organizations embrace the integrated product and process development approach. The communication techniques described earlier in this chapter, and the systems engineering tools described in earlier chapters of this book, provide guidance for engineers in effectively advocating the importance of the technical aspects of the product in this environment of competing interests.
But what do engineers do when, in their opinion, the integrated team or its leadership is not putting adequate emphasis on the technical issues? This question becomes especially difficult in cases of product safety or when human life is at stake. There is no explicit set of rules that directs the individual in handling issues of ethical integrity. Ethics is the responsibility of everyone on the integrated team. Engineers, while clearly the advocates for the technical aspects of the integrated solution, do not have a special role as ethical watchdogs because of their technical knowledge.
Richard T. De George, in his article entitled “Ethical Responsibilities of Engineers in Large Organizations: The Pinto Case,”1 makes the following case: “The myth that ethics has no place in engineering has been attacked, and at least in some corners of the engineering profession been put to rest. Another myth, however, is emerging to take its place—the myth of the engineer as moral hero.”
This emphasis, De George believes, is misplaced. “The zeal of some preachers, however, has gone too far, piling moral responsibility upon moral responsibility on the shoulders of the engineer. Though engineers are members of a profession that holds public safety paramount, we cannot reasonably expect engineers to be willing to sacrifice their jobs each day for principle and to have a whistle ever by their sides ready to blow if their firm strays from what they perceive to be the morally right course of action.”
What, then, is the responsibility of engineers to speak out? De George suggests as a rule of thumb that engineers and others in a large organization are morally permitted to go public with information about the safety of a product if the following conditions are met:
1. If the harm that will be done by the product to the public is serious and considerable.

2. If they make their concerns known to their superiors.

3. If, getting no satisfaction from their immediate supervisors, they exhaust the channels available within the operation, including going to the board of directors (or equivalent).
De George believes that if they still get no action at this point, engineers or others are morally permitted to make their concerns public, but not morally obligated to do so. To have a moral obligation to go public, he adds two additional conditions to those above:
4. The person must have documented evidence that would convince a reasonable, impartial observer that his/her view of the situation is correct and the company policy wrong.

5. There must be strong evidence that making the information public will in fact prevent the threatened serious harm.
Most ethical dilemmas in engineering management can be traced to differing objectives and expectations in the vertical chain of command. Higher authority knows the external pressures that impact programs and tends to focus on them. Systems engineers know the realities of the ongoing development process and tend to focus on the internal technical process. Unless there is communication between the two, misunderstandings and late information can generate reactive decisions and potential ethical dilemmas. The challenge for systems engineers is to improve communication to help unify objectives and expectations. Divisive ethical issues can be avoided where communication is respected and maintained.
20.3 SUMMARY
The material presented in this book is focused on the details of the classic systems engineering process and the role of the systems engineer as the primary practitioner of the activities included in that process. The systems engineering process described has been used successfully in both DoD and commercial product development for decades. In that sense, little new or revolutionary material has been introduced in this text. Rather, we have tried to describe this time-proven process at a level of detail that makes it logical and understandable as a tool to plan, design, and develop products that must meet a defined set of requirements.
In DoD, systems engineers must assume the role of engineering managers on the program or project assigned. They must understand that the role of the systems engineer is necessarily different from that of the narrowly specialized functional engineer, yet it is also different from the role played by the program manager. In a sense, the role of the systems engineer is a delicate one, striving to balance technical concerns with the real management pressures deriving from cost, schedule, and policy. The systems engineer is often the person in the middle; it is seldom a comfortable position. This text has been aimed at that individual.
The first two parts of the text were intended to give the reader a comprehensive overview of systems engineering as a practice and to demonstrate the role that systems engineering plays within the DoD acquisition management process. Part 2, in particular, was intended to provide relatively detailed insights into the specific activities that make up the process. The government systems engineer may find him/herself deeply involved in some of the detailed activities included in the process, while less involved in others. For example, government systems engineers may find themselves very involved in requirements definition and analysis, but less directly involved in design synthesis. However, the fact that government engineers do not directly synthesize designs does not relieve them of the responsibility to understand the process and to ensure that sound practices are pursued in reaching design decisions. It is for this reason that understanding the details of the process is critical.
Part 3 of the book is perhaps the heart of the text from an engineering management perspective. In Part 3, we presented discussions of a series of topics under the general heading of System Analysis and Control. The engine that translates requirements into designs is defined by the requirements analysis, functional analysis and allocation, and design synthesis sequence of activities. Much of the role of the systems engineer is to evaluate progress, consider alternatives, and ensure the product remains consistent and true to the requirements upon which the design is based. The tools and techniques presented in Part 3 are the primary means by which a good engineering management effort accomplishes these tasks.
Finally, in Part 4, we presented some of the considerations beyond the implementation of a disciplined systems engineering process that the engineering manager must address in order to be successful. Particularly in today’s environment, where new starts are few and resources often limited, the planning function and the issues associated with product improvement and integrated team management must move to the forefront of the systems engineer’s thinking from the very early stages of work on any system.
This book has attempted to summarize the primary activities and issues associated with the conduct and management of technical activities on DoD programs and projects. It was written to supplement the material presented in courses at the Defense Systems Management College. The disciplined application of the principles associated with systems engineering has been recognized as one indicator of likely success in complex programs. As always, however, the key is for the practitioner to absorb these fundamental principles and then tailor them to the specific circumstances confronted. We hope that the book will prove useful in the future challenges that readers will face as engineering managers.
GLOSSARY
AAAV Advanced Amphibious Assault Vehicle
ACAT Acquisition Category
ACR Alternative Concept Review
AMSDL Acquisition Management Systems Data List
ASR Alternative Systems Review
AUPP Average Unit Procurement Price
AWP Awaiting Parts
BL Baseline
BLRIP Beyond Low Rate Initial Production
C4ISR Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance
CAD Computer-Aided Design
CAE Computer-Aided Engineering
CAIV Cost As an Independent Variable
CALS Continuous Acquisition and Life Cycle Support
CAM Computer-Aided Manufacturing
CASE Computer-Aided Systems Engineering
CATIA Computer-Aided Three-Dimensional Interactive Application
CCB Configuration Control Board
CCR Contract Change Request
CDR Critical Design Review
CDRL Contract Data Requirement List
CDS Concept Design Sheet
CE Concept Exploration
CEO Chief Executive Officer
CI Configuration Item
Circular A-109 Major Systems Acquisitions
CM Configuration Management
CM Control Manager
COTS Commercial Off-The-Shelf
CSCI Computer Software Configuration Item
CWI Continuous Wave Illumination
DAU Defense Acquisition University
DCMC Defense Contract Management Command
DDR Detail Design Review
DFARS Defense Supplement to the Federal Acquisition Regulation
DID Data Item Description
DoD Department of Defense
DoD 5000.2-R Mandatory Procedures for Major Defense Acquisition Programs (MDAPs), and Major Automated Information System Acquisition Programs (MAIS)
DoDISS DoD Index of Specifications and Standards
DSMC Defense Systems Management College
DT Developmental Testing
DTC Design To Cost
DT&E Developmental Test and Evaluation
EC Engineering Change
ECP Engineering Change Proposal
EDI Electronic Data Interchange
EIA Electronic Industries Alliance
EIA IS 632 Electronic Industries Association Interim Standard 632, on Systems Engineering
EIA IS-649 Electronic Industries Association Interim Standard 649, on Configuration Management
EOA Early Operational Assessments
FAR Federal Acquisition Regulation
FCA Functional Configuration Audit
FEO Field Engineering Order
FFBD Functional Flow Block Diagram
FIPS Federal Information Processing Standard
FMECA Failure Modes, Effects, and Criticality Analysis
FOT&E Follow-On Operational Test and Evaluation
FQR Formal Qualification Review
GFE Government Furnished Equipment
GFM Government Furnished Material
ICD Interface Control Documentation
ICWG Interface Control Working Group
IDE Integrated Digital Environment
IDEF Integration Definition Function
IDEF0 Integrated Definition for Function Modeling
IDEF1x Integration Definition for Information Modeling
IEEE Institute of Electrical and Electronics Engineers
IEEE/EIA 12207 IEEE/EIA Standard 12207, Software Life Cycle Processes
IEEE P1220 IEEE Draft Standard 1220, Application and Management of the Systems Engineering Process
IFB Invitation for Bid
IIPT Integrating Integrated Product Teams
IMS Integrated Master Schedule
IOC Initial Operational Capability
IOT&E Initial Operational Test and Evaluation
IPPD Integrated Product and Process Development
IPR In-Progress/Process Review
IPT Integrated Product Teams
JASSM Joint Air-to-Surface Standoff Missile
JROC Joint Requirements Oversight Council
JTA Joint Technical Architecture
KPPs Key Performance Parameters
LFT&E Live Fire Test and Evaluation
LRU Line-Replaceable Unit
LRIP Low Rate Initial Production
M&S Modeling and Simulation
MAIS Major Automated Information System
MAISRC Major Automated Information Systems Review Council
MTBF Mean Time Between Failure
MDA Milestone Decision Authority
MDAP Major Defense Acquisition Program
MIL-HDBK-61 Military Handbook 61, on Configuration Management
MIL-HDBK-881 Military Handbook 881, on Work Breakdown Structure
MIL-STD 499A Military Standard 499A, on Engineering Management
MIL-STD-961D Military Standard 961D, on Standard Practice for Defense Specifications
MIL-STD 962 Military Standard 962, on Format and Content of Defense Standards
MIL-STD-973 Military Standard 973, on Configuration Management
MNS Mission Need Statement
MOE Measure of Effectiveness
MOP Measure of Performance
MOS Measure of Suitability
MRP II Manufacturing Resource Planning II
MS Milestone
MTTR Mean Time To Repair
NDI Non-Developmental Item
NIST National Institute of Standards and Technology
NRTS Not Repairable This Station
OA Operational Assessment
OIPT Overarching Integrated Product Teams
OMB Office of Management and Budget
OPS Operations
ORD Operational Requirements Document
OSD Office of the Secretary of Defense
OT&E Operational Test and Evaluation
P3I Preplanned Product Improvement
PAR Production Approval Reviews
PCA Physical Configuration Audit
PDR Preliminary Design Review
PDRR Program Definition and Risk Reduction
PEO Program Executive Office
PM Program Manager
PME Program/Project Manager – Electronics
PMO Program Management Office
PMT Program Management Team
PPBS Planning, Programming and Budgeting System
PRR Production Readiness Review
QA Quality Assurance
QFD Quality Function Deployment
R&D Research and Development
RAS Requirements Allocation Sheets
RCS Radar Cross Section
RDT&E Research, Development, Test and Evaluation
RFP Request for Proposal
S&T Science and Technology
SBA Simulation Based Acquisition
SBD Schematic Block Diagram
SD&E System Development and Demonstration
SDefR System Definition Review (as referred to in IEEE P1220)
SDR System Design Review
SE Systems Engineering
Section L Instructions to Offerors (Portion of Uniform Contract Format)
Section M Evaluation Criteria (Portion of Uniform Contract Format)
SEDS Systems Engineering Detail Schedule
SEMS Systems Engineering Master Schedule
SEP Systems Engineering Process
SFR System Functional Review
SI Software Item
SI&T System Integration and Test
SOO Statement of Objectives
SOW Statement of Work
SPEC Specification
SRR System Requirements Review

SSA Source Selection Authority

SSAC Source Selection Advisory Council

SSEB Source Selection Evaluation Board

SSP Source Selection Plan

SSR Software Specification Review
SRU Shop-Replaceable Unit
STD Standard
SVR System Verification Review
S/W Software
T&E Test and Evaluation
TDP Technical Data Package
TEMP Test and Evaluation Master Plan
TLS Timeline Analysis Sheet
TOC Team Operating Contract
TPM Technical Performance Measurement
TPWG Test Planning Work Group
TRR Test Readiness Review
VV&A Verification, Validation, and Accreditation
WIPT Working-Level Integrated Product Team
DAU PRESS WANTS TO HEAR FROM YOU
Please rate this publication in various ways using the following scores:
4 – Excellent   3 – Good   2 – Fair   1 – Poor   0 – Does not apply
Name of this publication: Systems Engineering Fundamentals
This publication:
A. _____ is easy to read.
B. _____ has a pleasing design and format.
C. _____ successfully addresses acquisition management and reform issues.
D. _____ contributes to my knowledge of the subject areas.
E. _____ contributes to my job effectiveness.
F. _____ contributes to my subordinate’s job effectiveness.
G. _____ is useful to me in my career.
H. _____ I look forward to receiving this publication.
I. _____ I read all or most of this publication.
J. _____ I recommend this publication to others in the acquisition workforce.
How can we improve this publication? Provide any constructive criticism for us to consider for the future.
What other DAU/DSMC publications do you read?
— — — OPTIONAL — — —
Name/Title: ________________________________________________________________________
Company/Agency: __________________________________________________________________
Address: ___________________________________________________________________________
Work Phone: ( ____ ) ______________ DSN: ____________________ FTS: ___________________
Fax: __________________________________ E-mail: ____________________________________
___ Please mail me a brochure listing other DAU/DSMC publications.
___ Please send a free subscription to the following at the address above.
___ Acquisition Review Quarterly ___ Acquisition Reform Today Newsletter
___ Program Manager
Copy this form and fax it to DAU/DSMC Press at (703) 805-2917.