
Proceedings of the ASME 2009 International Design Engineering Technical Conferences & Design Theory and Methodology Conference
IDETC/DTM 2009
August 30 - September 2, 2009, San Diego, California, USA

DETC2009-86740

MULTIFUNCTIONAL ENTERPRISE READINESS: BEYOND THE POLICY OF BUILD-TEST-FIX CYCLIC REWORK

Victor Tang
Massachusetts Institute of Technology, Department of Mechanical Engineering
77 Massachusetts Avenue, Cambridge, Massachusetts, USA
phone 914-769-4040
[email protected]

Kevin N. Otto
President, Robust Systems and Strategy LLC
19 Edgewater Lane, Taunton, Massachusetts 02780, USA
phone 877-875-5087, fax 877-875-5087
[email protected]

ABSTRACT

NASA, the US government, and many companies attempt to manage the development and launch of new technology using Technology Readiness Levels (TRLs). Unfortunately, TRLs as generally defined are outdated and flawed, based on the extent of prototype or hardware use in the field. Urgency in improving TRLs drives early release of hardware before it is ready and initiates cyclic rounds of debugging and fixing failures in the field or laboratory. Such a build-test-fix approach to product development is now well documented to be inefficient and wasteful. We present updated definitions of technology readiness levels based on the lean and design-for-six-sigma product design methodology, a radical departure from the "build-test-fix" methodology of conventional TRLs. We argue that the iterative build-test-fix approach of cyclic rework is costly to product development as well as to downstream manufacturing and services. We call our updated TRL the L-TRL, for Lean TRL. Consistent with our L-TRL, we also present updated definitions for Manufacturing Readiness Levels (MRLs) that address lean and six-sigma manufacturing principles; we call them L-MRLs. Finally, we address a void in the literature and unveil definitions for Service Readiness Levels (SRLs).

1. INTRODUCTION

"Readiness" is a powerful general concept about an enterprise's capability to successfully mobilize its resources to execute specific functional processes, such as technology evaluation in product development, manufacturing's ability to meet demand with quality products, and services' ability to serve customers while maintaining quality, cost, and customer satisfaction. "Readiness levels" are standard definitions of readiness for common understanding, providing a lingua franca that lets different disciplines readily understand the readiness of components, subsystems, products, and entire systems. The purpose of readiness levels is sound: when a system is deemed readiness level 6, everyone can understand that it is new and untried, but nonetheless ready for launch into the field. This communication is possible without having to describe underlying details or issues that may be foreign to others. Unfortunately, we find that the commonly used definitions of readiness promote poor practices and run counter to well-proven six sigma and lean practices. In this paper, we discuss common readiness level definitions, explore their flaws, and offer improved alternatives for different disciplines.

1.1 TECHNOLOGY READINESS

NASA introduced a document defining Technology Readiness Levels (TRLs) as an assessment instrument to determine the extent to which the behavior and performance of a technology are sufficiently understood and characterized for use in a product system [1]. The goal is to help product development organizations adopt new technology with fewer unpleasant surprises, more confidence, and less risk [2]. For example, of 50 major weapons systems examined by the Government Accountability Office (GAO), only 15% began development with demonstrably mature technology; in those cases development costs increased only 1%, versus 41% for programs whose technologies were not ready [3]. The Department of Defense (DOD) now requires the use of TRLs for all its major acquisition programs [4, 5]. Many product development groups have integrated TRLs into their product development processes for risky products, with good results [6]. Experience with TRLs has resulted in improvements, refinements, and applications to new technical domains [2], [6-8].

However, important issues remain. For example, the TRLs do not fully address the difficulties of subsystem integration, the interactions among subsystems in a complex product-system, the uncertainty of technical difficulties in the progression to maturity, or comparative analysis techniques for alternative technologies [9]. To fill this gap, Sauser proposes methods for System Readiness Levels (SRL) and Integration Readiness Levels (IRL) [10]. Majumdar [11] proposes ways for TRLs to address interoperability of different systems within a system-of-systems (SOS).

But upon examining the definitions used in TRLs, we find that the TRL work is outdated. It is predicated on an outmoded paradigm of build-test-fix. That is, the iterative nature of design has historically been assumed to be a natural and acceptable feature of design methodology, e.g.:

"Teams make better decisions when they make several iterations based on approximate information" [12].

"Product generation and evaluation are synergistic; they form an iterative loop" [13].

We now know that an ideal design process is rather one in which all costly late iterations are eliminated [14, 15]. This is only possible by spending far more resources on early concepts, systems engineering, and the definition and characterization of critical requirements over a wide space of alternatives. Simply stated: characterize many alternatives, do not iterate on only one alternative. Spending resources early saves far more than the larger resources spent later on build-test-fix. In practice, the means to do this are the methods of robust design, design-for-six-sigma, and lean product development. Engineering results are proven more consistent, and potentially less costly, with these newer and proven engineering methods [16-19].

Premature iterations in cyclic design approaches are inherently risky and costly in time and dollars because of the commonly understood 80-20 heuristic of design decisions: in a traditional approach to design, 80% of the decisions are made in the first 20% of the resource expenditure. In this paradigm, consider the cost of making a design change, which is well documented [20] as in Table 1. The message of Table 1 is clear: a better design process is one that reduces late-phase iterations. Build-test-fix methods build in escalating future costs as the design freezes. The systems become more expensive to build and assemble, more difficult and time-consuming to debug and root-cause, and more time-consuming and expensive to fix.

Table 1. The cost of making a design change (Hamada 1996) [20]
TRL 5: $35 during the design phase
TRL 6: $177 before procurement
TRL 7: $368 before production
TRL 8: $17,000 before shipment
TRL 9: $690,000 on the customer site

Given these observations, it is no surprise that studies reveal substantial benefits when using lean design-for-six-sigma methods [21]. MARKET WIRE reported "hard" (tied to financial statements) and "soft" benefits over standard practice of, respectively, for each project undertaken:
- $500K and $200K for companies with greater than $1B in annual revenues,
- $200K and $100K on average across all companies, and
- $10K to $300K for the smallest companies.
A project is typically a 6-9 month design effort by a single engineer. The savings are real, instantiated by a paradigm shift in how product development is managed and executed.

Rather than building full-system prototype hardware to learn by "seeing how it works", lean DFSS is instead about first characterizing subsystems and components (not full systems), and capturing the full-system requirement targets as scalable equations over these subsystem and component changes. This approach greatly speeds the lengthy downstream design and manufacturing verification phases. In the early phases when product requirements are defined, it is impossible to foresee exactly what the target should be for every design requirement in order to provide a robust and reliable product. Traditional practice is to build a prototype, test it for gaps against requirements, and then fix it; alpha, beta, pre-production, and production prototypes are typically used. Often, these are even codified into standard work, which institutionalizes this poor behavior.

Lean DFSS is different. Early in development, all critical requirements are characterized as equations: design variables are changed in large sets of early subsystem and component prototypes, and the requirements are measured to fit wide-domain equations. The immediate impact is that there are far fewer problems during integration, since the characterization work on subsystems and components discovers and resolves many issues early, without the need for root-cause analysis on a full-system alpha prototype that works inconsistently. Equally important, later in the development process during verification activities, when the unknown-unknowns arise, such as supplier problems, tooling problems, or user-environment problems, the previously developed equations can be used to fix these problems very quickly.
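To make this characterize-first strategy concrete, the following minimal sketch (ours, not the paper's; the variable names, model form, and data are illustrative assumptions) fits a wide-domain transfer function for one critical requirement from component-level experiments over a design variable and a noise factor, and then reuses the fitted equation to re-center the design when a noise source shifts downstream:

```python
import numpy as np

rng = np.random.default_rng(0)

# Component-level designed experiment: vary a design variable x and a
# noise variable z over wide ranges (hypothetical data, for illustration).
x = rng.uniform(0.5, 2.0, 200)            # e.g., a stiffness setting
z = rng.normal(0.0, 0.3, 200)             # e.g., ambient temperature deviation
y = 3.0 + 2.0 * x - 1.5 * z + 0.8 * x * z + rng.normal(0.0, 0.05, 200)

# Fit a wide-domain transfer function y ~ b0 + b1*x + b2*z + b3*x*z.
A = np.column_stack([np.ones_like(x), x, z, x * z])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def transfer_function(x, z, b=beta):
    """Fitted critical-requirement response as a function of design and noise."""
    return b[0] + b[1] * x + b[2] * z + b[3] * x * z

# Downstream "unknown-unknown": the noise mean shifts (e.g., a supplier change).
# Instead of another build-test-fix cycle, solve the equation for the design
# setting that restores the requirement target.
target, z_shifted = 6.0, 0.4
x_grid = np.linspace(0.5, 2.0, 500)
x_fix = x_grid[np.argmin(np.abs(transfer_function(x_grid, z_shifted) - target))]
print(f"re-centered design setting: x = {x_fix:.3f}")
```

The point of the sketch is that the late-phase "fix" becomes an equation solve over already-characterized variables rather than another round of build-test-fix.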

The benefits of lean six-sigma product development are also visible in the work of scholars [22-24], where design for six-sigma, robust design, and lean principles are the core concepts. Dramatic results of lean PD are reported [22, 23]:
- 70% reduction in product development cycle time
- 80% reduction in design hours
- 33% reduction in prototype development
- 50% reduction in inspections
- 25% time reduction between engineering and manufacturing
- 50% reduction in product cost
- 90% conformance to design-to-cost targets
- 5-sigma design quality level
We will present TRL definitions that are predicated on the robust design paradigm. We call this the Lean TRL, or L-TRL.

1.2 MANUFACTURING READINESS

Manufacturing readiness gate reviews in industry are not new and have long been standard practice [25-28]. However, codifying the manufacturing readiness review concept into a manufacturing readiness level (MRL) specification is a more recent trend. The DOD and the defense community have developed definitions for Manufacturing Readiness Levels (MRLs) that are consistent extensions of the conventional TRLs [29, 30]. Scholars have also taken an interest in MRLs [31, 32]. Unfortunately, the MRLs are not only again based on the outdated policy of build-test-fix, but they also do not fully address the principles of lean manufacturing, which are widely researched and practiced [33]. Womack et al. [33] report dramatic results from lean manufacturing implementation:
- 50% reduction in human effort
- 50% less space
- 50% less investment in tools
- 50% fewer engineering hours and less time to develop products
Motivated by these results, we also propose updated MRL definitions that reflect lean manufacturing principles. We call this the Lean MRL, or L-MRL.

1.3 SERVICES READINESS

OECD statistics reveal that services are a major contributor to national economic wealth. Similarly, the International Labour Organization reported in 2007 that, for the first time in history, employment in services (40%) exceeds employment in agriculture (39.4%) and manufacturing (20.7%). Scholars note that economies are now undergoing a "servicization" transformation: physical products and services are increasingly bundled as an integral offering to meet customer needs [34]. We consider Services Readiness Level (SRL) definitions a new and important subject.

Services readiness thinking is emerging. Heslop et al. [35] consider market readiness, commercial readiness, and management readiness in conjunction with technology readiness. But their perspective is that of a senior business executive; the fine-grained texture and nuances of technology readiness are submerged in executive-level abstractions. As such, the work is only indirectly useful to engineering or product development. Marketing scholars investigate technology readiness through psychometric measures of technology users [36-38]. Their focus, however, is on the demand side of technology; while useful to understand, it is only of indirect applicability to engineering. Similarly, Lin and Hsieh [39] analyze the demand-side implications of technology readiness. Significantly, there is a substantial asymmetry in the rigor of services' conceptual foundations relative to technology and manufacturing. To address this deficit, IBM has articulated an overarching multi-disciplinary descriptive framework called Services Science, Management, and Engineering (SSME) [40].

Unexpectedly, unlike technology and manufacturing, SSME is silent, and the literature equally so, on a corpus of first-principles for services. Some scholars argue that without such first-principles, the services discipline will not attain the rigor that science, engineering, or operations management can bring to it [41, 42]. To address this gap, Tang and Zhou [43] have systematically derived and defined a set of first-principles for services, as well as strong epistemic rules to test them. Given that there are no existing service readiness level definitions, we call our SRL definitions simply SRL.

1.4 STATE-OF-THE-ART AND NEW DIRECTIONS

Table 2 summarizes our multi-functional readiness discussion.

Table 2. Conceptual foundations of readiness models
- Technology readiness: conventional conceptual foundation: build-test-fix; existing readiness model: TRL; new conceptual foundations: robust design, design for six-sigma, lean product development; new readiness model: L-TRL (Lean TRL).
- Manufacturing readiness: conventional conceptual foundation: mass production and its variants (people are interchangeable); existing readiness model: MRL; new conceptual foundations: lean manufacturing, DMAIC six sigma; new readiness model: L-MRL (Lean MRL).
- Services readiness: conventional conceptual foundation: customer satisfaction, consumer-product bias; existing readiness model: none; new conceptual foundation: services' first-principles; new readiness model: SRL.

The remainder of this paper is organized as follows. We begin with the definitions of the conventional TRL as a basis for discussing the problems of the build-test-fix methodology of technology and product development. We then propose an updated TRL based on the new methods of robustness, six-sigma, and lean. We follow with a similar review of the MRL literature and, as with TRLs, point out that predicating MRLs on the outdated build-test-fix model is not ideal. We then turn our attention downstream to the services function with a literature review on service readiness. We note that the literature is dominated by the demand side of services, focusing on satisfaction and the traditional concerns of services; the services development readiness perspective is barely visible. Following this review, we present our proposed definitions for updated TRLs and MRLs, and unveil SRL definitions: L-TRL, L-MRL, and SRL, respectively.

2. PRODUCT DEVELOPMENT AND TECHNOLOGY READINESS LEVELS

2.1 THE BUILD-TEST-FIX TECHNOLOGY READINESS LEVEL SYSTEM

We first examine the conventional TRL model, shown in Table 3. We concentrate on TRL 5 through TRL 8 and present what the specifications require as supporting evidence; these are in effect the exit criteria for a specific level [30].

Close examination of the supporting information required at TRL 5 through TRL 8 reveals that the specifications are grounded on repeated iterations of build-test-fix cycles. The wording is almost identical at each level. This process incents development teams to build prematurely in order to proceed with ad-hoc testing to discover problems and then fix them. In Sections 1.1 and 1.3 of this paper, we presented evidence that lean and robust PD are superior to a policy of build-test-fix iterations. The classification is a good idea; the definitions are not.

2.2 IMPROVED LEAN TRL DEFINITIONS

We propose an updated TRL based on a policy grounded in robust, six-sigma, and lean PD principles. The definitions of our L-TRL are shown in Table 4. Instead of build-test-fix cycles, our strategy is to "characterize-validate-control" the performance and its variability, to demonstrate robustness under progressively more demanding environments: from the laboratory, to relevant, representative, and finally customer operational environments. The process proceeds with successively more mature embodiments in which the technology will operate, from subsystems, to prototype systems, to increasingly valid systems.

Table 3. Traditional build-test-fix TRL definitions
TRL 1: Basic principles observed and reported.
TRL 2: Technology concept and/or application formulated.
TRL 3: Analytical and experimental critical function and/or characteristic proof of concept.
TRL 4: Component and/or breadboard validation in a laboratory environment.
TRL 5: Component and/or breadboard validation in a relevant environment. Analyses and explanation of differences from predictions. Identify problems encountered and refinements made to match expected goals.
TRL 6: System/subsystem model or prototype demonstration in a relevant environment. Analyses and explanation of differences between the testing and operational environments, and of differences in results from predictions. Identify problems encountered and refinements made to match expected goals. Discuss actions to move to the next level of readiness.
TRL 7: System prototype demonstration in an operational environment. Analyses and explanation of the differences in results from predictions. Identify problems encountered and refinements made to match expected goals. Discuss actions to move to the next level of readiness.
TRL 8: Actual system completed and qualified through test and demonstration. Results of testing of the final configuration under the expected range of environmental conditions. Analyses and results showing the system will meet operational requirements. Identify problems encountered and refinements made to match expected goals. Discuss actions to remove problems and move to finalizing the design.
TRL 9: Actual system proven through successful mission operations.

L-TRL 1 is again the baseline: there is as yet no statistical proof that the idea will work. The L-TRL 2 and L-TRL 3 goals are largely predicated on robustness, six-sigma, and lean PD principles: not only should the nominal response be characterized, but also the noise factors that cause variability. L-TRL 4 is also now much more demanding. In addition to requiring that the elemental pieces work when put together, our L-TRL requires a deeper understanding of how the product-system will behave, to permit very rapid adjustment in downstream activities. The behavior must be represented not just by the causal trees common to requirements management, but by complete transfer function equations. The importance of this augmented set of tasks cannot be overstated.

L-TRL 5 requires robustness work on the components. It also requires complete characterization of the noise conditions using the noise factors, and predictions of their impact on the transfer function. L-TRL 6 departs from the build-test-fix paradigm by intent and policy. Scalable transfer function equations must be developed and validated that relate all input design variables and noise variables to the component-subsystem-system hierarchy of output response variables representing performance. Performance is then adjusted in downstream activities simply by using a subset of the available design variables (tuning variables) [44], not by iterative redesign. Unlike the conventional TRL 7, where testing continues to uncover more problems to fix or more redesign to initiate, at L-TRL 7 the technology is ready for commercialization. At L-TRL 8 the technology is ready for manufacturing release, a non-event in our model, whereas in the traditional TRL 8 the build-test-fix iterations continue. By L-TRL 9 the product is in use by customers in their environment.
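As a hedged illustration in our own notation (not the paper's), the scalable transfer function and the sum-of-squares response variation required at L-TRL 5 and L-TRL 6 can be read as a response model over design variables and noise factors, together with a first-order variance transmission estimate:

```latex
% Sketch in our notation: y is a critical response, x_j are design variables,
% z_i are noise factors with variances \sigma_{z_i}^2.
y = f(x_1,\dots,x_m;\; z_1,\dots,z_k),
\qquad
\sigma_y^2 \;\approx\; \sum_{i=1}^{k}
  \left( \frac{\partial f}{\partial z_i} \right)^{2} \sigma_{z_i}^{2}.
```

Validating an equation of this form is what allows downstream performance to be adjusted through a small set of tuning variables rather than through iterative redesign.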

Table 4. Lean TRL definitions
L-TRL 1: Basic principles observed and reported. Equations are derived describing the technology physics.
L-TRL 2: Technology concept and/or application formulated. Noise factors identified. Control factors identified. Measurement response identified.
L-TRL 3: Technology performance behavior characterized. Range of noise factors identified. Range of control factors identified. Measurement response identified. Measurement system GRR baselined. Proof of concept completed.
L-TRL 4: Technology nominal performance validated. Integration of basic technological components to establish that they work together and produce the range of performance targets necessary. Integration uses "ad hoc" hardware in the laboratory. Transfer function equation predicts a validated nominal response. Measurement system GRR complete and capable.
L-TRL 5: Technology performance variability validated. Integration of basic technological components with reasonably realistic supporting elements to test the technology in a simulated environment. Robustness work on the technology components is complete. The sum-of-squares response variation impact of each varying noise factor is predicted by a validated transfer function equation.
L-TRL 6: Supersystem/system/subsystem interactions demonstrated in a relevant environment. Test a representative prototype system in a stress-test laboratory or simulated environment. Develop and validate scalable transfer function equations for the entire product as a system with the new technology. Equations include prediction of sum-of-squares performance variation and degradation for the entire product under applied off-nominal variation of the noise factors.
L-TRL 7: Product system demonstrated robust in a representative environment. The technology prototype is transferred to a product commercialization group, which scales it to fit within a real product application as an operational system. Demonstration of an actual full-product prototype in the field using the new technology. Transfer function equations for the particular product-system instantiation are completely verified. A limited set of remaining control factors is available to adjust the technology within the product against unknown-unknowns. The technology is as robust as any other re-used module in the product.
L-TRL 8: Product ready for commercialization and release to full manufacturing production. The technology has been proven robust across the noise variations of extreme field conditions, using hardware built with the production equipment purposefully set up and operated at its upper and lower control limits. Transfer to manufacturing is a non-event for the development staff if L-MRL processes are in place.
L-TRL 9: Experienced customer use. The product is in use in the customer's operational environment. This is the last validation aspect of true system development. The product and the technology perform to customer satisfaction in spite of uncontrollable perturbations in the system environment in which the product is embedded and other external perturbations. Transfer to the customer is a non-event for the engineering staff if L-TRL and L-MRL processes are in place.

4. LEAN MANUFACTURING MATURITY READINESS (L-MRL)

Successful release to a manufacturing facility to begin production of new products is a critical and costly decision.

Premature release of a facility before all problems are worked out can lead to very expensive rework and poor-quality products in the hands of the customer. Scholars define manufacturing readiness (MR) as follows: "MR is designated as the ability to harness the manufacturing, production, quality assurance, and industrial functions to achieve operational capability that satisfies the product needs in the quality and quantity needed at the best value as measured by the product" [32].

Similar to TRLs, manufacturing readiness levels have been defined to provide a means to readily understand the uncertainty and readiness of new manufacturing production technology. Also similar to TRLs, however, we consider conventional MRLs outdated because they too are grounded on the build-test-fix cyclic process of rework. MRLs need to be updated to reflect more recent work and best practices. Six sigma, lean manufacturing, and the best practices of the Toyota Production System (TPS) are the foundations on which we base our updated MRL definitions [33], [45, 46]. The core ideas of six sigma, lean manufacturing, and TPS are to systematically identify and eliminate waste, reduce variability through the elimination of defects, and improve continuously. The practice demands a strategy of relentless waste elimination, defect reduction, and customer pull, wherein all processes are guided by customer demand and high quality.

We propose the Lean Six-Sigma Manufacturing Readiness Level (L-MRL) definitions in Table 5.
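Two of the quantitative gates in the L-MRL definitions of Table 5 are takt time (L-MRL 4 and 5) and process performance above a 4-sigma level (L-MRL 8 and 9). A minimal sketch of both checks follows; the demand figures, specification limits, and short-term sigma-level convention are illustrative assumptions, not values from the paper:

```python
def takt_time(available_time_s: float, customer_demand_units: float) -> float:
    """Takt time = net available production time divided by customer demand."""
    return available_time_s / customer_demand_units

def sigma_level(mean: float, std: float, lsl: float, usl: float) -> float:
    """Short-term sigma level: distance from the process mean to the nearest
    specification limit, in standard deviations (equivalently, 3 * Cpk)."""
    return min(usl - mean, mean - lsl) / std

if __name__ == "__main__":
    # One shift with 7.5 h of net available time and demand of 450 units.
    print(f"takt time: {takt_time(7.5 * 3600, 450):.0f} s/unit")

    # A key quality characteristic with spec limits [9.8, 10.2] (hypothetical).
    level = sigma_level(mean=10.02, std=0.04, lsl=9.8, usl=10.2)
    print(f"sigma level: {level:.2f} (the L-MRL 8 gate asks for > 4)")
```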

Table 5. Lean MRL definitions
L-MRL 1: Manufacturing pre-concept research. Research new manufacturing concepts, technologies, processes, materials, and potential investments. Scan the manufacturing opportunity space. Manufacturing present at TRL processes. Value stream mapped.
L-MRL 2: Manufacturing concepts generated and down-selection done. Candidate concepts down-selected; implications for manufacturing feasibility and invention begin. Requirements on manufacturing technologies and materials analyzed. Cause-effect maps for all process steps done.
L-MRL 3: Manufacturing proof-of-concept characterized. Range of noise factors identified. Range of key process parameters (KPP) identified. Measured key quality characteristic (KQC) responses identified. Baseline measurement system GRR done. All conducted with experimental hardware in limited, not-integrated, but highly controlled environments. Cost modeling begins. Proof of concept completed.
L-MRL 4: Manufacturing proof-of-concept nominal performance validated. Key value stream loops demonstrated as predicted by transfer function equations in a lab environment. Nominal production capability demonstrated on KQCs, and key process parameter changes validate transfer function predictions. Takt times calculated. Investment case and cost model studies initiated. Supply chain studies and analyses initiated. Key constraints and risks documented. Process FMEAs identify risks.
L-MRL 5: Critical process variability and takt time. Manufacturing technology and cell work initiated. Process KQC sum-of-squares variability and the driving KPPs are characterized. Investment and cost models completed. Initial sources of waste identified and analyzed qualitatively. Key value stream loops' nominal performance validated.
L-MRL 6: Prototype manufacturing system performance variability in a relevant environment. Majority of the value stream defined and performance characterized with entire production-line prototypes. Technologies' producibility demonstrated. KPPs, including costs and variability under noise, characterized. Materials and tools proven. Personnel training requirements done. Supply chain infrastructure implementation initiated. Economic impact of waste known and programs initiated. Process FMEA risks reduced. Control plans available.
L-MRL 7: Pilot-line robustness demonstrated, ramp-up initiated. Hand-off from engineering is a non-event. Manufacturability and producibility demonstrated on the pilot line. Yield performance and variability characterized and validated against the pilot production. In the pilot, the supply chain, materials, technology, tools, supermarkets, and personnel perform with no major surprises. Risk management procedures work as planned. Unit cost learning curve on track. Yield variability is on track.
L-MRL 8: Ramp-up capability demonstrated, full-rate production initiated. All subsystems and systems are stable and meeting greater than 4-sigma quality levels on all key parameters. Waste elimination/mitigation actions of previous L-MRLs confirmed, extended to this L-MRL, and focused on overproduction wastes. Control plans complete.
L-MRL 9: At full-rate production. Manufacturing processes operated and controlled at greater than 4-sigma levels on all key process variables. Quality, costs, and learning curves on track. Continuous quality improvement and lean waste elimination actions on track. Control plans validated.

5. A SERVICES READINESS LEVELS MODEL (SRL)

Similar to technology product development and manufacturing, new service offers also need a taxonomy of readiness, for the same reason: to allow communication of readiness. Our SRL takes Tang and Zhou's [43] first-principles as a base. The interface between manufacturing and services was first identified by Quinn [47] as an important subject for research and a source of competitive advantage for firms. Womack called for a lean services action plan [48]. In response to these needs, we propose the SRL definitions in Table 6. Our intent is for researchers and practitioners to refine, improve, and build on them.

It is also our first-hand experience in services for technology-intensive products and systems that motivates us to propose SRL definitions. One of the co-authors held executive positions in engineering and services with IBM. He recalls his experience with the IBM AS/400 midrange server, a product that stands out within IBM as a technology and market success [49, 50]. The system was subjected to technology validation, system performance validation, and testing in customers' operational environments that was unprecedented in IBM or by any competitor.

Years in advance of product launch, 1,800 systems were shipped worldwide for validation and use in customers' real operational environments; 70 million lines of customer application code were tested, and over 200,000 programs and procedures were validated [51]. At first customer installations, the AS/400 was at L-TRL 9 and L-MRL 9. In 1990, the AS/400 division won the Malcolm Baldrige US National Quality Award [50].

He also recalls another defining services experience: the 1996 Atlanta Olympic Summer Games and his first-hand role in the 1998 Nagano Olympic Winter Games. The Atlanta Games are a blot on IBM's record in IT technology [52, 53]. In its eagerness to showcase technology, IBM used many products and systems that were only at L-TRL 6 during the Atlanta Games. The entire Olympic IT system was a system of multivendor systems (SOS); the SOS was at L-TRL 4. The SOS and the business process applications were never subjected to the rigor of end-to-end (value stream) operations in an Olympic competition environment. As a result, during the Games, the services organization had to perform heroic acts to keep the systems running. And they had not been well trained.

For the 1998 Nagano Olympic Games, however, F. Carrard, Director General of the IOC, declared that "Technology did win Gold in Nagano" [54]. What was different between the Atlanta Games and the Nagano Games? One, only products with a track record supported by demanding customer testimonials were deployed for the Winter Games. All products had to be at L-TRL 9 and L-MRL 9. Moreover, the services organizations had to be fully trained, and the services infrastructure had to be in place for all these products, i.e., at SRL 9. Two, the SOS underwent comprehensive validation: user operations, systems interoperability, system performance, and business process performance were all systematically validated in end-to-end (entire value stream) customer operational environments. Since there is only one Olympic Games, the customer operational environment was provided by World Cup and Olympic qualifying competitions around the world. The scope of this effort was extraordinary. The validation included 75 million lines of software code, 60,000 test cases, 5,000 PCs, over 3,000 additional servers and networking equipment, and about 85,000 pages of the IBM Nagano Olympics internet homepage.

Table 6. Service Maturity Levels (SRL) definitions
SRL 1: Services pre-concept research and awareness of L-TRL and L-MRL.
SRL 2: Services concept identified with implications and refinements. Based on L-TRL 2 and L-MRL 2, a services strategic direction is formulated. Many candidate service concepts generated. Concepts analyzed against key assumptions of L-TRL and L-MRL. "As-is" value stream mapping done.
SRL 3: Services concept characterized via a "to-be" value stream strategy. Down-selection complete, with strategic consistency with L-TRL and L-MRL. Strategy for physical infrastructure and potential reuse of elemental processes documented. Customer, market, competitive, and investment case studies analyzed.
SRL 4: Nominal specifications for key-value service processes complete. Key-value services specified and demonstrated with L-MRL 4. Changes to key service inputs demonstrate expected changes to service outcomes. Process interactions with manufacturing and with services' physical infrastructure, materials, and people demonstrated. Strategy for business practices and for fair and competitive terms and conditions complete.
SRL 5: End-to-end services processes specified. Material, information, and skills (labor) flows and dependencies are also specified. Requirements and dependencies on contractual terms and conditions finalized. Services KPPs are characterized and their nominal performance targets set. End-to-end variations in the KPPs demonstrate expected changes to outcomes.
SRL 6: Prototype service process nominal performance validated in a relevant environment and in the L-TRL 6 and L-MRL 6 environments using the services' physical infrastructure. Waste drivers and constraints identified.
SRL 7: Key services process variability validated in a representative environment and infrastructure at L-TRL 7 and L-MRL 7. Waste and constraint elimination/reduction programs initiated. KPPs measured and analyzed; their variations against noise conditions analyzed. Field personnel training begins for services ramp-up.
SRL 8: Services infrastructure complete and poised for large-scale field deployment. Waste drivers from the previous SRL level fixed and closed. Supply chain dependencies confirmed and scale-up initiated. Critical mass of field personnel trained and ready for customers. Services physical infrastructure is robust.
SRL 9: Field deployment to the entire customer base. Service delivery responsive to customer demand. KPPs from the field monitored and improvement programs initiated. Kaizen programs for services initiated. Star-burst* work initiated and on-going.

* "Star-burst" refers to a best practice in services. Quinn [55] first observed that, extending radially from an existing core competency (as in a star-burst), a service provider can create a variety of other services centered on that core competency. The idea is that once a service offering is proven effective, it is disaggregated into elemental offerings. It is an effective strategy to expand a firm's services portfolio.

Although the variability criteria of the SRL were not as completely satisfied, the SRLs were not merely at a "gentleman's SRL9". These and other similarly forceful experiences have strengthened our conviction that SRLs are overdue. We show our proposed definitions in Table 6.

6. CLOSING REMARKS

Readiness levels are a useful and effective means to communicate the maturity and risk of a new system to management and partners who lack disciplinary expertise in it. The new system can be new product technology, new production systems, or new services. Traditional readiness level definitions, however, need to be updated to modern practices that give much better indications of risk and readiness.

Technology readiness levels in the early phases need to be based not on the level of use in the field, but rather on how ready the technology is to be released to the field. This means assessing the level of mathematical and statistical characterization of the technology, and accounting for the breadth of that characterization over the domain of application and production variances. An L-TRL 6 technology is one that has not yet been implemented in any product but, due to the level of characterization, can confidently be made to work as well as a system commonly used in the field (L-TRL 8) through simple adjustments that are already characterized and understood. There are no surprises. The same holds true for manufacturing readiness levels, whether for new production equipment or a new production line.

Service readiness levels remain a new concept in the field. We propose a readiness level taxonomy, similar to the technology and manufacturing readiness levels, that we have found useful. We hope to spark improvements in the deployment of new services, as well as an interest in the fundamentals of effectively providing services.

ACKNOWLEDGEMENT

Our colleague Joern Hoppmann gave us valuable insight and comments in the preparation of this paper. It is a pleasure to acknowledge his assistance.

REFERENCES

1. Mankins, J.C. 1995. "Technology Readiness Levels: A White Paper". http://ipao.larc.nasa.gov/Toolkit/TRL.pdf, April 6. Accessed 8 February 2009.
2. DuBos, G.F., Saleh, J.H. and Braun, R. 2008. "Technology Readiness Level, Schedule Risk, and Slippage in Spacecraft Design". Journal of Spacecraft and Rockets, 45(4), July-August, pp. 836-842.
3. DuPont, D.G. 2008. "Proactive Prototypes". Scientific American, pp. 24-25.
4. Graettinger, C.P., Garcia, S., Siviy, J., Schenk, R.J. and Van Syckle, P.J. 2002. "Using the Technology Readiness Levels Scale to Support Technology Management in the DOD's ATD/STO Environments". Army CECOM, September.
5. DOD 5000. 2002. "Mandatory Procedures for Major Defense Acquisition Programs (MDAPS) and Major Automated Information Systems (MAIS) Acquisition Programs", April 5.
6. Minning, C.P., Moynihan, P.I. and Stocky, J.F. 2003. "Technology Readiness Levels for the New Millennium Program". Proceedings of the 2003 IEEE Aerospace Conference, 1, pp. 417-426.
7. Moorhouse, D.J. 2001. "Detailed Definitions and Guidance for Application of Technology Readiness Levels". Journal of Aircraft, 39(1), pp. 190-192.
8. Mackey, R., Some, R. and Aljabri, A. 2008. "Readiness Levels for Spacecraft Information Technologies". Proceedings of the 2003 IEEE Aerospace Conference, 1, pp. 391-398.
9. Sauser, B., Ramirez-Marquez, J., Henry, D. and DiMarzio, D. 2008. "A System Maturity Index for the Systems Engineering Life Cycle". International Journal of Industrial and Systems Engineering, 3(6), pp. 673-691.
10. Sauser, B.J. 2008. "Technology Integration Maturity Metrics: Development of an Integration Readiness Level". Submitted to the International Journal of Technology Management.
11. Majumdar, W.S. 2007. "System of Systems Technology Readiness Assessment". Master's Thesis, Naval Postgraduate School, Monterey, CA.
12. Ulrich, K.T. and Eppinger, S.D. 2000. "Product Design and Development". McGraw Hill, New York, p. 202.
13. Ullman, D. 1995. "The Mechanical Design Process". McGraw Hill, New York, p. 95.
14. Ward, A. 2007. "Lean Product and Process Development". Lean Enterprise Institute, Cambridge, MA.
15. Kato, J. 2005. "Development of a Process for Continuous Creation of Lean Value in Product Development Organizations". MS Thesis, Massachusetts Institute of Technology, Cambridge, MA.
16. Creveling, C., Slutsky, J. and Antis, D. 2002. "Design for Six Sigma in Technology and Product Development". Prentice Hall PTR, Upper Saddle River, NJ.
17. Taguchi, G., Chowdhury, S. and Taguchi, S. 2000. "Robust Engineering". McGraw Hill, New York.
18. Taguchi, G. 1993. "Taguchi on Robust Technology Development: Bringing Quality Engineering Upstream". ASME Press, New York.
19. Otto, K.N. and Wood, K. 2001. "Product Design". Prentice Hall, Upper Saddle River, NJ.
20. Hamada, H. 1996. "Quality: What It Means". European Community Quarterly Review, Third Quarter.
21. MARKET WIRE. 2005. "$300K Typical Gain from Design for Six Sigma Projects". Seattle, WA, July 18.
22. Oppenheim, B.W. 2004. "Lean Product Development Flow". Systems Engineering, 7(4), pp. 352-376.
23. Fiore, C. 2005. "Accelerated Product Development: Combining Lean and Six Sigma for Peak Performance". Productivity Press, New York.
24. Anand, G. and Kodali, R. 2008. "Development of a Conceptual Framework for Lean Product Development Process". International Journal of Product Development, 6(2), pp. 190-224.
25. Difranza, M.J. 1986. "R&D Transition to High Volume Production Excellence". 40th Annual Quality Congress, Anaheim, CA, May, pp. 62-65.
26. Benzin, R.W. 1991. "Evaluating Production Readiness: Improving Returns on the Research and Development Investment". Proceedings of the IEEE Annual Reliability and Maintainability Symposium, pp. 4-9.
27. Tung, T., DeArmond, L.V., Herald, R.F., Herning, P.E., Kalisher, M.H., Olson, D.A., Risser, R.F., Stevens, A. and Tighe, S.J. 1992. "State of the Art of Hg-melt LPE HgCdTe at Santa Barbara Research Center". Proc. SPIE, 1735, pp. 109-134. DOI: 10.1117/12.138616.
28. Ho, B. 1997. "Manufacturing Readiness Model for Fab Startup". Proceedings of the IEEE International Symposium on Semiconductor Manufacturing, 6-8 October, pp. A25-A28.
29. Acquisition Community Connection (a). 2009. "Manufacturing Readiness Assessments". Available: https://acc.dau.mil/CommunityBrowser.aspx?id=18231&lang=en-US. Accessed 12 February 2009.
30. Acquisition Community Connection (b). 2009. "Manufacturing Readiness Assessments". Available: https://acc.dau.mil/CommunityBrowser.aspx?id=18545&lang=en-US. Accessed 12 February 2009.
31. Pretorius, M.W. and de Wet, D. 2000. "A Model for the Assessment of New Technology for the Manufacturing Enterprise". Technovation, 20(1), pp. 3-10.
32. Genaidy, A. and Karwowski, W. 2008. "A Roadmap for a Methodology to Assess, Improve and Sustain Intra- and Inter-Enterprise System Performance with Respect to Technology-Product Life Cycle in Small and Medium Manufacturers". Human Factors in Manufacturing, 18(1), pp. 70-84.
33. Womack, J.P., Jones, D.T. and Roos, D. 1991. "The Machine That Changed the World: The Story of Lean Production". Harper Perennial, New York.
34. Müller, P. and Blessing, L. 2007. "Development of Product-Service-Systems: Comparison of Product and Service Development Process Models". 16th International Conference on Engineering Design (ICED), pp. 245-246.
35. Heslop, A.L., McGregor, E. and Griffith, M. 2001. "Development of a Technology Readiness Assessment Measure: The Cloverleaf Model of Technology Transfer". Journal of Technology Transfer, 26(4), pp. 369-384.
36. Parasuraman, A. 2000. "Technology Readiness Index (TRI): A Multiple-Item Scale to Measure Readiness to Embrace New Technologies". Journal of Service Research, 2(4), May, pp. 307-320.
37. Parasuraman, A. and Colby, C.I. 2001. "Techno-Ready Marketing: How and Why Your Customers Adopt Technology". Free Press, New York.
38. Tsikritsis, N. 2004. "A Technology Readiness-Based Taxonomy of Customers: A Replication and Extension". Journal of Service Research, 7(1), August, pp. 42-52.
39. Lin, C. and Hsieh, P. 2006. "The Role of Technology Readiness in Customers' Perception and Adoption of Self-Service Technologies". International Journal of Service Industry Management, 17(5), pp. 497-517.
40. Spohrer, J. and Riecken, D. 2006. "Services Science". Communications of the ACM, 49(7), July, pp. 31-34.
41. Paulson, L.D. 2006. "Services Science: A New Field for Today's Economy". IEEE Computer, August, pp. 18-21.
42. Chesbrough, H. 2004. "A Failing Grade for the Innovation Academy". Financial Times, 24 September.
43. Tang, V. and Zhou, R. 2009. "First-Principles for Services and Product-Services-Systems: An R&D Agenda". Submitted to the International Conference on Engineering Design, ICED'09, 24-27 August, Stanford University, Stanford, CA.
44. Otto, K. and Antonsson, E. 1993. "Tuning Parameters in Engineering Design". Journal of Mechanical Design, 115(1), pp. 14-19.
45. Breyfogle, F. 2003. "Implementing Six Sigma", 2nd edition. John Wiley & Sons, New York.
46. Ohno, T. 1995. "Toyota Production System: Beyond Large-Scale Production". Productivity Press.
47. Quinn, J.B. 1990. "Technology in Services: Rethinking Strategic Focus". Sloan Management Review, 31(2), Winter, pp. 79-87.
48. Womack, J.P. 2004. "An Action Plan for Lean Services". Lean Service Summit, Amsterdam, June.
49. Bauer, R.A., Collar, E. and Tang, V. 1992. "The Silverlake Project: Transformation at IBM". Oxford University Press, New York.
50. "Malcolm Baldrige National Quality Award 1990 Winner: IBM Rochester - AS/400 Division". Available: http://www.quality.nist.gov/IBM_90.htm. Accessed 26 February 2009.
51. Pine, B.J. 1989. "Design, Test, and Validation of the Application System/400 Through Early User Involvement". IBM Systems Journal, 28(3), pp. 376-385.
52. Djurdjevic, R. 1996. "Showcase of American IT Supremacy Turns into a 5-Ring Circus". Available: http://www.truthinmedia.org/truthinmedia/Columns/Atlanta96.html. Accessed 26 February 2009.
53. Schwartz, J. 1996. "Olympics Stung by Technology's False Starts". New York Times, July 22. Available: http://www.nytimes.com/specials/olympics/0722/oly-failure.html. Accessed 26 February 2009.
54. "1998 Nagano Olympic Winter Games: A Model of e-business Success". Available: http://www-7.ibm.com/services/nz/success/nagano2.html. Accessed 26 February 2009.
55. Quinn, J.B. 1992. "The Intelligent Enterprise". The Free Press, New York, p. 148.