
Bradley Camburn1

Engineering Product Development,

Singapore University of Technology and Design,

20 Dover Drive, Singapore 138682

e-mails: [email protected];

[email protected]

Brock Dunlap
Department of Mechanical Engineering,

The University of Texas at Austin,

204 E. Dean Keeton St. C2200,

Austin, TX 78712

e-mail: [email protected]

Tanmay Gurjar
Department of Mechanical Engineering,

The University of Texas at Austin,

204 E. Dean Keeton Street, C2200,

Austin, TX 78712

e-mail: [email protected]

Christopher Hamon
Department of Mechanical Engineering,

The University of Texas at Austin,

204 E. Dean Keeton Street, C2200,

Austin, TX 78712

e-mail: [email protected]

Matthew Green
Department of Mechanical Engineering,

Le Tourneau University,

2100 South Mobberly Avenue,

Longview, TX 75602

e-mail: [email protected]

Daniel Jensen
Department of Engineering Mechanics,

United States Air Force Academy,

U.S. Air Force Academy,

Colorado Springs, CO 80840

e-mail: [email protected]

Richard Crawford
Department of Mechanical Engineering,

The University of Texas at Austin,

204 E. Dean Keeton Street, C2200,

Austin, TX 78712

e-mail: [email protected]

Kevin Otto
Engineering Product Development,

Singapore University of Technology and Design,

20 Dover Drive,

Singapore 138682

e-mail: [email protected]

Kristin Wood
Engineering Product Development,

Singapore University of Technology and Design,

20 Dover Drive,

Singapore 138682

e-mail: [email protected]

A Systematic Method for Design Prototyping

Scientific evaluation of prototyping practices is an emerging field in design research. Prototyping is critical to the success of product development efforts, and yet its implementation in practice is often guided by ad hoc experience. To address this need, we seek to advance the study and development of prototyping principles, techniques, and tools. A method to repeatedly enhance the outcome of prototyping efforts is reported in this paper. The research methodology to develop this method is as follows: (1) systematically identify practices that improve prototyping; (2) synthesize these practices to form a guiding method for designers; and (3) validate that the proposed method encourages best practices and improves performance. Prototyping practices are represented as six key heuristics to guide a designer in planning: how many iterations to pursue, how many unique design concepts to explore in parallel, as well as the use of scaled prototypes, isolated subsystem prototypes, relaxed requirements, and virtual prototypes. The method is correlated, through experimental investigation, with increased application of these best practices and improved design performance outcomes. These observations hold across various design problems studied. This method is novel in providing a systematic approach to prototyping. [DOI: 10.1115/1.4030331]

1 Corresponding author.

Contributed by the Design Theory and Methodology Committee of ASME for publication in the JOURNAL OF MECHANICAL DESIGN. Manuscript received August 12, 2014; final manuscript received March 30, 2015; published online June 8, 2015. Assoc. Editor: Kristina Shea.


1 Introduction

1.1 Background. Sensitivity analysis of product development cycles shows that experimental prototyping plays a key role in determining outcome [1]. In this context, prototyping is the systematic development and testing of a new product design concept to establish its feasibility and enhance detail design of preproduction models. There is an inconsistent success rate in product development [1]. It has also been empirically validated that different approaches to prototyping can have a significant impact on both short- and long-term outcomes [2]. Yet, prototyping efforts are typically ad hoc and implemented through the experiential base of the developer. Planning tools are needed to help manage the uncertainty in these processes. Therefore, it is critical to pursue research on systematic approaches to prototyping [3]. Several individual techniques to plan prototyping efforts have been proposed [4–6]. However, clarification and empirical testing of these techniques must continue so that successful outcomes are more likely and reproducible [7].

A number of studies have identified key factors in a successful prototyping strategy. Drezner identifies a broad range of the factors in a strategic prototyping effort through a review of Department of Defense (DoD) projects. Identified factors include parallel concept testing, iterative testing, requirement specification, and use of planning. This study also reports that late stage prototyping may incur larger costs [8]. Moe has proposed a partitioning approach by which the designer can plan a specific approach for each factor. The approach provides a framework to select between single and multiple iterations, single and multiple design concepts, and flexible or rigid scheduling [4]. Christie expands on this work further to identify additional guidelines, in the form of a directed list of prompts to encourage consideration of these techniques in a prototyping effort [5].

Empirical research can be used to support development of a systematic method. Viswanathan provides a list of prototyping best practices from an in situ study of designers [9]. Yang observed that prototypes with fewer parts are more successful [10]. Jang conducted empirical studies to find that successful teams employed physical prototypes more often and handwritten notes less often. Studies show that earlier prototyping is correlated with higher final design performance [11,12]. In a general sense, early prototyping occurs in the first half of a design phase. Additionally, reduced time spent on each individual prototype actually correlates with improved design outcome [13]. There are also studies of prototype use in ideation [14], fixation [15], and analogy use [16].

While the above studies indicate the potential to strategically allocate resources and warn about factors that will facilitate and inhibit design outcomes, currently there is no widely applicable and accepted method for assisting designers in making a variety of prototyping decisions to build a strategy. We propose such a method based on synthesizing identified best practices, with the intent of improving the likelihood of success of a product's development. The method is tested in a number of experimental scenarios. Thus, the motivation of this research is to provide systematic means to improve prototyping outcomes in a scientifically repeatable way. Within the scope of this work, we will consider prototyping for the purpose of concept development and functional testing. This particular work will not explore the relationships between early stage prototypes as used in concept development (ideation) and later stage prototypes such as those for preproduction testing. Furthermore, this work is oriented toward product design in the electromechanical domain.

1.2 Research Motivation and Approach. As outlined in the introduction, there is a need for the development and critical evaluation of strategic prototyping methods. This work, in particular, explores a process to develop prototypes that meet measurable design requirements with higher repeatability than a traditional, ad hoc approach. This process should also ideally meet these requirements with less expenditure of time and resources. To achieve this goal, several research hypotheses are formulated to guide the research effort:

(1) Prototyping techniques, taking into account studied prototyping principles, correlate with successful design outcomes.

(2) These techniques can be induced in designers' activities through exposure to a developed design method.

(3) Use of this method correlates positively with direct measures of prototype performance.

To evaluate the first research hypothesis, a literature review is employed to discover techniques for prototyping that are correlated with success. The results of various studies are synthesized into heuristics that provide an understanding of best practices. The heuristics form the basis of a strategic prototyping method. This method and the heuristics are evaluated via experimentation. Research hypotheses two and three are likewise addressed through experimentation. Relevant data are collected during each experiment so that the three research hypotheses can be tested directly. This approach is outlined in Fig. 1. This work represents the expansion of information shown in previous works by the authors [17–20] with new analyses and synthesis.

1.3 Identification of Heuristics and Formation of Method. To generalize the concept of prototyping best practices, we introduce the term individual technique and adapt the term prototyping strategy from Drezner.

Fig. 1 Representation of overall research method employed in this study


Individual techniques provide a means of enhancing the prototyping process. However, the applicability of each individual technique is dependent on context. There is also the possibility of variable implementation of the individual techniques (e.g., 1:100 scale versus 1:5 scale). A prototyping strategy represents the specific plan for implementing prototyping across a product development effort [8]. A planned approach consisting of a combination of choices for applying several individual techniques can be used in the formation of a prototyping strategy. For example, there is a range of possible sizes or scales for a prototype. There is also a range in the potential number of times a prototype may be improved or altered and then iteratively rebuilt and tested. This section reports key findings from the literature, exploring empirically validated best practices in detail. This literature has been critically synthesized to form heuristics that describe best practices, which in turn form the basis for a systematic prototyping method. Discussion of the synthesis process and presentation of the method itself follows at the end of the section. A set of variables that quantify the individual techniques, as well as identified metrics of performance, are listed in Table 1. Iterative and parallel prototyping are intended to directly lead to improved performance. Scaling, subsystem isolation, and requirement relaxation are complementary techniques intended to reduce cost and time expenditure. Thus multiple concepts and iterative testing can be a viable avenue, even in cases of limited time and budget. Furthermore, as the literature has shown that faster prototyping can reduce fixation [13], these techniques could potentially lead to more novel concepts as well.

1.3.1 Iteration. In this context, iteration is defined as the cycle of building, testing, and improving as applied to a single design concept. A basic, small-scale illustration of this concept is the iterative design of a conveyor belt geometry to reduce stress (Fig. 2(a)). In the U.S. DoD case study of full-scale projects, Drezner identifies that iteration cycles can systematically advance design maturity [8]. Glegg suggests three iterative phases in product development [21]. Ulrich identifies that a firm may choose between building prototypes sequentially or in parallel with different cost, benefit, and time implications. Accordingly, the number of iterations may be given by the timeline divided by the expected duration of the prototyping cycle [21]. Thomke observes from an industry study that there is a correlation between the fabrication method and number of iterations pursued [2]. This observation is confirmed by Viswanathan in an experimental setting where design teams given a less complex fabrication process produced more iterations than a control with a complex process [13]. Dow observes that teams in an iteration condition produced prototypes with higher performance versus a control without iteration, in a set timeframe [22].
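
Written as a simple planning relation (the symbols below are our shorthand for the quantities named in this heuristic, not notation from the cited reference), the suggested number of iterations is

\[ n_{\mathrm{iter}} \approx \frac{T_{\mathrm{phase}}}{t_{\mathrm{cycle}}}, \]

where T_phase is the time available for the prototyping phase and t_cycle is the expected duration of one build-test-revise cycle. For example, a 12-week design phase with an expected 2-week cycle would suggest planning for roughly six iterations.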

1.3.2 Parallel Concepts. The exploration of parallel design concepts is defined as the fabrication and testing of two or more diverse or fundamentally different core design concepts to achieve the same function or affordance properties during one product development project. For example, parallel testing can aid concept selection for an inverted pyramidal prosthesis structure (Fig. 2(b)). Badri identifies, from an industry study, that the use of multiple research teams working concurrently enhances the design outcome [1]. A firm has a choice of developing prototypes sequentially or in parallel with different cost, benefit, and time implications [23]. An industry case demonstrates that parallel prototyping can permit design space exploration in a time constrained context [3]. Thomke finds that parallel testing is common in industry; however, the integration of test results is critical [24]. Christie observes that parallel prototypes are critical for design feedback [5]. This is validated by multiple design studies, which show that groups testing parallel design prototypes achieve higher final prototype performance [25,26]. Dahan and Mendelson derive equations for determining the number of concepts to test in parallel based on cost analysis [27]. Riek and Moe suggest the number of parallel prototypes may be proportional to the total budget divided by the expected prototyping cost [3,4].
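
Expressed in the same spirit (again with our own symbols rather than the cited authors' notation), this budget heuristic takes the form

\[ n_{\mathrm{parallel}} \propto \frac{B_{\mathrm{total}}}{c_{\mathrm{prototype}}}, \]

where B_total is the total budget allocated to prototyping and c_prototype is the expected cost of building and testing one concept.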

1.3.3 Scaling. A scaled prototype, in this context, is one thathas been physically reduced or increased in size while retainingthe original proportions and relationships between componentsand the underlying working principles of the system. Christieobserves that scaled models may enhance the feasibility of proto-typing prior to manufacture [5]. For example, material cost andtime can be saved in design of a fluid chamber by validating inter-faces with a scaled model (Fig. 3(a)). Moe proposes that strategicmethods are required to guide the choice of when to scale a design[4]. Viswanathan observes that loads and related boundary condi-tions should also be scaled accordingly with geometric scaling[9]. This requirement can be achieved through the use of dimen-sionless parameter groups, scaling laws (similitude), or mappingsof performance parameters across functional testing. One form ofsimilitude employs multiple parallel models to achieve high fidel-ity predictive modeling [28–30].
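
As one elementary similitude example (ours, for illustration only, and not drawn from the cited studies): if a structural prototype is geometrically scaled by a factor \lambda, keeping the working stress \sigma = F/A unchanged requires scaling the applied load with the cross-sectional area,

\[ F_{\mathrm{model}} = \lambda^{2} F_{\mathrm{full}}, \]

so a 1:5 scale model (\lambda = 0.2) would be load tested at roughly 4% of the full-scale load.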

1.3.4 Subsystem Isolation. Subsystem isolation, in this context, refers to a prototype that models the performance and function of a single subsystem in isolation, rather than a full design concept.

Table 1 Metrics, both for process and outcome

Specific process variables: number of iterations; number of parallel concepts; use of scaling; use of subsystem isolation; use of requirement relaxation; use of virtual prototypes

Outcome assessments: performance of each prototype; time to build each prototype; cost of each prototype; adherence to suggested approach

Fig. 2 (a) Use of iteration in design for spoke holes on a conveyor belt: schematics of four tested design generations and image of the final design. Iteration was employed to achieve target performance. (b) Parallel load testing of three strut design concepts for a prosthetic limb. Parallel testing highlights differences in performance.


For example, it might save effort to produce one or two components and test the design of a joint without producing the entire system (Fig. 3(b)). Christie notes that it may be beneficial to break a complex or large design into smaller subsystems that can each be approached with a local strategy. Effective re-integration of the subsystems is critical [5]. Drezner elaborates that large systems (e.g., large naval vessels or complex satellites) are generally too costly to prototype at a system level. Subsystem isolation can reduce uncertainty in these cases. A meta study of DoD designs found that sufficient testing data was obtained for about 60% of the cost of a fully integrated prototype [8]. This approach also reduces time to market.

1.3.5 Requirement Relaxation. Relaxation of requirements indicates that a prototype is intentionally constructed to meet a reduced percentage or subset of the functional requirements. For example, a three-dimensional whiteboard product may be prototyped in poster-board to test size and usability features (Fig. 3(c)). Relaxed requirement prototyping is a form of low fidelity prototyping, in which fidelity to final requirements is reduced. Low fidelity prototyping is a more general technique in which any aspect(s) of the design could be relaxed. Christie details that early prototypes should achieve threshold (as opposed to objective/final) design requirements. This approach also permits earlier stage results. To achieve useful information, requirement thresholds should be defined prior to fabrication [5]. Thomke and Bell explore analytical models to suggest that each test reduces some uncertainty in design performance, and removes a subsequent potential cost of an error. Thus, significant savings may be achieved through multiple low fidelity prototypes [31]. Low fidelity prototypes reduce cost to permit faster prototyping [8]. This is a beneficial outcome, given that faster prototyping is correlated with reduced design fixation [13].

1.3.6 Virtual Prototyping. A virtual prototype evaluates some aspects of the real-world behavior of a design via simulation on a computational platform [32]. Virtual prototyping is typically implemented through the use of analytical models, computer-based simulation, and visualization techniques. An example is virtual modeling of a Baja vehicle for component design and structural analysis (Fig. 3(d)). Three key benefits of virtual prototyping may include reduced cost of testing [33]; the opportunity to collect data that would be infeasible with a physical model due to geometric constraints [34]; and the synthesis of design and testing [35]. An example would be simulation of millions of use cycles to identify component fatigue failure modes [36]. There are limitations to the virtual approach. Virtual models preclude time expenditure on interaction with the tool interface [37]. The outcome may also be critically dependent on the selection of modeling parameters [38]. A virtual model will only evaluate phenomena that are directly encoded in the simulation [3]. However, advances in multiphysics modeling and high-speed computing are leading to very accurate virtual modeling tools. Sefelin reports that virtual prototypes may in some cases increase flexibility, and reduce effort over physical prototypes [33]. Ulrich and Eppinger propose that a designer select between virtual or physical prototyping by comparing the ratio of accuracy to cost [21]. Christie proposes that the decision to implement either physical or virtual prototypes should be a strategic one [5].

1.3.7 Outcome Assessments. Several outcome assessments are identified that quantify the design impact of implementing the variables. These metrics are also employed in the empirical studies. The first three outcome assessments (performance, time to build each prototype, and cost of each prototype) are used based on the precedence of previous literature. Performance, i.e., the degree of meeting design requirements, is a direct measure of the success of the projects. Time, i.e., person-hours spent on fabrication or blueprint design, and cost, i.e., budget spent in dollars, are metrics which may be direct measures if such requirements exist for the project. Time and cost also help to evaluate the prototyping process. Finally, adherence to the suggested approach (from the method) is a self-reported measure of how closely a participant observed their team to utilize the method or whether a different approach was chosen instead. Adherence to the suggested approach is used to assess how closely use of the method is connected to increases or decreases in the other metrics.

1.4 Summarized Heuristics and Systematic Method. The observations from the literature were critically evaluated and combined to develop a set of conditions to guide when and to what extent each of the techniques has a high potential to positively influence the design process. Iteration and parallel concept testing directly improve performance, while virtual prototypes, scaling, subsystem isolation, and requirement relaxation can permit reduction of cost and time expenditures without loss of performance. These latter four techniques are critical because they may enable iteration and parallel concept testing, even in situations of constrained budget and time.

Figure 4 represents the simplified systematic method. The method is a systematic tool that can be used as a source of guidance for the development of a prototyping strategy. The method synthesizes these best practices. There are potentially a number of different ways to do this; other format options might include flowcharts, equations, or prompts to guide team discussion. The key is to provide designers with a basic approach for considering and evaluating key insights of the literature findings. The method expands beyond the traditional view of a stage-gate process with a "proof of concept," then "alpha" and "beta" level prototypes to include a large potential "prototyping space," defined by six independent variables that represent independent techniques. The method is a translation of the empirical evidence found in the literature for each of the six key techniques into a form that can be applied in general to a design problem. The information is presented to the designer as several subquestions under each technique with associated Likert scales for response. The average of these subquestion scores then provides a single value that indicates a suggested approach.

Fig. 3 (a) Scaled (left) and full (right) model of fluid chamber for testing interfaces, (b) isolated prototypes for joint interfaces to reduce effort, (c) relaxed requirement model for a three-dimensional white board, to test usability, and (d) virtual design of a Baja vehicle for structural analysis and part design


To determine the suggested approach for an individual technique, the designer responds to each Likert prompt under a heading and then takes the average of those scores. The average Likert score is then mapped onto the given scale, which provides a unique approach for each technique. Note there is a relevant magnitude which indicates the degree to which that approach is likely to benefit the process. When a neutral response is identified, the designer must reconsider the questions until an indication toward either approach is given. This combination of multiple prompts allows for competing elements of the design context to be weighed against each other and still permit a clear plan for each technique. From the method, there are 4^6 implementable unique strategies that could be indicated. This approach saves significant time over memorizing and attempting to individually weigh all of the information identified in the literature review. Typically, the method is presented by providing examples as shown in Figs. 2 and 3, and then the method in Fig. 4 is physically provided to the designers. The suggested approach is then translated into the context of a specific problem as a complete prototyping strategy through the efforts of the designer. This concept tends toward a dimensionally enhanced approach as compared to the traditional strategy of achieving stage gate objectives in sequence (from alpha level to beta level prototyping, etc.).
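
The scoring step can be summarized in a few lines of code. The sketch below is illustrative only, not the published survey tool: the 1-5 scale, the averaging of sub-question scores, and the rule to reconsider neutral responses follow the description above, while the specific neutral band, labels, and function name are our own assumptions.

    def suggest_approach(likert_scores, low_label, high_label, neutral_band=0.5):
        """Average the sub-question responses for one technique and map the
        result to a suggested approach with a rough indication of magnitude."""
        if not likert_scores:
            raise ValueError("at least one Likert response (1-5) is required")
        avg = sum(likert_scores) / len(likert_scores)
        midpoint = 3.0  # center of the 1-5 Likert scale
        if abs(avg - midpoint) < neutral_band:
            # Neutral responses prompt the designer to revisit the questions
            # until one approach or the other is indicated.
            return avg, "neutral - reconsider the sub-questions"
        label = high_label if avg > midpoint else low_label
        strength = abs(avg - midpoint) / (5.0 - midpoint)  # 0..1 indication strength
        return avg, "%s (indication strength %.0f%%)" % (label, strength * 100)

    # Hypothetical responses to the iteration sub-questions:
    print(suggest_approach([4, 5, 4],
                           low_label="plan a single build",
                           high_label="plan iterative builds"))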

1.5 Case Studies: Prototyping Efforts. The method guides application of each technique. Execution of a complete strategy can become complex. To elaborate, two case studies are reviewed in which strategic prototyping was applied. The first case is the architectural renovation of a historic structure in Beijing. The second is a precommercial product from Singapore University of Technology and Design (SUTD).

A large scale architectural renovation project is currently underway in Beijing. Though disclosure agreements require that details remain anonymous, the overall strategy may be discussed. Production of a full scale prototype is impossible in this case for two reasons. First, it would exceed the budget of the project. Second, it is too dangerous to risk damaging the original structure. To address these issues, a more systematic strategy was adopted for the final prototype. The design was segmented into two isolated subsystems. The first was an iteratively developed virtual model representing the full system, including the original structure as well as the enhancements. It also simulates external inputs such as sunlight. Simultaneously, a series of parallel subsystem prototypes were pursued using production material samples to prototype each interface. For example, a single tile of the holographic flooring was fit to the stone wall. This strategy permitted detailed prototyping of the design with reduced risk.

Considering a second case, Idea cube is a handheld three-dimensional whiteboard in a precommercial product development phase at SUTD. It can be used for concept sketching or for representational drawing of objects. The cost of prototyping this design at full scale is a fraction of the project budget. However, the initial prototype was produced with relaxed requirements to permit rapid functional exploration. Then, the effort was split into three isolated subsystem prototyping efforts. The cube itself was iteratively refined over several full scale, full requirement prototypes. The second subsystem was a small sculpture that fits inside the cube to demonstrate the impact of viewing angle. For this design, four different parallel concepts were tested at full scale. The selected sculpture was also iterated several times. The third subsystem was custom packaging. Dimensions were set early in the project to permit smooth integration of the final design.

2 Experimental Exploration and Validation

To investigate the proposed method and its elements, it was necessary to construct several complementary experiments. These experiments were each designed in a similar manner with variations in the type of design requirements and length of study to permit a rich understanding of the effects of the method. There were three shorter-term, design challenge type experiments, and one long-term in situ study. Multiple design problems were explored. This experimental approach reduces the potential that results could be skewed by the selection of a particular design problem.

Fig. 4 Survey tool for implementation of prototyping method


A high-level, contrasting overview of the unique objective for each study is presented in Table 2. In particular, the binary and controlled performance measure studies permit a clear quantification of performance effects, while the in-class study provides a deeper view of cost and time expenditure factors for the scaling, subsystem isolation, requirement relaxation, and virtual prototyping strategies. Sections 2.1 to 2.4 detail objectives, data recording techniques, metrics, design problems, participants, time allotments, and material allotments for each study. Note that there were no repeat participants between any of the four experiments. Also, for each study, participants consisted of a random mixture of male and female junior and senior university students majoring in either mechanical engineering or industrial design. The number of participants varies slightly between studies because participation was voluntary.

2.1 Study 1—Binary Design Objective. In the first two controlled studies, an experimental group was exposed to the method, while a control group was left to develop their own approach to prototyping (as typically occurs in most design contexts). The first study evaluated the impact of the method on performance for a "binary" design requirement. That is, this requirement simply had a pass/fail condition. This generally mimics binary (hard) design requirements. For this study, a researcher observing each team measured and recorded the performance distance, in this case using a caliper. The participants kept a log of their iteration and concept testing. There were two conditions: experimental (with the method) and control (without the method).

Teams were asked to build a device that starts within a bounded area on the floor and then moves a given object, a U.S. quarter-dollar coin, to cover a target. It was required to stay within bounds and operate using the system's stored energy (i.e., teams could not push the device to actuate it; rather, they were required to pull a release pin). Figure 5 illustrates the problem. There were 15 min for introduction of the problem, followed by 30 min for method instruction (experimental group) or free ideation (control); there were then 180 min for building. Each team was supplied with a set kit of basic prototyping supplies. There were 36 participants in this study, equally divided between experimental and control groups. Participants completed the design problem in teams of two persons.

2.2 Study 2—Open Design Objective. The second controlled study evaluated the impact of the method on an open-ended design performance requirement. This requirement had no theoretical limit on performance. This generally relates to variable (soft) type design requirements. For this study, a researcher observing each team recorded the performance of each iteration (in this case distance, determined by tape measure), changes made, and time of testing. There were two conditions: experimental (with the method) and control (without the method).

The second controlled study is complementary to the first, with the only difference being that teams were required to move an object, a piece of paper, as far as possible down a hallway. Again, the device was required to start within a bounded box, operate using stored energy, and not pass over the sidelines (see illustration in Fig. 6). The problem introduction was 15 min long, followed by 5 min for method instruction (experimental group) or free ideation (control); there were then 50 min for building. Each team was supplied with a premade kit of materials. There were 64 participants in this study, equally divided between experimental and control groups. Participants completed the design problem in teams of two persons.

2.3 Study 3—Virtual Prototyping. The third controlled study evaluated differences between physical and virtual prototypes. Some design problems are impractical to solve solely with virtual or physical modeling, respectively. However, the objectives of this study were to evaluate performance differences for a problem that could readily be solved by either, and whether the method encourages virtual prototyping or not.

Table 2 Overview of studies

Study 1—Binary design objective
  Unique objective: Explore a problem with a strict pass/fail or "binary" measure of performance.
  Recorded metrics: Performance of final build; number of iterations; number of concepts.

Study 2—Open design objective
  Unique objective: Explore a problem where there is no practical upper limit on design performance.
  Recorded metrics: Time of each build; performance of each build; number of concepts; number of iterations.

Study 3—Virtual prototyping
  Unique objective: Explore a problem for which either physical or virtual prototypes appear applicable.
  Recorded metrics: Time of final build; performance of final build.

Study 4—Capstone design
  Unique objective: Explore a context in which the prototyping efforts occur over an extended period.
  Recorded metrics: Performance of each prototype; use of scaling, subsystem isolation, requirement relaxation, and virtual prototyping; cost of each prototype; time to fabricate each build; adherence to suggested approach.

Fig. 5 (left) Depiction of design problem and (right) example prototype from study 1—binary design objective. This design acts like a drawbridge, dropping the coin into place.


For this study, a researcher observing each team recorded the time to complete the design. The researcher also measured and recorded the performance of the physical prototypes, while the virtual prototype performance was extracted by the researcher from the simulation software. Participants were provided with the method and were allowed to subsequently choose physical or virtual prototyping, but not both. Thus, there were two conditions, virtual prototyping and physical prototyping, both employing the method.

In this third controlled study, participants were required to design a four-bar linkage that traces a path. The design performance metric for this problem was the ratio of horizontal (x) motion to vertical (y) motion. The primary objective was to obtain the highest ratio in the allotted time. Figure 7 provides details. Individuals were introduced to the design problem for 15 min, then given the design method (Fig. 4) and allowed to choose between virtual or physical prototyping. Individuals were separated according to their selection and given a 5 min introduction to either the physical prototyping tools or the linkage design software. Each participant was provided with either a computer terminal and access to the linkage simulator for virtual prototyping or a set materials kit for physical prototyping. Then all participants were given up to 50 min to prototype. There were 32 participants in this study. Each participant completed the design problem individually.

2.4 Study 4—Capstone Design. This study consisted of providing the method to students in a senior mechanical engineering capstone design course. The objective of the in-class study was to evaluate the impact of the method and the use of the various approaches supported by the method. In particular, scaling, subsystem isolation, requirement relaxation, and physical or virtual prototypes were implemented in full for this experiment. Many of the projects in this course are industry sponsored and teams often produce high-end custom fabricated prototypes. This permitted deeper exploration of the approach taken by participants, as well as the observation of additional quantitative information such as the cost of each prototype. For this study, researchers conducted individual interviews with each team to collect data. Two researchers worked with the teams while they completed an evaluation of how much budget was spent by the team on each prototype (in dollars), how many hours each took to produce, the team-perceived performance of each prototype, and the value of information gained from each prototype.

For the in-class study, each team was matched with an industry or research sponsor and provided a unique design problem. These ranged from development of sealing valves for offshore mining rigs to prototypes for medical equipment. This was an advantage in that it also permitted validation of the methodology for a broad segment of the design problem space, thus reducing any potential influence due to a specific problem. At the beginning of the semester, two researchers introduced the methodology through a lecture and provided the survey tool. The teams then had three months to build various prototypes. The researchers returned to conduct interviews at the end of the semester with each team and to assess their prototyping effort. Figure 8 shows an example prototype. There were 105 participants in this study. Participants completed their design projects in teams of 3–5.

3 Results and Discussion

This section reports results from all four experiments. For convenience, the results section is mapped to the research hypotheses. Data pertinent to each hypothesis are presented in sequence. The pursuit of several experiments permits each research hypothesis to be evaluated. It is observed that each of the individual techniques can have a positive impact on design outcome. These results are reported with a new level of detailed quantification.

Fig. 6 (left) Design problem and (right) example prototype from study 2—open design objective. The design acts like a ramp, guiding a disk into a rolling motion along the track.

Fig. 7 (left) Depiction of design problem and (right) example prototype from study 3—virtual prototyping. This sample from the physical prototyping condition will trace the pencil in a desired pattern.

Fig. 8 An example prototype from the capstone design study, a cam phaser


Furthermore, the method increases use of the individual techniques and improves overall performance.

Throughout this section, both the Student's t-test and the test of two proportions are employed. The Student's t-test is applied for hypothesis testing in cases where a difference of means can take on variable values. For the purposes of this work, it is assumed that any value for p less than 0.05 will suffice to reject the null hypothesis with statistical significance for the Student's t-test. Second, for instances in which there is a binomial distribution (only two possible values), the results are analyzed using the comparison of two population (defective) proportions. This test uses a transformed z-test to test the hypothesis that two samples are from the same population. Again, it is assumed that any value for p less than 0.05 will suffice to reject the null hypothesis with statistical significance for the test of two population proportions.
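
For reference, both tests are available in standard statistical libraries. The short sketch below is illustrative only: the scipy and statsmodels calls are standard library functions, but the data arrays are placeholders rather than the study's measurements.

    import numpy as np
    from scipy.stats import ttest_ind
    from statsmodels.stats.proportion import proportions_ztest

    # Difference of means (e.g., distance moved by experimental vs. control teams).
    experimental = np.array([50.0, 47.5, 55.2, 49.1])  # placeholder values, not study data
    control = np.array([41.0, 38.7, 44.2, 40.3])       # placeholder values, not study data
    t_stat, p_means = ttest_ind(experimental, control)

    # Comparison of two proportions (e.g., teams meeting a pass/fail requirement).
    successes = np.array([18, 10])  # teams meeting the target in each group (placeholder)
    totals = np.array([18, 18])     # teams per group (placeholder)
    z_stat, p_prop = proportions_ztest(successes, totals)

    print("difference of means: p = %.3f; two-proportion test: p = %.3f" % (p_means, p_prop))
    # As in the paper, p < 0.05 is taken as grounds to reject the null hypothesis.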

3.1 Hypothesis 1: Individual Techniques Improve Outcome

3.1.1 Iteration. The influence of iteration on design performance was measured in study 2—open design objective. The literature indicates that iteration improves performance [22,13]. This correlation was observed here also. The open-ended nature of this study allows quantification of the marginal effects of iterations, which are reported here for the first time (Fig. 9). Repeated tests without design changes are excluded. Performance continues to increase with many iterations. The performance value can increase to 400% of the initial test performance. Another way of looking at this result is that, on average, each iteration provides a 12% increase in performance. The r² value is 0.85 for mapping the results to the line equation of gradient 12% and intercept at 19 ft (the computed average for initial performance).
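
One way to read the reported fit (our restatement, with the gradient expressed as a fraction of the initial performance) is

\[ P_i \approx P_0 \left( 1 + 0.12\, i \right), \qquad P_0 \approx 19\ \mathrm{ft}, \quad r^{2} = 0.85, \]

where P_i is the average performance after the ith iteration; continued iteration therefore adds on the order of 12% of the initial performance per cycle.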

In remarkable complement to the increase in performance observed with iteration is the decreasing cost of execution in terms of time to complete each iteration (Fig. 10). The first build takes significantly more time than subsequent, evolutionary builds. Furthermore, there is also a general trend of decreasing time to build each ith iteration. There are some outliers at higher numbers of iteration, and thus large standard error at these points. From experimental observations, these were instances where a prototype failed and required significant repair time. Another way of looking at these results is that the time to build each iteration decreases by about 8% for each iteration (excluding the initial build).

3.1.2 Parallel Concepts. As would be expected from the existing literature [25,26], exploration of a second concept was also correlated with a performance increase beyond the first concept. The average performance of a team's second concept (44 ft) was greater than their first (19 ft) across all teams with multiple concepts in study 2—open design objective. This difference is significant with the Student's t-test at p < 0.001.

3.1.3 Interaction of Iteration and Parallel Tests. Several interesting results were found in terms of comparing iteration and parallel concept development, from analyzing results of study 2—open design objective. On average, teams which included an additional concept introduced it on the third iteration. Furthermore, the average score for the first two iterations in teams that attempted two or more concepts (19 ft) was significantly lower than the average scores of the first two iterations for those that only iterated (29 ft). This difference was significant with a Student's t-test at p = 0.05. One possible interpretation of this result is that the teams chose to explore a second concept after observing that the first concept was not working well.

In terms of time expenditure, the first iteration of the second concept only took an average of 12 min to fabricate. This time duration is significantly less than the time required to produce the first iteration of the first concept (30 min). This difference is statistically significant for the Student's t-test at p = 0.04. This suggests that as participants build the first concept they may be learning the tools and design problem more deeply, so that development of a new concept does not take much more time than an additional iteration of one concept.

In the experiments, exposure to the method has been controlled; however, teams are given freedom of choice in pursuing a prototyping strategy, where the objective is to simulate as realistic a design experience as possible. Based on the distributed usage of iteration and parallel testing, the experiment is fractional factorial. The main effects are reported in Secs. 3.1.1 and 3.1.2.

3.1.4 Scaling. Observations for scaling, subsystem isolation, and requirement relaxation were collected from study 4—capstone design. To evaluate the main effects of scaling, subsystem isolation, and requirement relaxation, single factor ANOVA and pairwise Student's t-tests are employed. Participants were interviewed and completed a form detailing performance, value of information on a ten point Likert scale, cost spent in dollars, and time spent in hours, respectively, for each prototype built in the experiment. The possibility of testing for interaction effects was also evaluated. The teams were free to ultimately decide their own strategy, after exposure to the method.

Fig. 9 Performance with respect to increasing number of cycles of iteration. ±1 standard error shown. Source: study 2—open design objective. Each point represents the average performance of the ith iteration, across all teams.

Fig. 10 Time spent to develop each iteration with respect to the ith iteration. ±1 standard error shown. Source: study 2—open design objective.


As a result of this freedom of choice, some of the factor trials of an ideal 2³ factorial experiment, which would be required to evaluate a bilinear (or higher order) regression model of the interaction effects of these three techniques, are not present. For instance, there were no cases of scaling without requirement relaxation. Thus, the experiment is equivalent to a fractional factorial approach. The significant main effects are as follows.

For scaling, as anticipated from the literature [28–30], there was a reduction in the cost to produce each prototype. Scaled prototypes cost teams significantly less than full-size physical prototypes, with p = 0.003. There was also a reduction in time to build each prototype, but not quite significantly so, at p = 0.058. There was no significant difference in performance of scaled prototypes. However, there was a significant increase in the value of information gained, at p = 0.039 for the t-test that scaled models afford more useful information (Fig. 11).

3.1.5 Subsystem Isolation. The mean cost and time to produce prototypes of an isolated subsystem were less than those of producing a full system. However, these differences were not quite significant (Student's t-test at p = 0.07 for each). There was not any significant reduction in performance or information gained (t-test values are p = 0.40 and p = 0.255, respectively). Only a small percentage of prototypes were produced as isolated subsystems; thus, there were not enough data points for statistical significance (Fig. 11). However, based on the trends of the other results, it appears that with more data points these differences would likely become significant.

3.1.6 Requirement Relaxation. For requirement relaxation, there was an expected significant savings in cost (Student's t-test, p = 0.01) and time (Student's t-test, p < 0.001) to produce a prototype as compared to prototypes constructed without requirement relaxation. There was no significant difference in the information gained with relaxed prototypes. As would be supported by the literature [8], however, there was a significant reduction in performance of these prototypes (t-test p = 0.009). This is exactly what is expected for relaxed requirement prototypes, as it is intended that the saved time and cost will offset the performance. The final performance of teams that pursued at least one prototype with a relaxed requirement was slightly higher on average than that of teams that only pursued full scale prototypes, but not significantly so (t-test value is p = 0.16). This approach allows for additional iterations or more budget allocation to the final prototype (Fig. 11).

3.1.7 Virtual Prototyping. There was a large reduction in cost observed with virtual modeling compared to physical modeling in study 4—capstone design. This result was significant for the Student's t-test at p = 0.005. It is important to note for this result that modeling software was already available to the teams. In some cases modeling software may not be available and would require purchase. There was a significant increase in performance at p = 0.009 for the Student's t-test (Fig. 11). There was a small increase in value of information gained for virtual prototyping, but not significantly so, at t-test p = 0.061.

For study 3—virtual prototyping, the task was to design a four-bar linkage to produce the highest ratio of horizontal to vertical travel within a cycle. Individuals in the virtual prototyping condition achieved a functional design in significantly less time (17.8 min) on average than individuals building a physical prototype (32 min). This test was significant at t-test p = 0.008. The average performance of prototypes in the virtual condition was also significantly higher (width to height ratio of 23.4) than that of the physical prototypes (width to height ratio of 5.9) for the objective to maximize the ratio of lateral to vertical translation of the linkage. This difference was significant with t-test p < 0.001. Of course, the results of this study are highly dependent on the specific design problem being addressed.

Fig. 11 Four metrics to evaluate prototypes, with regard to scaling, subsystem isolation, and requirement relaxation. ±1 standard error shown. Results are for each prototype on average in the in-class study with regard to: (top left) cost expended, (top right) time spent, (bottom left) performance achieved, (bottom right) information gained. Source: study 4—capstone design.


3.2 Hypothesis 2: Use of Individual Techniques Increases with Exposure to Method. This section reviews whether participating designers applied the techniques more often when exposed to the method. Section 3.3 will evaluate the impact on outcome.

During study 2—open design objective, on average the experimental teams iterated on their design concepts 13 times while the control group only iterated 9.3 times. This difference of means was significant using the Student's t-test with p = 0.006. Teams in study 1—binary design objective also pursued more iterations on average: 6.1 for experimental versus 3.7 for control, also significant at p < 0.001 for the Student's t-test.

The average number of concepts pursued per team was higher in the experimental group (1.69 concepts) than the control group (1.26 concepts) for the variable performance study, study 2—open design objective. This difference was significant with the Student's t-test at p = 0.005. Teams in the experimental group of study 1—binary design objective also pursued more concepts: 3.2 on average versus 1.6 in control, which was significant for the Student's t-test with p < 0.001. This is seen as a positive result, as the literature identifies that pursuing multiple design concepts in parallel prototypes is correlated with increased performance [23,25].

For scaling, from study 4—capstone design, there was a significant observation that teams adhering to the method (high Likert-scale response, i.e., "4" or "5") used scaled prototypes more often than those that diverged (low Likert-scale response, i.e., "1" or "2"), with p = 0.04 for the test of two population proportions. It was noted that significantly more teams used subsystem isolation when they adhered to the suggested approach, but still less often than they employed other techniques like scaling. Similarly, significantly more teams used requirement relaxation when they adhered to the suggested approach. Finally, we observe that teams adhering to the suggested approach applied virtual prototyping more often than those that diverged from the method. See Table 3.

Additionally, study 3—virtual prototyping also indicates that significantly more individuals select virtual prototypes when presented with the method. From the total set of participants, only 9 individuals chose to pursue physical prototyping while 23 individuals chose virtual prototyping (test of two proportions, p < 0.001).

3.3 Hypothesis 3: Use of the Method Improves Outcome. We observe that, overall, the teams exposed to the method started prototyping sooner. This is seen as a positive result, as reducing the time to begin testing of the first prototype has been correlated with success in previous research [10,11]. Table 4 shows that the mean time passed from the start of the prototyping session to the test of the first prototype was only 19 min for the experimental group, which was 22 min faster than the control group for study 3—virtual prototyping. In other words, the experimental group produced prototypes in less than half the time it took the control group. Time to first test was also faster for experimental groups in study 2—open design objective.

In terms of direct performance measure outcomes, the experimental teams also outperformed the control teams in both of the controlled experiments that allowed this comparison; see Table 4. For study 1—binary design objective, where teams were required to "cover a target," all of the experimental teams met the target requirement while only 56% of the control teams met the target performance. This percentage is the raw percent of teams that were able to build a device to move a quarter to cover the target "X," under its own power, and within the allotted time. There was also a significant difference in performance between the experimental and control groups for study 2—open design objective. This challenge was to move an object as far as possible. On average, the experimental teams moved the object 50 ft, while the control teams moved the object 41 ft.

For study 4—capstone design, it was observed that there was a correlation between adherence to the method and prototype performance. Each team addressed a unique design problem. This is advantageous as it permits the method to be evaluated across a broad range of design problems.

Table 3 Use of scaling, subsystem isolation, requirement relaxation, and virtual prototypes with respect to adherence to the suggested approach (values are percentages of prototypes using each technique). Note that those adhering to the method used these practices significantly more often. Adherence to method, Likert scores: 1, 2 = diverged; 3 = neutral or N/A; 4, 5 = adhered. Source: study 4—capstone design.

                                 Scaling   Subsystem isolation   Requirement relaxation   Virtual
Adherence scores 1, 2            33%       0%                    50%                      43%
Adherence scores 4, 5            54%       48%                   86%                      77%
Test of proportions p value      0.0427    0.0001                0.0278                   0.0176

Table 4 Design outcome effects of exposure to method

Metric                                                       Control   Experimental   Significance
Time to test (min)                                           41        19             t-test, p = 0.001
Performance—binary (percent of teams that covered target)    56%       100%           Test of proportions, p = 0.004
Performance—variable (ft)                                    41        50             t-test, p = 0.018

Fig. 12 Comparison of how closely teams followed their suggested approach and the outcome performance of prototyping efforts. ±1 standard error shown. Source: study 4—capstone design.


However, each team had different relevant performance metrics; therefore, general metrics were used for study 4—capstone design. Each team was asked several questions, including: (1) "How closely did your team follow the method?" and (2) "What was the outcome performance of your prototyping efforts?" The results in Fig. 12 highlight that performance was directly proportional to how closely teams adhered to the approach suggested by the method for their specific problem. The difference between close adherence ("4" or "5" on the Likert scale) and low adherence ("1" or "2" on the Likert scale) was significant for Student's t-test at p < 0.001. These problems were also much longer term than the controlled problems and evaluate the method on a larger scale.

4 Conclusions

Each of the individual research hypotheses was closely evaluated in Secs. 3.1 through 3.3. The results highlight that the individual techniques and the overall method improve various design outcomes. It was observed that employing iteration and multiple design concepts improved design performance significantly. Specifically, for the first time, the quantitative value of continued iteration has been reported. Substantial cost and time expenditure reductions occur with scaling, subsystem isolation, requirement relaxation, and virtual prototyping, and these reductions generally occur without loss of performance. Individuals also spent less time producing a functional concept with virtual prototypes in study 3—virtual prototyping. Teams exposed to the method achieved greater final design performance than control groups with the same material and time allotments in study 1—binary design objective and study 2—open design objective. The results of study 4—capstone design also indicate, with statistical significance, that teams which adhered to the method closely achieved higher self-efficacy for prototyping. These results are reported with several limitations, which are elaborated in Sec. 4.1.

4.1 Limitations to the Study. It may be argued that an empirical design study is limited by the selection of the design problem. One of the challenges of design problem selection is identifying which results are generalizable, and into what other contexts. One possible approach to address this issue is to explore several unique problems and contexts. In this work, several different and complementary design problems were evaluated in multiple, parallel controlled studies. In a fourth, in-class study, participants addressed a large variety of design problems; this in-class study also occurred over a longer term. Comparable effects were observed between studies 1-3 and study 4—capstone design. Further, the results regarding each of the individual techniques match what would be expected from the literature. Although this parallel approach reduces potential effects from the design problem selection, a remaining challenge is to quantify the generalizability of the results.

The research studies address only six techniques, while there may in fact be a large number of other valuable techniques. However, the literature identifies these six techniques as critical, with significant empirical support. By forming a method from strategies that are well founded on empirical results from the literature, we increased the chances of success and the potential value of the method.

The fundamental assumption of this work is that context can be effectively evaluated to determine a suggested approach toward a prototyping strategy. In some cases, the context may be vague, ambiguous, or unclear. Engaging in a miscalculated prototyping strategy could result in a loss of time or resources. An example would be underestimating the cost of iterating on a relaxed-requirement prototype and later not having enough budget to build the second iteration. Therefore, the context must be carefully evaluated to ensure that an appropriate strategy is identified. One insight from the results is that iteration can lead to marginal benefit only with variable performance measures; if a binary performance measure is already met by a current design, efforts should be focused on meeting other requirements.

4.2 Future Research. There are several interesting avenues now open for future research. It was noted in the literature review that the type of fabrication process employed has a significant impact on outcome. It is known that faster fabrication is preferable; however, there may be many techniques for achieving this. In our current work, we are exploring additional sources of design information to identify best practices for fabrication.

In addition to the experiments described above, versions of the method have been deployed at universities and design seminars around the world. Feedback from participants of all backgrounds, from Air Force colonels to native entrepreneurs, indicates that the individual techniques are critical for prototyping, yet the method of transmitting this information can be further simplified. Possible variations could include simplifying the heuristics into demonstrative "cards" with graphic examples, or something more naturally integrated with the design process, such as an annotated "prototyping notebook" for tracking and planning prototypes.

It would also be of value to experimentally validate the marginal effects of parallel prototyping and the remaining techniques to achieve a full regression model (with the caveat that such a prescribed experimental model may have other effects on the progression of an in situ design problem); to explore hybrid techniques; and to develop analogous methods that are integrated with other processes in design, such as ideation.

Acknowledgment

This work was supported by the Singapore University of Technology and Design (SUTD) and the SUTD-MIT International Design Center (IDC, idc.sutd.edu.sg). This material is also based in part on research sponsored by the United States Air Force Academy under Agreement No. FA7000-12-2-2005. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon.

References

[1] Badri, M., Mortagy, A., Davis, D., and Davis, D., 1997, "Effective Analysis and Planning of R&D Stages: A Simulation Approach," Int. J. Project Manage., 15(6), pp. 351–358.
[2] Thomke, S. H., 1998, "Managing Experimentation in the Design of New Products," Manage. Sci., 44(6), pp. 743–762.
[3] Riek, R. F., 2001, "From Experience: Capturing Hard-Won NPD Lessons in Checklists," J. Prod. Innovation Manage., 18(5), pp. 301–313.
[4] Moe, R. E., Jensen, D. D., and Wood, K. L., 2004, "Prototype Partitioning Based on Requirement Flexibility," ASME Paper No. DETC2004-57221.
[5] Christie, E., Jensen, D. D., Buckley, R., Menefee, D., Ziegler, K., Wood, K. L., and Crawford, R., 2012, "Prototyping Strategies: Literature Review and Identification of Critical Variables," American Society for Engineering Education Conference 2012, San Antonio.
[6] Otto, K., and Wood, K. L., 2001, Product Design: Techniques in Reverse Engineering and New Product Development, Prentice Hall, Upper Saddle River.
[7] Krishnan, V., and Ulrich, K. T., 2001, "Product Development Decisions: A Review of the Literature," Manage. Sci., 47(1), pp. 1–21.
[8] Drezner, J. A., and Huang, M., 2009, On Prototyping: Lessons from RAND Research, RAND Corporation, Santa Monica.
[9] Viswanathan, V., 2012, "Cognitive Effects of Physical Models in Engineering Idea Generation," Ph.D. thesis, Texas A&M University, College Station.
[10] Yang, M. C., 2005, "A Study of Prototypes, Design Activity, and Design Outcome," Des. Stud., 26(6), pp. 649–669.
[11] Jang, J., and Schunn, C. D., 2012, "Physical Design Tools Support and Hinder Innovative Engineering Design," ASME J. Mech. Des., 134(4), p. 041001.
[12] Häggman, A., Honda, T., and Yang, M. C., 2013, "The Influence of Timing in Exploratory Prototyping and Other Activities in Design Projects," ASME Paper No. DETC2013-12700.
[13] Viswanathan, V. K., and Linsey, J., 2011, "Design Fixation in Physical Modeling: An Investigation on the Role of Sunk Cost," ASME Paper No. DETC2011-47862.
[14] Schunn, C., Cagan, J., Paulus, P., and Wood, K. L., 2007, "NSF Workshop in Engineering and Science: The Scientific Basis of Individual and Team Innovation and Discovery, NSF 07-25," National Science Foundation, www.nsf.gov/pubs/2007/nsf0725/nsf0725.pdf


[15] Youmans, R. J., 2011, "The Effects of Physical Prototyping and Group Work on the Reduction of Design Fixation," Des. Stud., 32(2), pp. 115–138.
[16] Christensen, B., and Schunn, C., 2007, "The Relationship of Analogical Distance to Analogical Function and Preinventive Structure: The Case of Engineering Design," Mem. Cognit., 35(1), pp. 29–38.
[17] Camburn, B. A., Dunlap, B., Viswanathan, V., Linsey, J., Jensen, D. D., Crawford, R., Otto, K., and Wood, K. L., 2013, "Connecting Design Problem Characteristics to Prototyping Choices to Form a Prototyping Strategy," ASEE Annual Conference 2013, Atlanta.
[18] Camburn, B. A., Dunlap, B., Kuhr, R., Viswanathan, V., Linsey, J., Jensen, D. D., Crawford, R., Otto, K., and Wood, K. L., 2013, "Methods for Prototyping Strategies in Conceptual Phases of Design: Framework and Experimental Assessment," ASME Paper No. DETC2013-13072.
[19] Hammon, C. L., Green, M. G., Dunlap, B. U., Camburn, B. A., Crawford, R., and Jensen, D., 2014, "Virtual or Physical Prototypes? Development and Testing of a Prototyping Planning Tool," ASEE Annual Conference 2014, p. 9025.
[20] Dunlap, B. U., Hammon, C. L., Camburn, B. A., Crawford, R., Jensen, D., Green, M. G., Otto, K., and Wood, K. L., 2014, "Heuristics-Based Prototyping Strategy Formation: Development and Testing of a New Prototyping Planning Tool," ASME IMECE 2014, Montreal.
[21] Glegg, G. L., 1981, The Development of Design, Cambridge University, London.
[22] Dow, S. P., Heddleston, K., and Klemmer, S. R., 2011, "The Efficacy of Prototyping Under Time Constraints," Design Thinking: Understand—Improve—Apply, Understanding Innovation, C. Meinel, L. Leifer, and H. Plattner, eds., Springer-Verlag, Berlin, pp. 111–128.
[23] Ulrich, K. T., and Eppinger, S. D., 2000, Product Design and Development, McGraw-Hill, New York.
[24] Thomke, S. H., 2003, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation, Harvard Business, Boston.
[25] Dow, S. P., Glassco, A., Kass, J., Schwarz, M., Schwartz, D. L., and Klemmer, S. R., 2010, "Parallel Prototyping Leads to Better Design Results, More Divergence, and Increased Self-Efficacy," ACM Trans. Comput.-Hum. Interact. (TOCHI), 17(4), p. 18.
[26] Neeley, W. L., Lim, K., Zhu, A., and Yang, M. C., 2013, "Building Fast to Think Faster: Exploiting Rapid Prototyping to Accelerate Ideation During Early Stage Design," ASME Paper No. DETC2013-12635.
[27] Dahan, E., and Mendelson, H., 2001, "An Extreme-Value Model of Concept Testing," Manage. Sci., 47(1), pp. 102–116.
[28] Cho, U., Wood, K. L., and Crawford, R. H., 1998, "On-Line Functional Testing With Rapid Prototypes: A Novel Empirical Similarity Method," Int. Rapid Prototyping J., 4(3), pp. 128–138.
[29] Dutson, A. J., Wood, K. L., Beaman, J. J., Crawford, R. H., and Bourell, D. L., 2003, "Application of Similitude Techniques to Functional Testing of Rapid Prototypes," Rapid Prototyping J., 9(1), pp. 6–13.
[30] Cho, U., Dutson, A., Wood, K. L., and Crawford, R., 2005, "An Advanced Method to Correlate Scale Models With Distorted Configurations," ASME J. Mech. Des., 127(1), pp. 78–85.
[31] Thomke, S. H., and Bell, D. E., 2001, "Sequential Testing in Product Development," Manage. Sci., 47(2), pp. 308–323.
[32] Clin, J., Aubin, C., and Labelle, H., 2007, "Virtual Prototyping of a Brace Design for the Correction of Scoliotic Deformities," Med. Biol. Eng. Comput., 45(5), pp. 467–473.
[33] Sefelin, R., Tscheligi, M., and Giller, V., 2003, "Paper Prototyping-What is it Good for?: A Comparison of Paper- and Computer-Based Low-Fidelity Prototyping," CHI'03 Extended Abstracts on Human Factors in Computing Systems, pp. 778–779.
[34] Engineering Models Ease and Speed Prototyping, NASA, 2008.
[35] Wang, G. G., 2002, "Definition and Review of Virtual Prototyping," ASME J. Comput. Inf. Sci. Eng., 2(3), pp. 232–236.
[36] Wen, J. H., 2008, "Virtual Prototyping in Redesign and Durability Test Assessment," SAE Technical Report No. 2008-01-0862.
[37] Anderl, R., Mecke, K., and Klug, L., 2007, "Advanced Prototyping With Parametric Prototypes," Digital Enterprise Technology, Springer, New York, pp. 503–510.
[38] Zhu, Y., and Ahmad, I., 2008, Developing a Realistic-Prototyping Road User Cost Evaluation Tool for FDOT, Florida Department of Transportation Construction Office, Department of Construction Management, College of Engineering and Computing, Florida International University, Tallahassee.
