

Understanding Design Decisions under Competition using Games with Information Acquisition and a Behavioral Experiment

Jitesh H. Panchal

School of Mechanical Engineering

Purdue University

West Lafayette, IN 47907

Email: [email protected]

Zhenghui Sha

Department of Mechanical Engineering

University of Arkansas

Fayetteville, AR 72701

Email: [email protected]

Karthik N. Kannan

Krannert School of Management

Purdue University

West Lafayette, IN 47907

Email: [email protected]

ABSTRACT

The primary motivation in this paper is to understand decision-making in design under competition

from both prescriptive and descriptive perspectives. Engineering design is often carried out under competition

from other designers or firms, where each competitor invests effort with the hope of getting a contract, attracting

customers, or winning a prize. One such scenario of design under competition is crowdsourcing where designers

compete for monetary prizes. Within existing literature, such competitive scenarios have been studied using models

from contest theory, which are based on assumptions of rationality and equilibrium. Although these models are

general enough for different types of contests, they do not address the unique characteristics of design decision-

making, e.g., strategies related to the design process, the sequential nature of design decisions, the evolution of

strategies, and heterogeneity among designers. In this paper, we address these gaps by developing an analytical

model for design under competition, and using it in conjunction with a behavioral experiment to gain insights about

how individuals actually make decisions in such scenarios. The contributions of the paper are twofold. First, a

game-theoretic model is presented for sequential design decisions considering the decisions made by other players.

Second, an approach for synergistic integration of analytical models with data from behavioral experiments is


presented. The proposed approach provides insights such as the shift in participants' strategies from exploration to exploitation as they acquire more information, and into how they develop beliefs about the quality of their opponents'

solutions.

Keywords: Decision making, competition, information acquisition, game theory, behavioral experiments,

crowdsourcing.

1 Introduction: Design Decisions under Competition

Engineering design involves two types of decisions: artifact-related decisions and design-process related decisions.

Artifact-related decisions, such as selecting the geometry and dimensions of the parts, are driven by design specifications.

On the other hand, design process decisions are related to the use of resources during the design process. Decisions such as

how to partition a design problem, how to sequence design activities, how to allocate resources among teams, and when to

stop design iterations are examples of design-process decisions. Some of the important design process decisions are related

to acquiring information to support an artifact decision. For example, information acquisition decisions for computational

design involve deciding where to sample the design space, which simulation models to sample from, and when to stop sam-

pling [1]. Since acquiring information requires costly simulation runs or physical experiments, designers must judiciously

choose what information to acquire, how much information to acquire, and when to stop acquiring information. These

decisions influence not only the efficiency of design, but also the final design outcome [2].

Design process decisions are challenging because they are interrelated with the artifact decisions. To assess design

process alternatives (e.g., alternate partitions of the problem), designers must consider the impact on the resulting artifacts.

However, evaluating the impact on the artifact necessitates executing different design options to select the best process

option, which is highly inefficient and costly. One of the ways in which this challenge is addressed in information acquisition

decisions is by considering information sequentially, and deciding whether to continue gathering more information or not

at each step based on the potential improvement on the artifact [1, 3]. Various other information acquisition strategies have

been developed for computational design and optimization [4].

The difficulty in evaluating the impact of design process decisions is further compounded in design under competition

where success is dependent on other competing designs in the market. In such cases, designers must not only consider

the outcome of their own decisions, but also their competitors’ decisions. The designers in such cases are not necessarily

interested in developing the best possible design, but in developing designs that are better than those of the competitors. In

order to understand how to make design decisions under competition, it is essential to answer the following research question

(RQ1): How should sequential information acquisition decisions be made for design under competition?

One of the special cases of decision making under competition is design crowdsourcing [5]. Within crowdsourcing, par-

ticipants make design process and artifact-related decisions in the presence of competition from other participants. Crowd-

sourcing contests can be designed in multiple ways [5], and the design of crowdsourcing contests significantly affects the

participation, the outcomes, and the cost of running the contest. Panchal [5] highlights that designing a crowdsourcing ini-

tiative involves multiple decisions, including the duration of the activity, the number of stages, deciding who can participate,


how the problem should be framed, how to choose the award recipients, how much reward should be given, whether to allow team formation, and how the rewards should be distributed among team members. These options have a direct influence

on the outcomes of a contest, including the number of participants, the quality of the solutions received, and the amount of

effort invested by the contestants. Designing effective crowdsourcing contests requires an understanding of how individuals

actually make decisions of acquiring information and refining the design. In order to design effective crowdsourcing contests

for engineering design, it is essential to answer the following research question (RQ2): How do individuals make sequential

information acquisition decisions for design under competition?

This paper is motivated by the two research questions, RQ1 and RQ2, which are related to developing normative and

descriptive models of design decisions under competition. The primary contributions of this paper to the design decision

making literature include: (a) a game-theoretic model for sequential design decisions under competition, and (b) syner-

gistic integration of the analytical model with the results of behavioral experiments to generate insights about individuals’

decisions.

The paper is organized as follows. An overview of existing literature on models of decision making under competition

is presented in Section 2. The overall research framework and the details of the experiment are presented in Section 3. The proposed analytical model is presented in Section 4. The integration of the analytical model and experimental results is

presented in Section 5. Finally, closing comments are presented in Section 6.

2 Review of Relevant Literature

The problem of design under competition is different from the problem of non-cooperative design analyzed within the

engineering design literature (e.g., [6, 7, 8, 9]). In design under competition, different players develop solutions to the same

problem and the contest designer picks the best solution. On the other hand, in non-cooperative design, designers work on

different aspects of the overall systems design problem with the goal of achieving Pareto optimality [10]. The literature

on design for market systems (DFMS) [11] is focused on modeling manufacturing firms competing in a profit maximizing

game. In the rest of the paper, the focus is on decisions made by individuals under competition.

2.1. Existing Analytical Models of Contests

Contest theory, which is based on game theory [12], has recently been utilized to understand how people make decisions under competition. A contest is modeled as a non-cooperative game among participants. A participant's payoff (πi) is dependent on the prize amount for the tournament (Π), the probability of winning the prize (Pi), and the cost (Ci)

incurred in developing the solution. For example, in a winner-takes-all contest the expected payoff is:

E(πi) = Π Pi − Ci        (1)

The outcome of a contest is dependent on the quality of the submissions from all participants. If the quality of the solution

(qi) generated by participant i is a stochastic function of the effort invested (ei), then the winning probability is dependent on

the effort invested by each participant, i.e.,

Pi = Pi(e1, e2, ..., eN)        (2)


The relationship between the effort (ei) and the probability of winning for a specific participant i is quantified in terms of

contest success functions (CSF) [13]. CSFs depend on the specific problem being considered, and are generally difficult to

quantify. Generic functional forms for CSFs can be derived in stochastic, axiomatic, optimally derived, and micro-founded

ways [14]. A commonly used CSF is:

Pi = f(ei) / ∑_{j=1}^{N} f(ej),  if ∑_{j=1}^{N} f(ej) > 0;  Pi = 1/2,  otherwise        (3)

where f (ei) is a non-negative increasing function denoting that as the effort increases, the probability of winning for that

participant increases.

Commonly used functional forms of f(ei) include the power form, f(ei) = ei^m with m > 0, and the exponential form, f(ei) = e^{k ei} with k > 0. These two functional forms result in different contest success functions. The power form results in Pi = ei^m / ∑_{j=1}^{N} ej^m, whereas the exponential form results in a multinomial “logit form” of the contest success function, Pi = e^{k ei} / ∑_{j=1}^{N} e^{k ej}. The latter formulation can be derived both axiomatically and stochastically [13, 15].
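To make the formulation concrete, the following Python sketch evaluates the logit-form CSF and the expected payoff of Eqn. (1). The parameter values are hypothetical, chosen purely for illustration, and a cost linear in effort is assumed, which the general model does not require:

    import math

    def logit_csf(efforts, k=1.0):
        # Exponential-form ("logit") contest success function of Eqn. (3),
        # with f(e) = exp(k*e); returns each player's winning probability.
        weights = [math.exp(k * e) for e in efforts]
        total = sum(weights)
        if total > 0:
            return [w / total for w in weights]
        return [0.5] * len(efforts)  # tie-breaking branch of Eqn. (3)

    def expected_payoff(i, efforts, prize, unit_cost, k=1.0):
        # Eqn. (1): E[pi_i] = Prize * P_i - C_i, assuming (hypothetically)
        # that the cost is linear in the invested effort.
        p_win = logit_csf(efforts, k)[i]
        return prize * p_win - unit_cost * efforts[i]

    # Hypothetical two-player contest: prize of 200, unit effort cost of 10
    print(expected_payoff(0, [3.0, 2.0], prize=200, unit_cost=10))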

These CSFs can be used to calculate the expected payoff in Eqn. (1) in terms of the players’ strategies (e.g., effort).

At the Nash equilibrium [16], each player chooses the strategy that is a best response to other players’ strategies. Sha and

co-authors [17] present rational reaction sets and Nash equilibria for simple contests with two players assuming different

functional forms for the CSFs.

The structure of the basic contest models discussed above can be extended to more sophisticated contests, e.g., contests

with diverse cost and prize structures, nested contests, contests with alliances, and dynamic contests [18]. The formulation

can be used to quantify the effects of different tournament design concepts on the equilibrium effort invested by the players in

terms of exogenous parameters (e.g., prize amount), the endogenous parameters (e.g., expertise and effort), and the structure

of the game (e.g., winner-takes-all vs. auction style).

Contest models have resulted in various insights, such as (i) restricted entry is better than free and open entry [19],

(ii) the optimal number of contestants is two [20], (iii) the optimal strategy for maximizing effort from the contestants is to

allocate the entire prize to a single winner, (iv) auction style contests reduce the sponsor’s expenditure compared to fixed prize

contests, and (v) the inefficiencies due to underinvestment of effort can be reduced by performance-contingent award [21].

2.2. Empirical Studies on Contests

The results of game-theoretic models discussed in Section 2.1 are meaningful under the assumption of idealized rational

behavior. In addition to the theoretical studies, a few studies driven by empirical data and lab experiments have also been

performed. Empirical studies have been used to validate some of the basic assumptions in theoretical models. For example,

it has been established that contestants indeed behave strategically, i.e., their behaviors are based on the decisions of other contestants [22]. Such studies validate the use of a game-theoretic framework for contests.

Boudreau et al. [23] present an empirical analysis of software contests on the TopCoder platform [24]. The authors find

two competing effects from the data:


1. incentive effects: increasing the number of players reduces individual effort, and

2. parallel path effects: increasing the number of players increases the possibility of finding an extreme value solution.

The authors show that one of these effects dominates based on the level of uncertainty in the problem. If the uncertainty is

low, then the incentive effects dominate and the amount of effort reduces with increasing number of players. In contrast,

if problem uncertainty is high, then the parallel path effects dominate, and the probability of finding an extreme solution

increases. Sheremeta [25] performed lab experiments on different types of contests including a grand contest, contests with

equal and unequal prizes, and a contest with multiple sub-contests. Based on the experiments, Sheremeta found that (a) the

grand contest generates higher effort levels among all simultaneous contests, and (b) in multiple prize contests, equal prizes

produce lower efforts than unequal prizes.

Recently, Sha et al. [17] synergistically utilized analytical models from contest theory and behavioral experiments to

study design contests. The authors developed a function optimization game where the participants attempt to optimize

a design characterized by a single parameter, whose performance is quantified by an unknown function. The players in

the game can learn the performance of a design at a pre-specified cost. The behavioral experiment is used to understand

how individuals make decisions in design crowdsourcing scenarios, and how their decisions deviate from the rationality

assumption. Using the experimental data, the authors validate hypotheses derived from the analytical models, including

“increasing the cost reduces the expected effort”, “quality monotonically increases with the effort”, and “increasing the

effort increases the probability of winning”. The authors also observe deviations from rationality, which can be attributed to

the anchoring bias [26].

2.3. Research Gaps and the Focus in this Paper

The contest theory models discussed in Section 2.1 are based on a number of assumptions, specifically (i) there is

only one decision made by each player within a contest, (ii) a player’s strategy only consists of effort, ei, (iii) the game is

static, (iv) individuals operate at the Nash equilibrium, and (v) players are homogeneous. Although these models are general

enough for different types of contests, and can provide insights into certain design activities, they do not address the unique

characteristics of design-related decisions described in Section 1.

First, the models do not account for strategies related to the design process, e.g., search strategies, information acqui-

sition strategies, and strategies to stop acquiring information before making an artifact decision. Second, the models do

not account for the sequential nature of design decisions, e.g., information acquisition decisions are followed by artifact

decisions, which are further followed by information acquisition decisions, and so on. Designers need to make decisions by

considering the impact on future decisions. Third, current models do not consider evolution of strategies, e.g., they do not

account for how people change their strategies as they gather more information. Fourth, the assumption of Nash equilibrium

can be justified by factors such as iterated reasoning, mutual rationality, and learning over time [12, p. 23]. It is not

clear whether this is a valid assumption for design situations. There is a lack of design-related studies about what players

actually assume about other players, how they account for other players’ decisions, and whether there is any learning effect

over time. Finally, because of the homogeneity assumption, the models only provide insights about the aggregate behaviors


of groups of individuals. The models do not address the heterogeneity among people, and the uniqueness of each player. This

is a significant limitation because behavioral experiments have shown that there is significant variation among subjects and

their strategies. These research gaps limit the application of contest theory models to design decisions under competition. To

address these research gaps, we specifically focus on the following aspects: design-specific strategies, evolution of strategies,

mutual rationality, learning over periods, and heterogeneity.

3 Research Framework

Different research approaches including field studies, empirical analysis of secondary data, and controlled experiments

can be used for understanding the strategies adopted by different individuals in design problems. Field studies involve

analyzing professional designers working on real design problems, while secondary data are data collected for purposes other than the analysis at hand. Using the former approach is challenging due to the presence of many uncontrollable variables that influence

designers’ decisions, while the latter is limited due to the lack of access to designers’ private information. Controlled

experiments allow greater control of the parameters that are not the focus of the study, and the information available to each

designer is clear.

Ideally, the goal is to conduct experiments that simulate reality as closely as possible, while completely controlling

for the variables of interest. However, these are conflicting goals. The design of experiments requires a tradeoff between

realism and control. There are multiple types of experiments with different levels of control, as shown in Figure 1. On

the far left, there are laboratory experiments, which provide the highest level of control. Here, subjects participate in tasks

that are well controlled to study the effects of treatment variables. On the far right are naturally occurring data, obtained

from real-world designers in their regular activities. Between these two extremes, there are field experiments that provide

a balance between control and realism [27, 28]. Field experiments can be further classified into three types, based on the

aspects that are controlled.

Fig. 1: Classification of experiments with human subjects (based on [28])

Clearly, the type of the experiment should be chosen based on the researcher’s objective. The choice of the experiments

has implications on the validity, ease of replication, and the ability to analyze the cause-and-effect relationships. Laboratory

experiments have the highest internal validity, whereas natural data provide the highest ecological validity which is the degree

to which an investigation represents real-life situations. In this study, our objective was to achieve a high level of control in

order to understand designers’ strategies. Hence, we use lab experiments. Future studies will be focused on increasing the

ecological validity through field experiments.

3.1. Using Games for Engineering Design Research

One approach to conducting lab experiments to study design-related decision making is using simple games that repre-

sent abstractions of specific activities within the design process. An example of such games is the parameter design game,


initially developed by Hirschi and Frey [29]. In the game, there are two sets of parameters, inputs (x) and outputs (y), related

through a linear mapping (y = Mx). The participants adjust the input variables to bring the values of each output variable

y ∈ y within acceptable ranges [y∗ − ε, y∗ + ε]. The game has been used to study the effects of (i) problem scale and cou-

pling among parameters on the time for completion of tasks by humans [29], (ii) technical and social complexity on design

effort [30], and (iii) scale and coupling on solution quality [31]. While the task appears much simpler than a real design

problem, it is valuable from the standpoint of the human-subject experiment. The advantage of using such simplified domain

independent problems within games is that they prevent confounding with subjects’ domain specific knowledge. Further,

Grogan and de Weck argue that “in this context-free case, even linear systems are not perceived as simple due to limited

cognitive abilities without quantitative aids” [30].

The use of such games is increasingly gaining attention within the engineering design and systems engineering literature.

Recent examples of games used for experiments include a truss design game [32], a 3D peekaboo game [33], an EcoRacer

game [34], and Orbital Federates game [35]. McComb et al. [32] use the truss design game to understand how humans

sequence design operations. Ren et al. [34] use the EcoRacer game to compare the effectiveness of humans and optimization

algorithms for computationally hard problems. While their study is framed as a crowdsourcing game, competition is only

used to attract participants. They do not analyze the competitive scenario and its effects on individual decisions, which is the

focus of this paper.

3.2. Function Optimization Game

In this study, we use the function optimization game developed by Sha et al. [17]. The choice of this game is driven by

the need for a problem that is general enough to provide insights about broad classes of decision making situations in design,

but at the same time has the specific features discussed in Section 1. In this game, each participant competes with another

randomly selected participant in finding the optimum of an undisclosed function, f (x). The participants can learn about the

value of the function for specific inputs at a pre-specified cost, c. At the end of the game, the participant who achieves the

better output wins the contest, and receives a fixed prize (Π).

As Sha and co-authors [17] note, this game is a simple surrogate for decisions made in design under competition. The

game embodies the following characteristics similar to design decision making scenarios: (i) a designer’s goal is to find the

best design [36], (ii) designers need to evaluate the performance of design alternatives through computational or physical

experiments, (iii) experiments incur costs, and (iv) a greater number of experiments results in a better understanding of the

design space, therefore, better quality. In addition to being a simplified representation of decisions commonly encountered

within the design process, the game also embodies the sequential information acquisition decisions, and enables the study

of strategic decisions. There are advantages of using such a simple abstraction. It allows researchers to control for the

influence of domain specific knowledge on participants’ strategies. Due to its domain independence, it does not require

specific knowledge or expertise, and can be executed with student subjects. The simple game also enables detailed analytical

modeling. Therefore, the function optimization game enables controlled lab experiments.


3.3. Participants’ Decisions

The analytical model used by Sha and co-authors [17] is the standard model from contest theory, where the decision

makers only make a single decision involving one strategic variable, the amount of effort (ei). The quality is assumed to be

an explicit function of the effort (e.g., qi = αei or qi = αexp(βei)). This allows for explicit derivation of Nash equilibrium

under special cases. Using the simple model, Sha and co-authors [17] test hypotheses about the effect of cost on the number

of tries, the effect of number of tries on the solution quality, and the effect of number of tries on the winning probability.

The simple model restricts the analysis of individual strategies for information acquisition, the evolution of strategies, how

individuals account for their competitors’ decisions, and the heterogeneity among the strategies of the individuals, which is

the focus of this paper.

In contrast to the standard contest theory model, in this paper we assume that individuals make decisions sequentially.

At each step, t, the participants make two decisions. The first decision is whether to continue with an additional information

acquisition step (i.e., sampling more points) or not. This decision depends on three factors: the current quality of the design

(i.e., the best function value achieved so far), the expected improvement in the quality if additional information is acquired,

and the final quality of the competitor’s design (which is not known). Gathering additional information may result in im-

provement in the solution quality, which increases the probability of winning, thereby increasing the expected value of gross

payoff (ΠPi). However, this additional information comes at a cost, c. Conceptually, if the expected improvement in the

payoff is less than the cost, i.e., (Π∆Pi < c), then the decision maker should stop further sampling. Since the probability of

winning (Pi) is also dependent on the decisions made by the other player, this is a strategic decision, and it is best modeled

as a game.

If the decision to acquire more information has been made, then the second decision is to choose the value of design

variable x that is most likely to result in the optimum value of the function. This is an individual decision because it is not

affected by the decisions made by the other player. Assuming that the decision makers are rational, they should pick the

value of x that maximizes the net payoff. The net payoff is maximized by maximizing the probability of winning, which in

turn is maximized by choosing the value of x that results in the maximum improvement in the quality of the solution.

Fig. 2: Sequence of decisions made by participants in the game

These two decisions are repeated until the participant decides to stop sampling further, as illustrated in Figure 2. If the

participant decides to stop, the best function value is used as his/her final submission. The winner is determined based on the


final submissions of the competing participants. These decisions form the basis for the model presented in Section 4, where

analytical models of the decisions for the function optimization game are presented in detail. Before discussing the model,

we provide an overview of the experimental setup in Section 3.4.
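In outline, a period of the game proceeds as in the following Python sketch. Here choose_next_point and should_stop are placeholders for the two decision rules developed in Section 4, the initial samples are assumed given, and the sketch describes the game flow rather than the actual experiment software:

    def play_one_period(f, cost, history, choose_next_point, should_stop):
        # One period of the function optimization game.  `f` is the undisclosed
        # function; each evaluation of it costs `cost` tokens.  `history` holds
        # the (x, f(x)) pairs acquired so far.
        total_cost = cost * len(history)
        while not should_stop(history, total_cost):   # decision 1: stop or not
            x = choose_next_point(history)            # decision 2: where to sample
            history.append((x, f(x)))
            total_cost += cost
        best_x, best_f = min(history, key=lambda point: point[1])
        return best_x, best_f, total_cost             # best_f is the submission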

3.4. Experimental Setup

The experiment was carried out with 44 senior undergraduate Mechanical Engineering students at Purdue University.

The subjects competed in randomly assigned pairs to simulate two-player games. Each subject participated in two treatments:

low cost treatment (c = 10 tokens) and high cost treatment (c = 20 tokens). Within each treatment, there were 15 periods

and the subjects played the game once in each period. Therefore, the subjects played the game 15 times in the low cost treatment and 15 times in the high cost treatment. Within a period, the subjects made the two decisions (the decision to choose x, and the decision

to stop or not) sequentially multiple times. Each step of choosing x is referred to as a try.

At the end of each period, the winners receive the prize amount (Π = 200 tokens) minus their cost in that session.

At the start of each period, subjects are re-matched. They are never informed about whom they are competing against.

At the end of each period, participants are informed whether they won or not, the solution (x and f (x)) obtained by the

winner, and the actual optimum of the function in that period. This provides feedback to the players about the opponent’s

solution when he/she does not win. This feedback does not directly reveal the solution or the opponent’s strategy in the

future periods because the function is randomly generated and participants are randomly matched in each period. However,

it allows the players to form beliefs about the opponent’s solution quality in the subsequent periods. In addition to controlling

for the influence of participants’ domain knowledge, the experiment is designed (and executed) to control for a number of

other factors, including order effects, learning effects, and wealth effects. The experiment was implemented in z-Tree [37].

Additional details of the game can be found in Ref. [17].

4 The Proposed Normative Model

While there are many search strategies for design optimization, a promising approach that accounts for sequential

decision making with an unknown functional form is to model the function as a stochastic process. Such an approach is

widely used in Bayesian global optimization [38]. From a cognitive standpoint, the approach is based on the assumption

that individuals build an abstract model of the function based on the available information, but there is also uncertainty

around the model. Different stochastic processes can be used for modeling the unknown function. Our model is based on

the Wiener process model because of its computational simplicity in calculating the distribution of random variables after

each observation [39]. Another advantage is that the approach can be extended in the future by considering more general

random processes (e.g., Gaussian Processes), and can be extended to design problems in multiple dimensions. Finally,

as we illustrate in the rest of the paper, the model provides a connection between prescriptive and descriptive models for

decision making. The details of the mathematical modeling of the function are presented in Section 4.1. The mathematical

formulation of expected improvement for choosing the next point is presented in Section 4.2. The details of the decision

to stop are presented in Section 4.3. Finally, the usefulness of the model for design research and practice is discussed in

Section 4.4.


4.1. Modeling the Function as a Wiener process

In this model, the function is assumed to be embedded in a family of curves generated by a random process. Given two

points x_{j−1} and x_j, and the corresponding function values f_{j−1} and f_j, the function value f(x) for x ∈ [x_{j−1}, x_j] is modeled as a random variable. If f(x) is modeled as a Wiener process with parameter σ, the distribution of f(x), conditioned on the information z_t, is normal with mean (µ) and variance (v²):

µ(x | z_t) = f_j (x − x_{j−1})/(x_j − x_{j−1}) + f_{j−1} (x_j − x)/(x_j − x_{j−1})        (4)

v²(x; σ | z_t) = σ² (x − x_{j−1})(x_j − x) / (x_j − x_{j−1})        (5)

Here, z_t is the set of all the data points (x, f(x)) obtained until time-step t. Note that the mean is linear between the two points and the variance is zero at the end points. It is assumed that f(x) for x ∈ [x_{j−1}, x_j] depends only on the two end points and the parameter σ; it does not depend on any points outside the range. Further details are given in [39, 40].
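A minimal Python sketch of Eqns. (4) and (5), for a query point x between two previously sampled points, is given below; the numerical example reproduces the setting of Figure 3:

    def wiener_mean_var(x, x_prev, x_next, f_prev, f_next, sigma):
        # Conditional mean (Eqn. 4) and variance (Eqn. 5) of f(x) for
        # x in [x_prev, x_next] under the Wiener process model.
        width = x_next - x_prev
        mean = f_next * (x - x_prev) / width + f_prev * (x_next - x) / width
        var = sigma ** 2 * (x - x_prev) * (x_next - x) / width
        return mean, var

    # Figure 3 setting: f(0) = 1, f(1) = 0, sigma = 1
    print(wiener_mean_var(0.5, 0.0, 1.0, 1.0, 0.0, 1.0))  # -> (0.5, 0.25)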

Fig. 3: Illustration of the Wiener process model with input points (0,1) and (1,0), σ = 1.0 (top: mean and uncertainty bounds of f(x); bottom: expected gain G).

The use of this model for an unknown function of a single variable is illustrated in Figure 3. Assume that the function to be minimized is known at two points, f(0) = 1 and f(1) = 0. In the top part of the figure, the mean µ(x), evaluated using Eqn. 4, is shown using the solid black line, and the dotted blue lines represent µ(x) ± v(x; σ), where v(x; σ) is evaluated using Eqn. 5.


4.2. Decision: Choosing the Next Point based on Expected Improvement

The choice of the next point is an individual decision. If a participant has decided not to stop sampling, the decision on

where to sample does not depend on the decisions made by the competitor. Ideally, the overall sampling strategy should be

based on the goal of minimizing the function at the lowest cost possible. Since the sampling decisions are made sequentially,

each sampling decision should be made considering the impact on the subsequent sampling decisions. Dynamic program-

ming approaches are generally recommended within decision theory for making such sequential decisions. However, due

to the associated complexity, we do not consider such dynamic programming approaches. Instead, we consider a myopic

strategy where individuals only consider one step ahead. Individuals are assumed to try to maximize their payoff by only

considering one additional data point. The one-step-ahead strategy is easier to use, and is one of the most commonly used strategies in the global optimization literature. Multiple criteria have been developed in the literature for selecting the next

point. These include expected improvement, probability of improvement, value of information, and knowledge gradient [41].

We assume that at each step, individuals try to maximize the expected improvement in the solution, because that corresponds

to the maximization of the expected payoff. Assuming that a player has decided to invest in one more try, the expected

payoff (see Eqn. 1) is maximized by maximizing the probability of winning (Pi), which can be achieved by maximizing the

improvement.

The expected improvement for a function minimization problem is calculated as follows. The solution quality at try t, denoted by q_t, is related to the improvement possible in the current best value f*_t. If x is the point to be chosen at the next try, (t+1), then the possible improvement, I_t, is given by the following random variable:

I_t(x; σ) = max{(f*_t − f(x)), 0}        (6)

The expected improvement (gain), G, at t is

G_t(x; σ) = E[max{(f*_t − f(x)), 0} | z_t]        (7)
          = ∫_{−∞}^{f*_t} (f*_t − z) dF_x(z)        (8)

where F_x is the normal distribution with mean µ(x) and variance v²(x; σ). The expected gain can also be written as:

G_t(x; σ) = v(x; σ) Ψ((f*_t − µ(x)) / v(x; σ))        (9)

where

Ψ(x) = ∫_{−∞}^{x} (x − z) dφ(z)        (10)
     = ϕ(x) + x φ(x)        (11)

Here, φ is the standard normal cumulative distribution function, and ϕ is its density:

φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−z²/2} dz        (12)

ϕ(x) = (1/√(2π)) e^{−x²/2}        (13)

Assuming that the decision maker uses maximization of the expected improvement as the decision criterion, the next point to be chosen is

x_{t+1} = argmax_x G_t(x; σ)        (14)


and the improvement at the chosen next point is the following random variable:

I_t(x_{t+1}; σ) = max{(f*_t − f(x_{t+1})), 0},        (15)

which has a Gaussian distribution with mean µ(x_{t+1} | z_t) and variance v²(x_{t+1}; σ | z_t), truncated at f*_t. Using this improvement, the best solution at the next time-step, f*_{t+1}, can be quantified by the following random variable:

f*_{t+1} = f*_t − I_t(x_{t+1}; σ)        (16)
         = f*_t − max{(f*_t − f(x_{t+1})), 0}        (17)

In Figure 3 (bottom), the expected gain for x ∈ [0, 1], evaluated using Eqn. 9, is plotted. The expected gain is maximum at x = 0.84. The probability distribution of f*_{t+1} at x = 0.84, evaluated using Eqn. 17, is shown in red in the top part of the figure.
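The computation behind Figure 3 (bottom) can be sketched in a few lines of Python; the grid search below stands in for a proper one-dimensional optimizer of Eqn. (14):

    import math

    def norm_cdf(u):
        return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

    def norm_pdf(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

    def expected_gain(x, x_prev, x_next, f_prev, f_next, f_best, sigma):
        # Expected improvement G_t(x; sigma) of Eqn. (9), with Psi of
        # Eqns. (10)-(11), under the Wiener model of Eqns. (4)-(5).
        width = x_next - x_prev
        mean = f_next * (x - x_prev) / width + f_prev * (x_next - x) / width
        v = math.sqrt(sigma ** 2 * (x - x_prev) * (x_next - x) / width)
        if v == 0.0:
            return max(f_best - mean, 0.0)  # no uncertainty at sampled points
        u = (f_best - mean) / v
        return v * (norm_pdf(u) + u * norm_cdf(u))

    # Figure 3 example: f(0) = 1, f(1) = 0, so f*_t = 0 and sigma = 1
    grid = [i / 1000.0 for i in range(1, 1000)]
    x_next = max(grid, key=lambda x: expected_gain(x, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0))
    print(x_next)  # approximately 0.84, matching Figure 3 (bottom)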

4.3. Decision: Whether to Stop Information Acquisition or Not

Consider a simpler case where there is no competition and the payoff is proportional to the quality of the solution, i.e., α f*, with a constant α. Then an individual using the one-step-ahead strategy should stop if the expected gain in the net payoff is less than zero. Equivalently, the individual should stop if the expected gain in the gross payoff is less than the (deterministic) cost of gathering information at the additional data point. Mathematically,

stop if: α G_t(x_{t+1}; σ) < c        (18)

i.e.,

stop if: E[max{(f*_t − f(x_{t+1})), 0} | z_t] < c/α        (19)

This is the commonly used stopping criterion in the optimization literature. However, this does not account for the decisions

made by other players. We extend this criterion to account for the competition, specifically for a two-player game.

4.3.1. Stopping Criterion for a Two-player Game

In terms of the net payoff of each player, π_i, a player i should stop if the expected improvement in the net payoff from t to (t+1) is less than zero, i.e.,

stop if: E[π_{i,t+1}] − E[π_{i,t}] < 0        (20)

Here, the payoff of player i at time step t is

π_{i,t} = Π P_{i,t} − C_{i,t}        (21)
        = Π P(f*_{i,t} < f*_{−i}) − C_{i,t}        (22)

where P_{i,t} is the probability of player i winning after t tries; C_{i,t} is the cost incurred by player i until t; and f*_{−i} is the best solution submitted by the other player. Note that f*_{i,t} is known to player i after t tries. If player i continues for one more step, the net payoff is:

π_{i,t+1} = Π P(f*_{i,t+1} < f*_{−i}) − C_{i,t+1}        (23)

where C_{i,t+1} = C_{i,t} + c. Note that f*_{i,t+1} is a random variable.

Substituting Eqns. (22) and (23) in Eqn. (20), we get the stopping criterion for player i as:

[Π P(f*_{i,t+1} < f*_{−i}) − C_{i,t+1}] − [Π P(f*_{i,t} < f*_{−i}) − C_{i,t}] < 0        (24)

P(f*_{i,t+1} < f*_{−i}) − P(f*_{i,t} < f*_{−i}) < c/Π        (25)


For a two-player case, the stopping conditions for both players are given by the following two equations, where T_1 and T_2 are the total numbers of tries for players 1 and 2, respectively.

P(f*_{1,T_1+1} < f*_{2,T_2}) − P(f*_{1,T_1} < f*_{2,T_2}) < c/Π        (26)

P(f*_{2,T_2+1} < f*_{1,T_1}) − P(f*_{2,T_2} < f*_{1,T_1}) < c/Π        (27)
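As a worked example with the parameters used in the experiment of Section 3.4 (c = 10 tokens, Π = 200 tokens), Eqns. (26) and (27) imply that a player should stop as soon as one additional try is expected to raise his/her probability of winning by less than 10/200 = 0.05.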

4.3.2. Best Response Stopping Strategy

Games are generally analyzed using the best response or rational reaction of players to their competitors’ decisions. The

rational reaction set of a player, i, can be obtained by assuming the strategy played by the other players (represented by −i),

and determining the strategy for i that maximizes the net payoff. In this section, we determine the best response stopping

strategy for a player. Since the interaction between the two decision makers is primarily in terms of the comparison of their

final solutions, we use the final solution of the other player in determining the rational reaction. The final solution of the

other player is f*_{−i,T_{−i}}. For brevity, we denote it as f*_{−i}, and determine player i's stopping strategy in terms of f*_{−i}. Therefore, for a given f*_{−i}, since f*_{i,t} is precisely known to player i,

P(f*_{i,t} < f*_{−i}) =
    0, if f*_{i,t} > f*_{−i}
    1, if f*_{i,t} < f*_{−i}        (28)

On the other hand, based on Eqn. (17),

P(f*_{i,t+1} < f*_{−i}) =
    P(f*_{i,t} − max{f*_{i,t} − f(x_{i,t+1}), 0} < f*_{−i}), if f*_{i,t} > f*_{−i}
    1, if f*_{i,t} < f*_{−i}        (29)

The first case can be split into two sub-cases, depending on whether f*_{i,t} > f(x_{i,t+1}) or not:

P(f*_{i,t+1} < f*_{−i}) =
    P(f(x_{i,t+1}) < f*_{−i}), if f*_{i,t} > f*_{−i} and f*_{i,t} > f(x_{i,t+1})
    P(f*_{i,t} < f*_{−i}), if f*_{i,t} > f*_{−i} and f*_{i,t} < f(x_{i,t+1})
    1, if f*_{i,t} < f*_{−i}        (30)

P(f*_{i,t} < f*_{−i}) in the second case (for f*_{i,t} > f*_{−i}) is 0. Therefore,

P(f*_{i,t+1} < f*_{−i}) =
    P(f(x_{i,t+1}) < f*_{−i}), if f*_{i,t} > f*_{−i}
    1, if f*_{i,t} < f*_{−i}        (31)

Here, f(x_{i,t+1}) is Gaussian with mean µ(x_{i,t+1} | z_{i,t}) and variance v²(x_{i,t+1}; σ | z_{i,t}). Therefore,

P(f*_{i,t+1} < f*_{−i}) =
    φ((f*_{−i} − µ(x_{i,t+1} | z_{i,t})) / v(x_{i,t+1}; σ | z_{i,t})), if f*_{i,t} > f*_{−i}
    1, if f*_{i,t} < f*_{−i}        (32)

Substituting Eqns. (28) and (32) in Eqn. (25), we get the stopping criterion as

stop if:
    φ((f*_{−i} − µ(x_{i,t+1} | z_{i,t})) / v(x_{i,t+1}; σ | z_{i,t})) < c/Π  and  f*_{i,t} > f*_{−i},
    or
    f*_{i,t} < f*_{−i}        (33)


Intuitively, the first condition refers to the case where the current solution is not as good as the other player's, and the expected improvement in the payoff from an additional try is lower than its cost. The second condition refers to the scenario where the current solution is already better than the best final solution of the other player, due to which player i should stop. Using these

conditions, player i can determine whether to stop or to continue, given the solution quality of the other player. This is the

best response stopping criterion for the players.
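A direct Python transcription of Eqn. (33) is sketched below, under the simplifying assumption that the player holds a point belief about the opponent's final solution f*_{−i}; the numerical inputs are hypothetical:

    import math

    def norm_cdf(u):
        return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

    def should_stop(f_best_own, f_opp_belief, mean_next, v_next, cost, prize):
        # Best-response stopping rule of Eqn. (33) for a minimization contest.
        # f_best_own: player's current best value f*_{i,t}
        # f_opp_belief: believed final solution of the opponent, f*_{-i}
        # mean_next, v_next: Wiener-model mean and std. dev. at the candidate x_{t+1}
        if f_best_own < f_opp_belief:
            return True  # second condition: already believed to be winning
        p_overtake = norm_cdf((f_opp_belief - mean_next) / v_next)
        return p_overtake < cost / prize  # first condition: not worth another try

    # Hypothetical numbers with c = 10 and Prize = 200 (threshold c/Prize = 0.05)
    print(should_stop(0.3, 0.1, 0.25, 0.2, 10, 200))  # -> False: keep sampling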

4.4. Utility of the Model for Design Research and Practice

The model is an idealized mathematical representation of rational decision makers participating in a contest. It has three

main uses in design research and practice. First, from the standpoint of design research, the model can be used as a baseline

normative model for experimental studies on design decision making. In conjunction with human subject experiments, the

model can be used to identify decision makers’ strategies and their beliefs about the opponents’ solution quality. That is how

the model is used in the rest of this paper. From the standpoint of design practice, the second use of the model is that it can

help participants in design contests (such as crowdsourcing contests) in making rational decisions about what information to

acquire, and when to stop acquiring information considering their beliefs about the outcomes of the other participants. Third,

the model can help contest designers in estimating the outcomes of a contest, and in making decisions about a contest such

as the prize amount.

To evaluate the utility of the model in practical design scenarios, let us consider the core ingredients of the model:

(i) representing the unknown function as a random process,

(ii) the choice of the next point in the design space, and

(iii) the best response stopping strategy.

In general, these three ingredients are applicable to all design scenarios where the solution quality is only dependent

on the cost incurred during the design process (i.e., cost of computational and physical experiments). This is true in crowd-

sourcing challenges where the problem is well defined and the knowledge needed for solving the problem is widely available.

Examples of this are the airline bearing bracket challenge [42] and GE jet engine bracket challenge [43] on GrabCAD. Other

examples in the software development domain are development challenges and data science challenges on Topcoder [24].

For such scenarios, these three ingredients help in addressing the limitations of existing game-theoretic models discussed in

Section 2.1 by eliminating the need for ad hoc contest success functions. The model accounts for the sequential nature of

design, where designers make decisions about what information to acquire throughout the design process, and update their

beliefs about the mapping between the design space and the performance space. Therefore, the model is a more realistic

description of a general design process than the ones available in the literature on contest theory.

The use of the specific stochastic process for deriving the information acquisition criterion (Eqn. 14) and the stopping

criterion (Eqn. 33) does place some restrictions on the broader applicability of the model, but these can be surmounted by

replacing the Wiener process model with an appropriate stochastic model. Specifically, Equations 4 and 5 are based on the

assumption that the mapping between the design space (x) and the performance space f (x) is modeled as a Wiener process.

The model of the function presented in Section 4.1 can be adapted to higher dimensional problems using other stochastic

processes (e.g., Gaussian Process regression). The choice of the next point relies on the myopic expected improvement crite-


rion, which is well accepted in engineering design, but can be further refined by considering multiple-step-ahead strategies.

Finally, the best response strategy is derived for a two-person game. It can be extended to n > 2 players by replacing the

final solution of the opponent f ∗−i with the best solution from all the n players.

In summary, this model is an idealization of general design-under-competition scenarios. It can be refined and par-

ticularized for specific design scenarios by accounting for the characteristics of design problems, the participants, and the

information available to the participants. However, this particularization is outside the scope of this paper. In the rest of the paper, we focus on the first use of the model: understanding how humans make these decisions under competition.

5 Results from the Behavioral Experiments

In this section, we use the analytical model in conjunction with the data collected from the experiment described in

Section 3.4. The focus is on gaining insights about the five areas that existing contest models fail to address (see Section 2.3):

design-specific strategies, evolution of strategies, mutual rationality, learning over periods, and heterogeneity. We assume

that individuals make decisions using the model described in Section 4. Specifically, it is assumed that

1. individuals model the function as a Wiener process, with mean and variance given by Eqns. (4) and (5),

2. individuals use the expected improvement maximization criterion in Eqn. (14) for selecting the next point, and

3. individuals stop when the expected improvement in their net payoff (πi) is negative, resulting in the stopping criterion

of Eqn. (33).

The discussion of results is divided into two sections based on the two decisions: choosing the next point, and stopping. The

overall approach for analysis of results is as follows. The data about individual decisions are used to estimate two parameters

in this model: σ and f ∗−i. Parameter σ is estimated based on individual decisions on the next point, whereas f ∗−i is estimated

based on the decision to stop further information acquisition. Unless specified, the level of significance used in this paper is

α = 0.1.
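For concreteness, the σ implied by one observed choice of x_{t+1} can be recovered by searching for the σ whose expected-improvement maximizer (Eqn. 14) is closest to the observed point. The following sketch reuses the expected_gain function sketched in Section 4.2 and relies on simple grid searches; the actual estimation may differ in its numerical details:

    def implied_sigma(x_obs, x_prev, x_next, f_prev, f_next, f_best):
        # Grid-search the sigma for which the maximizer of Eqn. (14),
        # restricted to [x_prev, x_next], is closest to the observed x_{t+1}.
        sigma_grid = [0.5 * k for k in range(1, 101)]   # sigma in (0, 50]
        x_grid = [x_prev + (x_next - x_prev) * i / 500.0 for i in range(1, 500)]

        def model_choice(sigma):
            return max(x_grid, key=lambda x: expected_gain(
                x, x_prev, x_next, f_prev, f_next, f_best, sigma))

        return min(sigma_grid, key=lambda s: abs(model_choice(s) - x_obs))

    # Hypothetical observation: the player sampled x = 0.7 between the points
    # (0, 1) and (1, 0), with current best value f*_t = 0
    print(implied_sigma(0.7, 0.0, 1.0, 1.0, 0.0, 0.0))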

5.1. Decisions on Which Next Point to Choose

5.1.1. Strategies

The parameter σ in the model quantifies a decision maker’s uncertainty about the function between two known data

points. The amount of uncertainty has an impact on the decision about the next point (xt+1). Before analyzing the experi-

mental data, let us consider the impact of uncertainty on the search strategy. The effect of σ on the choice of the next point

xt+1 for the illustrative example used in Figure 3 is calculated using Eqn. 14, and plotted in Figure 4. If uncertainty about the

function is low (i.e., low σ), then expected improvement maximization using Eqn. (14) results in the next point (xt+1) that is

closer to the current best solution x∗t = argminx{ f j−1, f j}. This corresponds to the exploitation strategy commonly adopted in

approaches for local search. As σ increases, xt+1 moves away from the current best solution, towards the inferior solution.

At the extreme, as σ → ∞, the next point x_{t+1} converges to the mid point x̂_j = (x_{j−1} + x_j)/2. This corresponds to the exploration

strategy commonly adopted in bisection approaches. Note that for σ≥ 0, the next point xt+1 is always between the mid point

x̂ j and the current best solution x∗t . This relationship between σ and strategy can be used to infer the decision strategies of


individuals.

Fig. 4: Effect of σ on the next point x_{t+1}, for x_{j−1} = 0, x*_t = x_j = 1, x̂_j = 0.5

Now, consider the results from the experiment based on the function optimization game, discussed in Section 3.4. To

visualize how the xt+1 values from the experiment are distributed within the interval of existing points, the xt+1 values are

normalized to [0,1] such that 0 corresponds to x∗t and 1 corresponds to argmaxx{ f j−1, f j}, and the probability distribution is

plotted in Figure 5. Additionally, using the actual decisions made by the individuals about the next point (xt+1), we calculate

the σ that solves Eqn. (14) for each try. This results in a dataset of σ values. The probability distribution of σ is plotted in

Figure 6, with the dashed line corresponding to an exponential distribution.

Fig. 5: Cumulative distribution of normalized x_{t+1} with exponential fit

Fig. 6: Cumulative distribution of σ with exponential fit


Through the statistical analysis of the σ dataset, the following observations are made. First, it is observed in Figure 5

that with 91.77% probability, the normalized xt+1 values lie between [0.0,0.5], which corresponds to the range between x∗t

and x̂ j. This indicates that most of the decisions about xt+1 are consistent with the overall Wiener process model. This

provides confidence about the appropriateness of the model. The rest (8.23%) of the points, corresponding to normalized

xt+1 > 0.5 are ignored because they are inconsistent with the model. According to the model, these points would result in a

negative value of expected improvement, and hence, would never be chosen because it would be against the expected payoff

maximization principle.

Second, the σ values are found to be exponentially distributed (see Figure 6) with parameter λ = 0.045. Using this

distribution, we estimate that 15% of the σ values are below 3.61, which corresponds to the strategy of exploitation, and about

15% are above 42.15, corresponding to the exploration strategy. This indicates that the participants use both exploration and

exploitation strategies in their decision making.
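The quantiles quoted above follow directly from the fitted exponential distribution; a quick numerical check in Python:

    import math

    lam = 0.045                        # fitted rate parameter for the sigma values
    print(-math.log(1 - 0.15) / lam)   # 15th percentile: about 3.61
    print(-math.log(0.15) / lam)       # 85th percentile: about 42.15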

5.1.2. Analysis of Strategies using Linear Mixed Effects Regression

We analyze the effects of different variables on the strategy. Participants differ in how different factors affect their

strategies. To analyze these differences, we use a linear mixed effects regression model [44]. As in general linear models,

the dependent variable is modeled as a function of a set of independent variables, plus an error term. In the linear mixed effects model, the coefficients (effects) of these explanatory variables are not constant. Instead, a specific type of distribution is

assumed for quantification of the heterogeneity among subjects. The general form of the model in matrix notation is:

y = Xβ + Zγ + ε        (34)

where y is an N × 1 column vector of the outcome variable; X is an N × p matrix of the p explanatory variables; β is a p × 1 column vector of the fixed-effects regression coefficients; Z is the N × q design matrix for the q variables assumed to have random effects; γ is a q × 1 vector of the random effects; and ε is an N × 1 column vector of the residuals. ε is assumed to be

multivariate normally distributed.

The fixed effect coefficients β are directly estimated as a column vector. The random effects γ ∼ N(0, G), where G is the

covariance matrix of the random effects. The random effects are modeled as deviations from the fixed effect, and the variance

in G is estimated. The estimated mean and the corresponding t-statistic of β, and the standard deviation of the random effect

γ are listed in Table 1 and discussed in Sections 5.1.3 and 5.1.4.
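A model of this form can be fitted, for example, with the mixedlm routine in the Python statsmodels package. The sketch below assumes a hypothetical data frame df with one row per try and illustrative column names (sigma, cost_id, tries, period, gap for x_j − x_{j−1}, and subject); it is not the original analysis script:

    import statsmodels.formula.api as smf

    # Fixed effects: cost_id, tries, period, and the interactions reported in
    # Table 1; random effects: per-subject deviations in the intercept,
    # cost_id, tries, and period.  `df` is the hypothetical per-try data frame.
    model = smf.mixedlm(
        "sigma ~ cost_id + tries + period"
        " + cost_id:tries + cost_id:gap + cost_id:period",
        data=df, groups=df["subject"],
        re_formula="~cost_id + tries + period")
    result = model.fit()
    print(result.summary())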

5.1.3. Evolution of Search Strategies with Tries and Periods

It is observed that as the number of tries increases, the median of σ decreases, along with the interquartile range (IQR),

as shown in Figure 7. This indicates that as individuals acquire more information about the function, they shift their strategy

from exploration to exploitation. This is also quantified by the negative value (−3.01) of the associated parameter in the fixed

effect of the model shown in Table 1. In the random effects, the standard deviation corresponding to the tries (t) is 17.72.

This can be interpreted as follows. For some people, as t increases, σ decreases; but for others, increase in t corresponds to

increase in σ, which indicates some heterogeneity in the participants’ strategies. Specifically, 40 out of the 44 participants

shift their strategy from exploration to exploitation, while the rest shift toward greater exploration as the number of tries increases.


Table 1: Linear mixed effects regression of σ

Fixed Effects
  Variable                    Estimated parameter   t-statistic
  Intercept                   35.80                 11.40
  Cost ID (= 1)               5.77                  1.225
  Tries (t)                   −3.01                 −1.11
  Period                      0.014                 0.022
  CostID × tries              −0.87                 −1.92
  CostID × (x_j − x_{j−1})    0.047                 1.57
  CostID × period             −0.21                 −1.35

Random Effects
  Variable                    Std. Dev.
  Intercept                   16.71
  CostID (= 1)                24.05
  Tries (t)                   17.72
  Period                      4.18
  Residual                    14.84


Fig. 7: Box plot of σ for different tries

The effect of period is not significant. This indicates that for most people, the period number does not affect the σ values, and therefore, their strategy for selecting the next point. Additionally, based on the random effects, the deviation

among people (= 4.18) is also small compared to the other factors.

The interaction effect between cost level and tries is statistically significant. This means the effect of number of tries

on σ is different at different cost settings. On average, the number of tries in the high cost setting has greater effect on the

values of σ than in the low cost setting. On average, with a unit increase in tries, t, the decrease in σ in the high cost setting

is 0.87 more than the decrease in the low cost setting. This reflects that in the high cost setting, participants shift towards the


exploitation strategy faster.

5.1.4. Effect of Cost on Strategies

As discussed in Section 3.4, the experiments were carried out for two cost settings: low cost (ID = 0), and high cost

(ID = 1). The low cost setting is used as the reference level in the regression model. The effects of cost on σ are shown

in a box plot in Figure 8. On average, σ is higher in the high cost setting than in the low cost setting. From the low cost

to high cost setting, the mean of σ increases from 13.45 to 20.15. This indicates that on average, participants tend towards

greater exploration as the cost increases. However, the large standard deviation (= 24.05) of the cost term in the random effects indicates substantial differences among participants. By estimating the cost parameter for each individual, it is found that 34 out of 44 individuals have positive values. This implies that most participants shift their search strategy towards exploration as the cost setting changes from low to high.

Fig. 8: Box plot of σ for different cost levels (x-axis: cost level, 0 = low, 1 = high; y-axis: σ from 0 to 50)

5.2. Decisions on Whether to Stop

Within game theory, the solution of a game is generally defined by placing a number of assumptions on the players’

knowledge, and their assumptions about the other players' knowledge. For example, Nash equilibrium is defined by assuming common knowledge, rationality of participants, infinite computation capability, and mutual consistency [12]. Such

assumptions are also made within contest models described in Section 2.1 where player i assumes that the other player, −i,

will play the best response to the strategy (i.e., effort) played by player i. In the equilibrium stopping criterion presented in

Section 4.3.1, Players 1 and 2 are assumed to have knowledge of the competitor’s quality at stopping. However, studies in

behavioral economics have shown that real people may not converge to the equilibrium solution [45]. It has also been shown

that subjects often ignore the rationality of their opponents. In this section, we focus on understanding how individuals form

beliefs about the decisions of other players, how learning plays a role in forming these beliefs, and the differences among

different players.


5.2.1. Beliefs about the Opponent’s Solution Quality

The stopping criterion for a two-player scenario is presented in Eqn. (33). In this equation, f ∗−i is the final solution of the

opponent. Since the stopping decision for player i is driven by f ∗−i, which is unknown to i, we assume that player i forms a belief about f ∗−i and makes stopping decisions based on that assumed value. The parameter f ∗−i can be used to provide potential

cognitive explanations for how a player guides his/her decisions using the belief about the opponent’s solution quality. For

example, according to a heuristic called “threshold stopping rule” [46], a player stops information acquisition as soon as

he/she reaches a pre-determined solution quality. In the case of competition, f ∗−i may be used as the target solution quality.

Other such heuristics include the difference threshold rule, mental list rule, and representational stability rule [46].
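As an illustration, a minimal sketch of the threshold stopping rule in this setting might look as follows, assuming (as in this game) that lower function values are better; the names and values here are hypothetical:

```python
# Threshold stopping rule: stop information acquisition as soon as the
# best value found so far reaches the believed opponent quality f_target.
def threshold_stopping(observed_values, f_target):
    """Return the index of the try at which the player stops, or None."""
    best = float("inf")
    for i, f in enumerate(observed_values):
        best = min(best, f)
        if best <= f_target:          # current best meets the target quality
            return i
    return None                       # target never reached

print(threshold_stopping([4.0, 2.5, 1.2, 0.9], f_target=1.0))  # -> 3
```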

Using the data about whether a player decided to continue or stop during a specific try, we estimate the value of f ∗−i. This

process results in a range within which f ∗−i lies because Eqn. (33) consists of inequalities. The ranges of f ∗−i are calculated

for the last two steps, Ti−1 and Ti, where the player decides not to stop at Ti−1, but decides to stop at Ti. We use σ estimated

in the previous section to estimate f ∗−i.

To effectively compare the f ∗−i values across different sessions, we translate them by subtracting the actual optimum of the function, i.e., ∆ f ∗−i = ( f ∗−i − fopt). The obtained translated range is represented by its lower and upper bounds, $[\underline{\Delta f^{*}_{-i}},\ \overline{\Delta f^{*}_{-i}}]$. The midpoint of the range is denoted by:

$$\hat{f}^{*}_{-i} = \frac{\underline{\Delta f^{*}_{-i}} + \overline{\Delta f^{*}_{-i}}}{2}. \tag{35}$$

Through f̂ ∗−i, we analyze the players’ assumptions about the quality of the opponents’ solution.
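A direct illustration of Eqn. (35), with hypothetical bound values standing in for the inferred range:

```python
# Translate the inferred bounds on the opponent's assumed quality by the
# true optimum and take the midpoint, per Eqn. (35).
def belief_midpoint(f_lower, f_upper, f_opt):
    d_lower, d_upper = f_lower - f_opt, f_upper - f_opt
    return 0.5 * (d_lower + d_upper)

print(belief_midpoint(f_lower=-130.0, f_upper=-90.0, f_opt=-100.0))  # -> -10.0
```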

It is observed that over 97.5% of the f̂ ∗−i values are less than 0. This indicates that in most periods, the participants' assumed solution quality of their opponents at stopping is at least as good as, or better than, the real optimum (i.e., f ∗−i ≤ fopt). To gain a better understanding of the distribution, we plot the cumulative distribution of − f̂ ∗−i and fit an exponential distribution, as shown in Figure 9. Values near 0 occur most frequently; for about 20% of the values, the distance between the assumed optimum and the real optimum is within 1.0. The median distance is 17.46. This indicates that

in many cases, their assumed solution of the opponent was very close to the actual optimum. To analyze the impact of

different variables on the assumed f ∗−i, we fit a linear mixed effects model. The results are shown in Table 2, and discussed

in Section 5.2.2.

Fig. 9: Cumulative distribution of − f̂ ∗−i with exponential fit (x-axis: − f̂ ∗−i from 0 to 350; y-axis: cumulative probability from 0.05 to 0.9995)
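The fit in Figure 9 can be reproduced in outline with scipy; this is an assumed workflow, not the authors' code, and the sample below is a placeholder for the observed − f̂ ∗−i values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
neg_f_hat = rng.exponential(scale=25.0, size=400)   # placeholder sample

# Fit an exponential distribution with the location pinned at zero.
loc, scale = stats.expon.fit(neg_f_hat, floc=0)
print(f"fitted scale = {scale:.2f}, "
      f"median = {stats.expon.median(loc=loc, scale=scale):.2f}")
```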


Table 2: Linear mixed effects regression of f̂ ∗−i

Fixed Effects
Variable                    Estimate   t-statistic
Intercept                   −73.67      −8.44
CostID (= 1)                 −9.74      −0.84
Period                        3.00       4.49
(f ∗i − fopt)               −16.63      −2.26
CostID × (f ∗i − fopt)        0.23       0.56
CostID × Period              −0.26      −0.28

Random Effects
Variable          Std. Dev.
Intercept          43.30
CostID (= 1)       44.21
Period              2.21
(f ∗i − fopt)      45.68
Residual           48.85

5.2.2. Effect of Period and Cost on f̂ ∗−i

Period: As the period increases, the median of f̂ ∗−i gets closer to 0 (see Figure 10). This means that participants' assumptions about the opponent's best value converge to the real optimum, showing that learning plays an important role in shaping people's judgments of their opponents' performance. During the first few periods, most of the values of f̂ ∗−i are significantly less than 0, indicating that participants are still trying to understand the game; as the period increases, they become more familiar with it. The effect of period is quantified using the regression model. The fixed-effect parameter for period is 3.00 with a t-statistic of 4.49, indicating that this effect is significant, whereas the standard deviation in the random effects is 2.21, indicating that the variation among participants is small.

Fig. 10: Box plot of f̂ ∗−i for different periods (x-axis: period from 1 to 15; y-axis: average value of f̂ ∗−i from −300 to 200)

Cost: From Table 2, it is observed that the fixed effect of cost on f̂ ∗−i is not significant, since the t-statistic for cost is −0.84. The non-significance is likely due to the parameter's random effects among participants: the large standard deviation in the random effects (= 44.21) indicates that the effect of cost differs across individuals. A potential explanation for these differences is the payoff function assumed in the normative model. It is assumed that each player maximizes the expected prize (see Eqn. 22). This payoff function does not account for risk attitudes, which may differ across participants, nor for behavioral characteristics of human beings, such as different attitudes towards gains and losses. These attitudes towards risk can be accounted for by replacing the payoff functions with utility functions and allowing for differences among participants.
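For instance, one standard illustrative choice (not part of the model in this paper) is a constant relative risk aversion (CRRA) utility with a participant-specific parameter r_i, which recovers risk neutrality as r_i → 0:

$$u_i(w) = \begin{cases} \dfrac{w^{1-r_i} - 1}{1 - r_i}, & r_i \neq 1, \\[1ex] \ln w, & r_i = 1, \end{cases}$$

where w is the monetary payoff.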

6. Closing Comments

6.1. Contributions

From the standpoint of design research, this paper is a step towards using game theory to understand design scenarios. Specifically, it is a step towards understanding how sequential information acquisition decisions for design under competition should be made (the normative aspect), and how they are actually made by individuals (the descriptive aspect). To address the normative aspect, we present a mathematical model based on concepts from global optimization, and extend it to the case of design competition. The model is used in conjunction with the data generated from the behavioral experiment to gain insights about the descriptive aspect. In addition to the contributions to the theoretical analysis of design under competition, one of the primary contributions of the paper is a systematic approach for analyzing the experimental data with the model to learn about participants' decision-making strategies and their beliefs about their opponents' solution quality.

Some of the key observations about individual strategies are as follows: (i) individuals shift their strategy from exploration to exploitation as they acquire more information about the function, (ii) heterogeneity exists in the effects of different parameters on strategies, (iii) most participants tend towards greater exploration as cost increases, (iv) individuals' search strategies for information acquisition are consistent across periods, and (v) in most cases, the participants' assumed solution of their opponents at stopping is at least as good as the real optimum. These insights cannot be gained using existing models from contest theory, which focus on aggregate, population-level parameters. These insights can have implications

for designing crowdsourcing contests. For example, if the contest designer’s goal is to encourage exploration of the design

space (e.g., to better understand the potential solutions and to generate diverse solutions), our experimental results indicate

that the cost of information acquisition should be high and information provided about the solution space should be low.

Similarly, due to the learning effects, individuals’ beliefs about competitors converge to those generally assumed in game

theory (such as at the Nash equilibrium) as they play the game multiple times. Hence, repetition is helpful in achieving

behavior that is closer to that predicted by analytical models.

6.2. Limitations

While the simplicity of the game helps in experimentation, it also comes with limitations in extending results to real

design scenarios. First, this paper only addresses decision-making activities in design. It does not capture other activities

in the design process, such as understanding customer requirements, idea generation, problem partitioning, and concept

refinement. It is well established within the design literature that decision making does not capture all the tasks that a


designer performs [48]. Second, the game presented in this paper is based on a parametric design problem. Therefore, it

does not account for non-parametric design problems, particularly those in the early stages of design. Third, the game only

accounts for monetary rewards. However, humans are not only driven by extrinsic monetary rewards but also by intrinsic

motivations such as inherent satisfaction in completing a task, recognition, and obligation to the community [50]. Fourth, the

study is focused on individual decision making only. It does not account for group decision making or team dynamics. Fifth,

the extensibility of analysis results is limited by the assumptions in the model (e.g., Wiener process with specified mean and

variance, pre-defined information acquisition criterion, myopic stopping criterion, etc.). Finally, as discussed in Section 4.4,

the proposed model is applicable only to crowdsourcing scenarios where the solution quality depends solely on effort (cost of design). Since all players are assumed to be similar in expertise, knowledge, and past experience, the proposed model is not suitable for crowdsourcing scenarios where the goal is to obtain diverse solutions in response to a set of requirements.

Some of these limitations are attributed to the specific design of the experiment, and can be addressed in future research.

For example, the game can be extended to other forms of contests, and particularized to specific design problems by replacing f(x) with physics-based models. Future research opportunities include consideration of expertise and design-specific

knowledge, team formation, and application to more realistic design problems. On the other hand, some of these limitations

are due to the inherent nature of laboratory experimentation. To gain a holistic understanding of decision making in design,

the results from this experiment can be used in conjunction with field experiments and naturally occurring data.

Acknowledgments

The authors gratefully acknowledge financial support from the National Science Foundation through NSF CMMI grant

1400050. The authors thank the anonymous reviewers for their helpful and constructive comments that contributed to

significantly improving the paper.

References

[1] Moore, R. A., Romero, D. A., and Paredis, C. J. J., 2014. “Value-based global optimization”. Journal of Mechanical

Design, 136(4), jan, p. 041003.

[2] Thompson, S. C., and Paredis, C. J. J., 2010. “An investigation into the decision analysis of design process decisions”.

Journal of Mechanical Design, 132(12), p. 121009.

[3] Panchal, J. H., Paredis, C. J., Allen, J. K., and Mistree, F., 2008. “A value-of-information based approach to simulation

model refinement”. Engineering Optimization, 40(3), pp. 223–251.

[4] Mockus, J., 2013. Bayesian Approach to Global Optimization: Theory and Applications. Mathematics and its Appli-

cations. Kluwer Academic Publishers, The Netherlands.

[5] Panchal, J. H., 2015. “Using crowds in engineering design – towards a holistic framework”. In Proceedings of the

International Conference on Engineering Design 2015 (ICED 2015), The Design Society.

[6] Lewis, K., and Mistree, F., 1998. “Collaborative, sequential and isolated decisions in design”. ASME Journal of

Mechanical Design, 120(4), pp. 643–652.


[7] Chanron, V., and Lewis, K., 2005. “A study of convergence in decentralized design processes”. Research in Engineering

Design, 16(3), pp. 133–145.

[8] Ciucci, F., Honda, T., and Yang, M. C., 2011. “An information-passing strategy for achieving Pareto optimality in the

design of complex systems”. Research in Engineering Design, 23(1), pp. 71–83.

[9] Takai, S., 2010. “A game-theoretic model of collaboration in engineering design”. Journal of Mechanical Design,

132(5), p. 051005.

[10] Fernández, M. G., Panchal, J. H., Allen, J. K., and Mistree, F., 2005. “Concise interactions and effective management of

shared design spaces: Moving beyond strategic collaboration towards co-design”. In ASME 2005 International Design

Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 231–248.

[11] Frischknecht, B. D., Whitefoot, K., and Papalambros, P. Y., 2010. “On the suitability of econometric demand models

in design for market systems”. Journal of Mechanical Design, 132(12), p. 121007.

[12] Fudenberg, D., and Tirole, J., 1991. Game Theory. MIT Press, Cambridge, MA. Translated into Chinese by Renmin University Press, Beijing, China.

[13] Skaperdas, S., 1996. “Contest success functions”. Economic Theory, 7(2), pp. 283–290.

[14] Jia, H., Skaperdas, S., and Vaidya, S., 2013. “Contest functions: Theoretical foundations and issues in estimation”.

International Journal of Industrial Organization, 31(3), pp. 211–222.

[15] Skaperdas, S., 1998. “On the formation of alliances in conflict and contests”. Public Choice, 96(1/2), pp. 25–42.

[16] Nash, J. F., 1950. “Equilibrium points in n-person games”. Proc. of the National Academy of Sciences, 36, pp. 48–49.

[17] Sha, Z., Kannan, K. N., and Panchal, J. H., 2015. “Behavioral experimentation and game theory in engineering systems

design”. ASME Journal of Mechanical Design, 137(5), p. 051405.

[18] Konrad, K. A., 2009. Strategy and Dynamics in Contests. Oxford University Press, New York, NY.

[19] Taylor, C. R., 1995. “Digging for golden carrots: An analysis of research tournaments”. The American Economic

Review, 85(4), pp. 872–890.

[20] Fullerton, R., and McAfee, P., 1999. “Auctioning entry into tournaments”. Journal of Political Economy, 107(3),

pp. 573–605.

[21] Terwiesch, C., and Xu, Y., 2008. “Innovation contests, open innovation, and multiagent problem solving”. Manage.

Sci., 54(9), pp. 1529–1543.

[22] Archak, N., 2010. “Money, glory and cheap talk: Analyzing strategic behavior of contestants in simultaneous crowdsourcing contests on TopCoder.com”. In Proceedings of the 19th International Conference on World Wide Web (WWW ’10), ACM.

[23] Boudreau, K. J., Lacetera, N., and Lakhani, K. R., 2011. “Incentives and problem uncertainty in innovation contests:

An empirical analysis”. Management Science, 57(5), pp. 843–863.

[24] TopCoder, 2017. Topcoder. https://www.topcoder.com/getting-started/.

[25] Sheremeta, R. M., 2011. “Contest design: An experimental investigation”. Economic Inquiry, 49(2), pp. 573–590.

[26] Tversky, A., and Kahneman, D., 1974. “Judgment under uncertainty: Heuristics and biases”. Science, 185(4157),

pp. 1124–1131.


[27] Harrison, G. W., and List, J. A., 2004. “Field experiments”. Journal of Economic Literature, 42(4), dec, pp. 1009–

1055.

[28] Levitt, S. D., and List, J. A., 2009. “Field experiments in economics: The past, the present, and the future”. European

Economic Review, 53(1), jan, pp. 1–18.

[29] Hirschi, N., and Frey, D., 2002. “Cognition and complexity: An experiment on the effect of coupling in parameter

design”. Research in Engineering Design, 13(3), pp. 123–131.

[30] Grogan, P. T., and de Weck, O. L., 2016. “Collaboration and complexity: an experiment on the effect of multi-actor

coupled design”. Research in Engineering Design, 27(3), pp. 221–235.

[31] Flager, F., Gerber, D. J., and Kallman, B., 2014. “Measuring the impact of scale and coupling on solution quality for

building design problems”. Design Studies, 35(2), pp. 180–199.

[32] McComb, C., Cagan, J., and Kotovsky, K., 2016. “Utilizing Markov chains to understand operation sequencing in

design tasks”. In Design Computing and Cognition 2016, J. S. Gero, ed., Springer, pp. 421–440.

[33] Yao, H., and Ren, M. Y., 2016. “Impressionist: A 3D peekaboo game for crowdsourcing shape saliency”. In ASME 2016

International Design Engineering Technical Conferences and Computers and Information in Engineering Conference.

[34] Ren, Y., Bayrak, A. E., and Papalambros, P. Y., 2016. “EcoRacer: Game-based optimal electric vehicle design and

driver control using human players”. J. Mech. Des, 138(6), may, p. 061407.

[35] Grogan, P. T., and de Weck, O. L., 2015. “Interactive simulation games to assess federated satellite system concepts”.

In 2015 IEEE Aerospace Conference, Institute of Electrical & Electronics Engineers (IEEE).

[36] Papalambros, P. Y., and Wilde, D. J., 2000. Principles of Optimal Design: Modeling and Computation, 2nd ed. Cambridge University Press, New York.

[37] Fischbacher, U., 2007. “z-Tree: Zurich toolbox for ready-made economic experiments”. Experimental Economics,

10(2), pp. 171–178.

[38] Mockus, J., 1989. Bayesian Approach to Global Optimization. Springer Netherlands.

[39] Kushner, H. J., 1962. “A versatile stochastic model of a function of unknown and time varying form”. Journal of Mathematical Analysis and Applications, 5(1), pp. 150–167.

[40] Locatelli, M., 1997. “Bayesian algorithms for one-dimensional global optimization”. Journal of Global Optimization,

10(1), pp. 57–76.

[41] Powell, W. B., and Ryzhov, I. O., 2012. Optimal Learning. John Wiley & Sons, Inc., mar.

[42] GrabCAD, 2016. Airplane Bearing Bracket Challenge. https://grabcad.com/challenges/airplane-bearing-bracket-challenge.

[43] GrabCAD, 2013. GE jet engine bracket challenge. https://grabcad.com/challenges/ge-jet-engine-bracket-challenge.

[44] McLean, R. A., Sanders, W. L., and Stroup, W. W., 1991. “A unified approach to mixed linear models”. The American

Statistician, 45(1), feb, p. 54.

[45] Camerer, C. F., 1997. “Progress in behavioral game theory”. Journal of Economic Perspectives, 11(4), pp. 167–188.

[46] Browne, G. J., and Pitts, M. G., 2004. “Stopping rule use during information search in design problems”. Organizational Behavior and Human Decision Processes, 95(2), nov, pp. 208–224.

[47] Hatchuel, A., 2001. “Towards design theory and expandable rationality: The unfinished program of Herbert Simon”.

Journal of Management and Governance, 5(3), pp. 260–273.

[48] Hatchuel, A., and Weil, B., 2003. “A new approach of innovative design: An introduction to C-K theory”. In Proceedings

of the International Conference on Engineering Design 2003 (ICED 2003), The Design Society.

[49] Hatchuel, A., Reich, Y., Le Masson, P., Weil, B., and Kazakci, A., 2013. “Beyond models and decisions: Situating

design through generative functions”. In Proceedings of the International Conference on Engineering Design 2013

(ICED 2013), The Design Society.

[50] Lakhani, K. R., and Wolf, R., 2005. “Why hackers do what they do: Understanding motivation and effort in free/open

source software projects”. In Perspectives on Free and Open Source Software, J. Feller, B. Fitzgerald, S. Hissam, and

K. R. Lakhani, eds. MIT Press, Cambridge, ch. 1, pp. 3–22.


List of Tables

1 Linear mixed effects regression of σ
2 Linear mixed effects regression of f̂ ∗−i

List of Figures

1 Classification of experiments with human subjects (based on [28])
2 Sequence of decisions made by participants in the game
3 Illustration of the Wiener process model with input points (0,1) and (1,0), σ = 1.0
4 Effect of σ on the next point for x j−1 = 0, x∗t = x j = 1, x̂ j = 0.5
5 Cumulative distribution of normalized xt+1 with exponential fit
6 Cumulative distribution of σ with exponential fit
7 Box plot of σ for different tries
8 Box plot of σ for different cost levels
9 Cumulative distribution of − f̂ ∗−i with exponential fit
10 Box plot of f̂ ∗−i for different periods
