
Proceedings of the 28th Annual Hawaii International Conference on System Sciences - 1995

Effects of DSS, Modeling, and Exogenous Factors on Decision Quality and Confidence

Nancy Paule Melone Charles H. Lundquist College of Business

University of Oregon Eugene, OR 97403-1208

Internet: [email protected]

Lawrence Wai Chan CS First Boston

5 World Trade Center New York, NY 10048

One hundred eighteen undergraduate business students participated in an experiment which required that they manage production and workforce with the objective of minimizing costs using one of two forms of computer-based DSS. Using a 2x2 design, subjects were randomly assigned to one of four experimental conditions: (1) presence or absence of a modeling feature crossed with (2) a "good" or "bad" decision environment implemented as decreasing or increasing trend sales. Results indicate that subjects with access to a modeling feature performed better than those without such capability; subjects' confidence, however, was not correlated with environmental effects or the availability of a modeling feature. We also found a strong correlation between confidence and actual decision quality for all conditions. All these results are in the opposite direction from those of a previously published study using the same task but a different DSS. Finally, we found no evidence of overconfidence on average in any of our conditions.

1: Introduction

Do decision support systems (DSS) improve decision quality? The scientific evidence about this question is mixed [e.g., 1; 2; 3]. Several reasons why DSS may have equivocal effects have been suggested: (1) "... all of the

Timothy W. McGuire Charles H. Lundquist College of Business

University of Oregon Eugene, OR 97403-1208

Internet: [email protected]

Timothy A. Gerwing The Prudential Insurance Co. of America

Asset Management Co., Inc. 71 Hanover Road

Florham Park, NJ 07932-1597

studies where performance scores were based on subjective ratings did not show a significant effect of a DSS"; (2) "of the six studies showing no improvement in performance due to use of a computer-based DSS, four were based on a one-time measurement of performance;" (3) "in most studies reviewed here, the decision aids were developed by the researchers. The subjects could use a 'black box' to help them in decision making. In limited cases, it was possible for the subjects to investigate alternate scenarios..., but the users could not see the model and its underlying assumptions." [3, pp. 144-145].

Overconfidence is a pervasive finding in the psychological literature [e.g., 4; 5; 6; 7]. Unlike those in psychology, the findings with respect to confidence and the availability of DSS are inconclusive. For example, Cats-Baril and Huber [8] state in their literature review that "in fact, researchers have found no difference in the confidence placed by subjects in their decisions regardless of whether decision aids were presented [9; 10; 11]." While in their own study these authors found no main effect for medium (i.e., pencil-and-paper versus computer) on confidence, they did find that subjects who used heuristics had a lower average level of confidence than subjects who did not use them. Melone et al. [12] also found no significant effect of DSS on confidence. In contrast, Kotteman, Davis, and Remus [2] found that subjects who used what-if analyses possessed inflated confidence regarding the quality of their decisions relative to subjects without access to a DSS; they attribute their findings of overconfidence to the illusion-of-control phenomenon [13; 14].


Proceedings of the 28th Hawaii International Conference on System Sciences (HICSS '95) 1060-3425/95 $10.00 © 1995 IEEE


We speculate that the equivocality of the effectiveness of DSS is due in part to the intrinsic potential of the DSS to improve objectively the performance of the decision maker. We explore this conjecture by using the same production problem as, but a different DSS from, Kotteman, Davis, and Remus [2] to study the effectiveness of a DSS with and without a modeling capability. Subjects in the "no modeling" condition are given a DSS that provides historical data in graphical form. In the "modeling" condition, subjects are provided the same historical data as subjects in the no modeling condition and also are provided a what-if modeling tool. The modeling tool used in the present study allows subjects to test parametric specifications on historical data, whereas the "what-if" tool provided in the Kotteman, Davis, and Remus [2] study allowed subjects to test only specific current decisions.

Results of an earlier study by Melone et al. [12] suggested that confidence might be affected by extraneous factors beyond the decision maker's control. To explore this hypothesis in the present study, we cross the two DSS conditions with two environmental conditions: (1) increasing sales trend, and (2) decreasing sales trend.

Motivated by these issues, the present study addresses the following questions:

• Do subjects using a modeling feature and graphical trend data make better decisions than those who have only graphical trend data?

• Does access to a modeling feature inflate confidence in the quality of the resulting decision?

• Are subjects well calibrated? Does their level of confidence correspond to the quality of their decision?

• Can exogenous factors beyond the control of the subject affect confidence in the decision?

2: Method

Subjects were 118 undergraduate students currently enrolled in upper-division industrial management courses. Of this group, 76 were males and 42 were females. The industrial management curriculum is recognized as one of the more quantitatively oriented in the country, and the campus has a national reputation as a leader in computing. Students are comfortable with computing, familiar with general production problems, and comfortable with concepts taught in probability and statistics, including confidence intervals. Subjects received one unit of credit toward a general course requirement for participation in a business research project. The top performer in each

condition was awarded a cash prize of $25.00. The study employed a 2x2 design in which subjects were randomly assigned to one of four experimental conditions: (1) presence or absence of a modeling feature crossed with (2) a "good" or "bad" decision environment implemented as decreasing or increasing trend sales calculated as [(previous period trend sales ± 150) + (random number)], where random numbers were uniformly distributed over the interval [-300, +300]. The primary measure of the subject's decision performance (quality) is the firm's cumulative total production costs at the end of the 18 decision periods. Several measures of decision confidence were captured: (1) decision confidence measured on a 1-7 point Likert scale, on which "1" indicates least confident and "7" indicates most confident; (2) perceived performance relative to 100 classmates using a "click scale" (see Melone et al. [12]); (3) perceived performance relative to 100 "experts" using a "click scale"; and (4) the interval that the subject believed would contain 90% of classmates' performances for that period. The confidence measures were collected after every three periods, i.e., six times. The analyses of confidence reported herein are based primarily on the Likert-scale measure.
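As a rough illustration (not the authors' code; the experiment itself was implemented in CT), the trend-sales process described above can be sketched as follows. The ±150 per-period drift and the uniform noise on [-300, +300] come from the text; the starting sales level is an arbitrary assumption.

```python
import random

def simulate_trend_sales(n_periods=18, start=5000, increasing=True, rng=None):
    """Simulate the experiment's sales environment: trend sales drift
    +150 (good/increasing) or -150 (bad/decreasing) per period, plus
    uniform noise on [-300, +300]. Starting level is illustrative."""
    rng = rng or random.Random(0)
    drift = 150 if increasing else -150
    sales = [float(start)]
    for _ in range(n_periods - 1):
        sales.append(sales[-1] + drift + rng.uniform(-300, 300))
    return sales
```

With this parameterization, each period-to-period change lies in [-150, +450] for the increasing condition and [-450, +150] for the decreasing condition, which is why the trend dominates the noise over 18 periods.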

2.2: Experimental decision task and decision aid

The production task and the decision aid were implemented on four Macintosh IIci computers, using the CT programming language [15; 16]. All data were captured directly by the computer as the subject interacted with the CT program. Subjects assumed the role of a vice president of production of a manufacturing firm. Their objective was to minimize total cumulative production costs while meeting market demand. In managing production, the subject was asked to decide in each of 18 consecutive decision periods the number of units to produce and the size of the workforce. The screen used for making these decisions and examining past performance over the last 4 periods is shown in Figure 1.

The firm's costs were calculated using the specific parameterization of the quadratic cost function of Holt, Modigliani, and Simon [17]. In that specification the costs of hiring and layoffs dominate other controllable costs; therefore, changes in workforce size carry large penalties. Kotteman, Davis, and Remus [2] use only the quadratic components of the Holt, Modigliani, and Simon [17] quadratic cost function, dropping the linear terms. They use the same weights in the quadratic terms as Holt, Modigliani, and Simon [17]. As in the Kotteman et al. study [2], we built a strong trend factor into demand. Consequently, it was very important for subjects to be sensitive to this trend factor in order not to make myopic


Figure 1: Decision Screen and Status

workforce decisions that would force large adjustments, and the associated quadratic penalties, in later periods. Accordingly, we designed a DSS that provided subjects in all conditions extensive historical performance data on: (1) the firm's costs (i.e., regular payroll, hiring and layoff, overtime, inventory, and total costs) and (2) inventory, actual sales, and number of units produced, each for the 24 most recent periods. This screen is shown in Figure 2.

Figure 2: Performance Screen

After every three decision periods, subjects were asked to indicate how confident they were about their past decision making and where they saw their performance relative to their peers' and experts' performances using a screen similar to that used in Melone et al. [12]. This screen is shown in Figure 3.

Subjects in the modeling condition were provided a "what-if" feature that allowed them to test alternative models on historical data before making their decisions about how many units to produce and how many workers to employ. This screen is shown in Figure 4. As can be seen from the initial prompt, subjects using this feature can create a new model or modify an existing one. Subjects are able to create and run as many models as they wish throughout the experiment, with up to five models retained at any time. Subjects can also compare the results of their models with actual performance over time.

Figure 4: Strategic Model Testing Screen

To develop the model, subjects are asked first to forecast the long-term trend in sales by indicating the average rate of increase (or decrease) per period. The trend adjustment from one period to another is indicated by T. For example, if a subject thought that the long-term trend for sales would increase at an average rate of 50


units per period, the subject would enter 50 for T. Next, subjects must enter the weights to be placed on sales in each of the three previous periods (adjusted for trend changes). A positive or negative number could be entered corresponding to the percentage weight to be assigned to the forecast. For example, a subject might assign weights of 50% to (Ft-1 + T) and 30% to (Ft-2 + 2T), leaving 20% of the weight for (Ft-3 + 3T). The sum of the weights on past sales must equal 100% for both the production and the workforce models. At the base of the modeling screen, it is suggested that a final set of terms be added to the model equations, and the subject is asked "Do you want to do that? (y/n)." The final term in the equation includes the size of the work force for the last period (Wt-1) and the number of units in inventory for the last period (It-1).
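The weighted, trend-adjusted forecast described above can be sketched as follows. This is a simplified reconstruction from the text (the function name is ours, and the optional final terms involving Wt-1 and It-1 are omitted): each of the three previous periods' sales is projected forward by the per-period trend T before the percentage weights are applied.

```python
def forecast_sales(past_sales, weights, trend):
    """Trend-adjusted weighted forecast as described in the text.
    past_sales[0] is the most recent period (Ft-1), past_sales[1] is
    Ft-2, and so on; weights must sum to 100% (here, 1.0)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must total 100%"
    return sum(w * (s + (i + 1) * trend)
               for i, (w, s) in enumerate(zip(weights, past_sales)))
```

Using the paper's example, weights of 50%, 30%, and 20% with T = 50 applied to past sales of 1000, 950, and 900 give 0.5(1050) + 0.3(1050) + 0.2(1050) = 1050.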

The total cost curve is calculated for each of the various models that the subject creates. This graph, shown in Figure 5, allows subjects to compare the performance of their models with actual historical total costs.

We chose to provide a parametric modeling feature that would allow subjects to test rules, or policies, on historical data rather than a what-if capability that would allow them to provide specific data inputs for the current or past periods, because we thought that the parametric approach would focus subjects' attention on the more critical dimensions of the problem. Because of the weight on the quadratic cost component for workforce adjustments, any good production/workforce/inventory policy must maintain a relatively smooth workforce. A what-if tool for evaluating different options for the current decision is much less likely to direct the subject's attention to the importance of a smooth workforce over time than is a parametric model for testing policy options.

Figure 5: Model Performance Screen: Total Cost vs. Time

2.3: Procedure

The experiment was conducted in an office-like complex used for computer-based experimentation. Each of the four separate offices used in the experiment was equipped with a Macintosh IIci computer. Subjects were run individually by appointment. When a subject entered the reception room, the experimenter introduced himself and explained to the subject the general requirements of the experiment (what the subject would be asked to do). The subject was asked to read a consent form and, if willing to participate in the experiment, to sign the consent form. Once the consent form was signed, the subject was taken to his or her private office and seated comfortably at the desk and computer. Following a (written and memorized) script, the experimenter requested that the subject perform the following activities: (1) enter demographic data; (2) work through the computer-based training exercise to learn how to use the decision aid; and (3) perform the experiment. The subject was asked if there were any questions. During the training session, the subject was provided more explicit information on the nature of the task and then introduced to each of the information-display screens and, if in the modeling conditions, to the modeling feature of the decision aid. The training session as well as other aspects of the experiment were pilot tested using upper-division industrial management students in an undergraduate managerial research course. None of these pilot subjects was used as a subject in the actual experiment. Having viewed the screens that were consistent with the subject's assigned condition (modeling or no modeling condition), the subject was then asked to work through a practice decision exercise, enter his decisions, and after three such decisions, to assess his level of confidence. These training data are not used in any analyses, although confidence at the end of the training period is displayed in some illustrative figures in this paper.
Subjects were asked again if they had any more questions before the actual experiment began. If not, the subject began the experiment. The subject was told that once the experiment started, the experimenter could answer only questions relating to the computer's commands. The subject then worked alone until the 18 periods of the decision making and the 6 confidence assessments were completed. At the conclusion of the formal experiment, the experimenter thanked and debriefed the subject. There were no time limits placed on the subjects, although most completed the exercise in about an hour.


3: Results

3.1: Performance comparison by condition

Figure 6 shows the average performance (cost) by month (i.e., period) of subjects in the modeling and no modeling conditions. Except for the first few periods and the last period, subjects in the modeling condition managed the process with lower costs than did subjects in the condition without the modeling feature; this result is highly significant (p < .0001). The pattern of the difference is consistent with the hypotheses that it took subjects in the modeling condition a few periods to benefit from the modeling feature and that subjects in the no modeling condition also learned reasonable decision rules, but at a slower rate than subjects in the modeling condition. Hence, the modeling feature supports decision making and improves subjects' decisions, as intended.


Figure 6: Mean Performance (With and Without Modeling) vs. Time

Performance of our subjects was not related to class (sophomore, junior, senior -- F(2, 113) = 1.17, n.s.). Performance was peculiarly related to grade point average (GPA) grouping; performance of subjects with GPAs above 3.5 or below 2.5 averaged in the 35th-36th percentile (0th is best, 100th is worst); performance of subjects with GPAs in the 2.5 to 3.5 range averaged in the 55th percentile. These differences are statistically significant (F(2, 113) = 5.286, p = .0064).

3.2: Confidence comparison by condition

Figure 7 shows the average Like&scale confidence level for each confidence period (every third period) for subjects in the modeling and the no modeling conditions. There is no discernible difference in the average confidence levels in the two groups. Furthermore, since the average ratings are near or moderately below the mid-point of “4” on the Likert scale, neither group shows evidence of overconfidence.

The comparison based on the click-scale confidence measure is analogous to that based on the Likert-scale measure. The correlation between the average of the six measurements of each of these two measures for our subjects is .789.

Figure 7: Mean Confidence, Likert Scale (With and Without Modeling) vs. Time

3.3: Likert-scale confidence measure vs. actual performance

Figures 8 and 9 show the click scale confidence ratings of individual subjects, each averaged over the eighteen decision periods, plotted against the cost performance percentile of individual subjects [0 is best (lowest cost), 100 is worst (highest cost)], for the modeling and the no modeling groups, respectively. If no relationship existed between subjects' confidence and their rank-ordered performance, the slope of the plot would be zero. The negative slopes of the regression lines for each of the two conditions are highly significant: -.23 for the modeling condition (t(47) = 3.458, p < .0012); -.26 for the no modeling condition (t(67) = 4.126, p < .0001). Hence, confidence is strongly negatively correlated with cost, or, equivalently, strongly positively correlated with performance (since lower costs are associated with higher performance). On the other hand, subjects are not perfectly calibrated, which would require a slope of -1.0.
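The calibration check described here reduces to an ordinary least-squares slope. As a minimal sketch (our own illustration, not the authors' analysis code), regressing confidence on the rank-ordered performance percentile yields a slope of -1.0 under perfect calibration and 0 when confidence is unrelated to performance:

```python
def calibration_slope(perf_percentile, confidence):
    """Least-squares slope of confidence regressed on performance
    percentile (0 = best, 100 = worst). Perfect calibration gives
    slope -1.0; no relationship gives slope 0."""
    n = len(perf_percentile)
    mx = sum(perf_percentile) / n
    my = sum(confidence) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(perf_percentile, confidence))
    sxx = sum((x - mx) ** 2 for x in perf_percentile)
    return sxy / sxx
```

The paper's observed slopes of -.23 and -.26 thus sit between these two benchmarks: significantly negative, but far from perfectly calibrated.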

Figure 8: Confidence vs. Performance (With Modeling)


Figure 9: Confidence vs. Performance (Without Modeling)

Note: The solid lines on Figures 8 and 9 are the regression lines. The dashed lines on the same two figures indicate perfect calibration lines.

The difference between these two slopes is not at all significant (t(114) = 0.409, n.s.). Hence, the presence or absence of a modeling capability had no significant effect on the relationship between subjects' performance and their confidence about that performance.

3.4: The influence of exogenous factors on confidence

The average confidence level in each of the periods in which confidence was measured was about "4" on the Likert scale for subjects in the declining sales condition. The average for subjects in the increasing sales condition was slightly greater in periods 2 through 5 but dropped below in periods 6 and 7 (see Figure 10). Hence, it appears that the confidence level remained roughly constant for subjects in the declining sales condition, whereas average confidence declined gradually over time in the increasing sales condition, perhaps as subjects grew increasingly frustrated about their rising level of costs. The difference between these groups is not statistically significant.

Figure 10: Mean Confidence (Likert Scale) vs. Time for High and Low Sales


4: Discussion

This study investigated four questions. First, do subjects using a modeling feature and graphical trend data make better decisions than those who have only graphical trend data? Our results show that the modeling feature improved the objective quality of the decisions the subjects made, at least from the time they learned how to benefit from the feature to the time that the no modeling group learned how to perform comparably without the aid of the modeling feature.

Second, does access to a modeling feature inflate confidence in the quality of the resulting decision? There was no difference in the confidence levels between subjects in our modeling conditions and subjects in our no modeling conditions.

Third, are subjects well calibrated? Does their level of confidence correspond to the quality of their decision? We found that subjects' confidence is very significantly but nevertheless inadequately associated with their performance. The slope of the regression of subjective performance percentile on actual performance percentile is in the -.23 to -.26 range (p < .001); with perfect calibration it would be equal to -1.0.

Finally, can exogenous factors beyond the control of the subject affect confidence in the decision? We did not find a strong effect of exogenous conditions (increasing or decreasing sales trend) on confidence.

Since the literature is equivocal about the effects of DSS on performance and confidence, it was inevitable that our results would agree with some previous findings and disagree with others. Perhaps most interesting is the comparison of our results with those of Kotteman, Davis, and Remus [2]. They found that their DSS did not result in improved performance but did result in greater levels of confidence. Also, they found subjects' confidence in their unaided (no DSS) condition to be veridical. The correlation between their actual performance and confidence was .60 (p = .015). In contrast, for subjects in the aided (what-if) condition, the correlation was only .11 (p = .36). In our study, the corresponding correlations between performance rank and average confidence on the click scale were .45 for both the no modeling and modeling conditions. Since our subjects faced basically the identical production task, what factors might be responsible for the difference in results?

While our subjects were full-time undergraduate students and theirs were MBA students, we believe that it is most unlikely that this could explain the differences in the results. We cannot explain why undergraduates should perform better with modeling and not display overconfidence, or even increased confidence relative to the no modeling group of subjects.


We believe that Sharda, Barr, and McDonnell [3] have suggested an important factor for the difference. Specifically, our modeling feature was not a "black box." Subjects varied the parameters of a decision rule that was displayed before them. The treatment of trend was explicit. Simulations were based on historical data; subjects tested rules, or policies, not specific decisions for a specific period.

Furthermore, even the presentation of information in the no modeling condition (and hence also in the modeling condition) was designed to focus subjects' attention on the critical dimension of the problem: the strong trend in sales, whether increasing or decreasing. This knowledge alone is critical in preventing the huge costs that occur when a subject has failed to increase or decrease the workforce quickly enough and then makes relatively large adjustments in one or just a few periods (recall that workforce adjustment costs are quadratic in the size of the adjustment).

We find the general lack of overconfidence, on average, by our subjects somewhat puzzling. One possibility is that the click-scale measure caused subjects to make more rational confidence judgments than do Likert-scale measures, and that our requesting both measures in adjacent questions resulted in less exuberant Likert-scale responses than is more customary. This is consistent with the findings of the only other two studies of which we are aware that asked subjects where they believed their performances would rank relative to those of their peers [12; 2]. Another possibility is that the nature of the problem, including the roles of trend and randomness, was so clear to the subjects that on average they believed that their performance would be in the middle of the pack. That individual confidence is correlated with relative performance is consistent with the hypothesis that subjects understood the problem well and were somewhat aware when they were doing rather well or rather poorly.

5: References

[1] Aldag, R.J., and Power, D.J. "An Empirical Assessment of Computer-Assisted Decision Analysis." Decision Sciences, Vol. 17 (Fall), pp. 572-588, 1986.

[2] Kotteman, J.E., Davis, F.D., and Remus, W.E. "Computer-Assisted Decision Making: Performance, Beliefs, and the Illusion of Control." Organizational Behavior and Human Decision Processes, Vol. 57, No. 1, pp. 26-37, 1994.

[3] Sharda, R., Barr, S.H., and McDonnell, J.C. "Decision Support System Effectiveness: A Review and an Empirical Test." Management Science, Vol. 34, No. 2 (February), pp. 139-159, 1988.

[4] Einhorn, H.J., and Hogarth, R.M. "Confidence in Judgment: Persistence of the Illusion of Validity." Psychological Review, Vol. 85, No. 5, pp. 395-416, 1978.

[5] Hoch, S.J. "Counterfactual Reasoning and Accuracy in Predicting Personal Events." Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 11, pp. 719-731, 1985.

[6] Koriat, A., Lichtenstein, S., and Fischhoff, B. "Reasons for Confidence." Journal of Experimental Psychology: Human Learning and Memory, Vol. 6 (March), pp. 107-118, 1980.

[7] Lichtenstein, S., Fischhoff, B., and Phillips, L.D. "Calibration of Probabilities: The State of the Art to 1980." In D. Kahneman, P. Slovic, and A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases, Cambridge: Cambridge University Press, 1982.

[8] Cats-Baril, W.L., and Huber, G.P. "Decision Support Systems for Ill-Structured Problems: An Empirical Study." Decision Sciences, Vol. 18 (Summer), pp. 350-372, 1987.

[9] Kozar, K.A. "Decision Making in a Simulated Environment: A Comparative Analysis of Computer Display Media." Unpublished Ph.D. thesis, University of Minnesota, 1972.

[10] Schroeder, R.G., and Benbasat, I. "An Experimental Evaluation of the Relationship of Uncertainty in the Environment to Information Used by Decision Makers." Decision Sciences, Vol. 6, pp. 556-567, 1975.

[11] Senn, J.A., and Dickson, G.W. "Information Systems Structure and Purchasing Decision Effectiveness." Journal of Purchasing and Materials Management, Vol. 10, No. 3, pp. 52-64, 1974.

[12] Melone, N.P., McGuire, T.W., Hinson, G.B., and Yee, K.Y. "The Effect of Decision Support Systems on Managerial Performance and Decision Confidence." In J.F. Nunamaker, Jr. and R.H. Sprague, Jr. (Eds.), Proceedings of the Twenty-Sixth Annual Hawaii International Conference on System Sciences, Vol. IV, Los Alamitos, California: IEEE Computer Society Press, pp. 482-489, 1993.

[13] Langer, E.J. "The Illusion of Control." Journal of Personality and Social Psychology, Vol. 32, pp. 311-323, 1975.


[14] Langer, E.J., and Roth, J. "Heads I Win, Tails It's Chance: The Illusion of Control as a Function of the Sequence of Outcomes in a Purely Chance Task." Journal of Personality and Social Psychology, Vol. 32, pp. 951-955, 1975.

[15] Sherwood, B.A., and Sherwood, J.N. The CT Language, Version 2.0. Wentworth, NH: Falcon Software, 1989.

[16] Sherwood, J.N. The CT Reference Manual, Version 2.0. Wentworth, NH: Falcon Software, 1989.

[17] Holt, C.C., Modigliani, F., and Simon, H.A. "A Linear Decision Rule for Production and Employment Scheduling." Management Science, Vol. 2, No. 2 (October), pp. 1-30, 1955.

6: Acknowledgements

We gratefully acknowledge the support of a Carnegie Mellon Faculty Development Grant to Nancy Paule Melone and grants to Lawrence W. Chan and Timothy A. Gerwing, both of whom were undergraduates at the time, from the J.R. Wean Foundation, administered through the Undergraduate Research Associates Program (URAP). Very special thanks to Associate Provost for Special Projects, Barbara Lazarus, and URAP Director, Jessie Ramey, for their unwavering support of our efforts to promote research experiences for approximately 25 management and economics students over the last three and one-half years. Special thanks to Thomas Morton at Carnegie Mellon for his advice in implementing the production task. Finally, we wish to thank Robert Clemen at the University of Oregon and the anonymous reviewers and track chairmen for their constructive comments on an earlier version of the paper.
