
Don't let history repeat itself—methodological issues concerning the use of simulators in teaching and experimentation



Andreas Größler a

System Dynamics Review Vol. 20, No. 3 (Fall 2004): 263–274
Received May 2003; Accepted December 2003
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/sdr.286
Copyright © 2004 John Wiley & Sons, Ltd.

NOTES AND INSIGHTS

Abstract

Simulators can be used for experimentation and for teaching purposes. How far promises of useful applications of simulators in these areas are justified is, however, an open question. Evaluation research on simulators is still in its infancy, both inside and outside the system dynamics community. A reason for this is methodological issues that derive from the nature of experimenting/teaching, but which are also caused by the specific characteristics of simulators. Fifteen such issues are identified and explained in this article and classified into three categories; methods for the mitigation of these issues are discussed. Copyright © 2004 John Wiley & Sons, Ltd.

Syst. Dyn. Rev. 20, 263–274 (2004)

Simulators for teaching and research

Simulators and interactive learning environments are promising tools for teaching and experimentation.1 In teaching, simulators are used to convey systemic knowledge, i.e. knowledge about the relationship between structure and behavior; they are expected to help students understand dynamic systems. In psychological research, simulation environments are used to assess subjects’ behavior in complex situations and their capability to control such situations; they are used to investigate human decision making in complex situations. In this case, simulators are used as instruments for experimentation: they are means of investigating human cognition, whilst, in the case of teaching, simulators are devices that should help students acquire knowledge. It is these two fields of application that are most prominently discussed in the literature.

The issue of the teaching effectiveness of simulators has not yet been conclusively resolved. To a certain degree, this lack of empirical evidence is caused by an overly optimistic approach towards simulators: the effectiveness and efficiency of simulation tools is assumed without scrutinizing their actual effects. When evaluation studies are conducted, they are frequently hampered by (i) methodological obstacles, i.e. the studies do not take into account specific issues that must be considered when using simulators, and (ii) the high effort that is necessary for proper evaluation.

In essence, the same can be said when simulators are considered as means of conducting psychological experiments. In both cases, subjects or learners are experiencing a virtual (and often complex) situation.

Andreas Größler is Assistant Professor of Operations Management at Mannheim University, Germany. He holds a Ph.D. in business administration. He teaches system dynamics and manufacturing strategy courses at the graduate level. His research interests include modeling of decision making, concepts of rationality and the relation between cognition and behavior of individuals and organizational success.

a Mannheim University, Industrieseminar, Schloss, D-68131 Mannheim, Germany. E-mail: [email protected]

The author gratefully acknowledges the valuable comments of David Lane, the System Dynamics Review section editor Yaman Barlas and of three anonymous referees on earlier drafts of this paper.


In teaching, long-term changes in the cognitive structure and the behavior of the learners are intended, whereas this is (usually) not the case with subjects in psychological experiments. Nevertheless, from many practical perspectives, the differences between these two ways of using simulators (i.e. either as a training or a research tool) can be neglected. The specifics of simulators in experimental settings remain the same as in evaluation studies; and, in the same way, these specifics are often ignored. This article tries to uncover and discuss the issues of using simulation programs in evaluation or psychological research.

However, what is remarkable about the methodological situation is not that these issues exist: testing the validity and utility of simulators as instruments for teaching and research is a complicated task. Some methodological issues cannot be solved because they are inherent in the experimental method or indissolubly tied to other, more desirable characteristics of simulation tools. What really strikes one is that these issues have been discussed in the gaming literature for decades, following the various ups and downs of simulators (see Lane 1995 and the literature quoted there), but current researchers still seem to put little effort into learning from these experiences, despite a well-established interest in simulators within the system dynamics community.2

This article is an attempt to make these issues explicit in a comprehensive form, to support all researchers in the system dynamics community conducting studies with simulators.

Fifteen issues of simulator-based training/experimentation are extracted from the literature and presented in a structured form. Examples are provided in order to illustrate potential problems when attempts are made to draw conclusions concerning teaching effectiveness or psychological characteristics. Methods to improve either simulators or experimental methodology are described, the aim being to mitigate the effects of the issues presented.

Issues of (business) simulators as teaching and research instruments

When discussing the methodological soundness of simulators for teaching and experimentation, it is not only the characteristics of simulators themselves that must be taken into account. In addition, characteristics of the users and situational determinants have to be considered (Funke 1995a). Thus, in the following sections, the issues are categorized according to three aspects of simulator application: the simulator itself; the users of the simulator; and the situation in which the simulator is used. The categories are not mutually exclusive, however, and the borders between them are fuzzy.

Fifteen problem fields of simulators in scientific studies can be identified (partly collected from Funke 1995b, Süß 1999 and Keys and Wolfe 1990). These 15 issues are classified into the three categories. Also, some brief ideas are presented on how problematic influences on the validity of experiments and evaluation studies can be reduced.

Characteristics of instrument (simulator)

VALIDITY OF MODEL Although simulators are based on formal models, the real-world domain itself usually is not completely open to a formal description. Therefore, the validity of the simulation game (i.e. whether it is a “good” formal representation of the relevant reality or not) often depends on the ability, knowledge, experience and world view of the modeler. Thus, in principle, model development has a subjective character. The congruence between simulator and reality is a particular issue when qualitative variables are to be modeled (for instance, the image of a firm or the perceived economic situation of a country), this being a major feature of system dynamics models (Forrester 1961, ch. 4). Also, system dynamics models usually contain “unobservable” relations between variables that are necessary to represent the causal structure of a real-world domain (Bell and Senge 1980). An example of this would be the relationship between the image of a company and its market share, which cannot be observed directly in empirical data because of many confounding variables. Further decisions that have to be made during model development, and which also depend on a skilful model developer, concern the system’s boundary (Forrester 1971, ch. 4) and the time horizon of the simulation (Perelman 1980). For all these issues no technical, algorithmic answer exists, and the quality of their resolution therefore relies on the modeler.

In some experimental contexts, fidelity (i.e. similarity between the simulation and the real world; Hays and Singer 1989) or external validity (i.e. transferability of knowledge and insights gained in a simulator to the real world and vice versa) is not necessary or not desired, for example in order to suppress influences of existing knowledge about a domain. Similarly, too much reality—for instance, when managing a simulated company consists of 95 percent routine tasks—can suppress learning effects because it causes boredom (Prensky 2001, ch. 8). In contrast, external validity is highly desirable for learning purposes, being the “ultimate test” of a simulator’s usefulness (Wolfe 1976). Lane (1995) further discusses the trade-off between fidelity and the gaming character of simulators. Different from fidelity or external validity, internal validity (i.e. consistency and replicability of behavior) of the simulator is a prerequisite for most uses as a learning or research tool. Thus, internal validity has to be secured.

While the issue of an undetermined model validity cannot be solved ultimately, it can, nevertheless, be mitigated by a careful validation process of the simulator and the underlying system dynamics model (Barlas 1996; Forrester and Senge 1980) and by the thorough education of modelers and designers.

LEVEL OF ABSTRACTION To a certain degree this point is a specialization of the first issue. However, to a substantial extent it is also more general because it is not limited to the formal model underlying a simulator but concerns the appearance of the simulation game as a whole. The issue is that the right degree of abstraction and detail of a simulator cannot be formally determined. Which level of detail, which information is necessary to understand a scenario and which is superfluous? Where is the boundary of the system that needs to be modeled? On the one hand, more complexity of detail can increase fidelity and external validity (see above); on the other hand, it blurs the transparent view on cause–effect relations. The same argument holds for the dynamic complexity of a simulation, i.e. the complexity of dynamic behavior it can produce: the inclusion of additional model structure may render a simulation outcome more realistic whilst, nevertheless, the basic behavior mode of the system can be camouflaged when the simulation replicates all the tiny ups and downs that can be observed in the behavior of real systems.
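As a purely illustrative sketch (not part of the original article), the following Python fragment contrasts a bare S-shaped growth model with the same model overlaid by invented short-term fluctuations; the noise term stands in for additional detail structure that makes the output look more realistic while obscuring the basic behavior mode.

```python
import random

def s_shaped_growth(steps=60, capacity=1000.0, rate=0.15, detail_noise=0.0):
    """Minimal logistic (S-shaped) growth, optionally overlaid with
    high-frequency 'detail' that mimics the tiny ups and downs of real data.
    The noise term is a stand-in for extra model structure; it is not taken
    from any simulator discussed in the article."""
    stock = 10.0
    trajectory = []
    for _ in range(steps):
        net_growth = rate * stock * (1 - stock / capacity)  # basic behavior mode
        wiggle = random.gauss(0, detail_noise * stock)       # added "realism"
        stock = max(stock + net_growth + wiggle, 0.0)
        trajectory.append(stock)
    return trajectory

clean = s_shaped_growth()                        # S-shape clearly visible
realistic = s_shaped_growth(detail_noise=0.08)   # same structure, mode camouflaged
```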

EMBEDDING IN DOMAIN CONTEXT Connected to the issue just discussed is the problem of not fully understood effects of the context of a scenario, its “story line”. As psychological research shows (e.g., Hesse 1982; Beckmann and Guthke 1995), the embedding of a structurally identical model into two different domain contexts mostly yields two different ways in which the model is perceived and affects the level of understanding it generates (supposedly due to different knowledge about the domains that subjects have acquired before using the simulation). The work on abstract, general system structures (for example, molecules or archetypes; Hines 2000; Senge 1990, ch. 6 & App. 2) and studies on the design of domain-independent instructions (for example, by applying analogies; Clark 1992) may provide answers or more detailed decision heuristics in the future concerning this basic question.

HANDLING OF TIME An often named advantage of simulations is that they (virtually) instantly show the results of decisions a user has made. This “compression of time” in simulators can affect users. For example, the process of planning and controlling a scenario may also be compressed in comparison to real-world decision-making processes. Furthermore, the “expansion of time” is also a major characteristic of simulators (Kim 1989, p. 327), when users have more time than in reality to contemplate a complex situation and to make a decision. This feature, however, causes problems with experimentation, in particular when fidelity to real-world time scales is necessary. Simulators with adjustable time frames can be used to investigate this problem (Größler 1999).
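By way of illustration only (this is not the design used in Größler 1999), a simulator’s game loop could expose the ratio of simulated time to real decision time as an explicit parameter, so that compression or expansion of time becomes an experimental variable; all names and values below are assumptions.

```python
from dataclasses import dataclass
import random

@dataclass
class TimeFrame:
    """Hypothetical time-frame settings for one simulator session."""
    sim_months_per_round: int   # simulated time that elapses per decision round
    seconds_per_round: float    # real time the subject is given for each decision

def collect_decision(timeout):
    """Stub for the user interface; a random decision stands in here."""
    return random.uniform(0.0, 1.0)

def step_model(state, months, decision):
    """Stub for the underlying model: one aggregated update per round."""
    return state + months * (decision - 0.5)

def run_session(frame, total_sim_months=24):
    """Skeleton game loop: the ratio of simulated to real time is a parameter."""
    state = 0.0
    for _ in range(total_sim_months // frame.sim_months_per_round):
        decision = collect_decision(timeout=frame.seconds_per_round)
        state = step_model(state, frame.sim_months_per_round, decision)
    return state

# Conditions an experimenter might compare:
compressed = TimeFrame(sim_months_per_round=6, seconds_per_round=30.0)
expanded = TimeFrame(sim_months_per_round=1, seconds_per_round=300.0)
result = run_session(compressed)
```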

UNFOUNDED STICKINESS TO FAVORABLE OR TO UNFAVORABLE BEHAVIOR PATHS Some simulators have a tendency to get stuck in unfavorable states without the possibility of improving the situation even though subjects recognize their errors (“cul-de-sac situations”). The occurrence of this issue in simulators is likely when strong positive feedback loops determine their behavior (Brehmer 1992). Although this behavior of simulators is often very reality-like (badly led companies go bankrupt and finally have to leave the business environment), it causes problems when performance between subjects is to be compared and, thus, the reliability of experimental data is affected.

In contrast, some errors cannot be observed because they do not affect the system’s behavior, or the corresponding effects occur much later or in another area of the simulated system (Goodyear et al. 1991). In this case, despite a grave misinterpretation of the situation by the user, the simulation proceeds—at least for some time—in a favorable way. Strong stabilizing negative feedback loops can lead to this behavior of the simulation (Mackinnon and Wearing 1980). These issues cannot completely be solved because they are inherent in the simulator method. Technically induced appearances of the issue can be avoided by applying rigorous tests using standard procedures from software development (Sommerville 2001) and following a thorough simulation development process (Duke 1981).
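A minimal numerical sketch, with invented parameters and not taken from any model in the article, of how a reinforcing loop can produce such a cul-de-sac: once the simulated company’s reputation has fallen far enough, moderate corrective spending can no longer stop the downward spiral, even though the subject has recognized the error.

```python
def simulate_company(corrective_spending, start_reputation=0.5, steps=40):
    """Toy reinforcing loop: reputation -> revenue -> reinvestment -> reputation,
    working against a fixed erosion of reputation per period. All parameters
    are invented for illustration only. Returns the reputation trajectory."""
    rep = start_reputation
    history = []
    for t in range(steps):
        revenue = 100.0 * rep                                  # sales driven by reputation
        reinvestment = 0.2 * revenue + (corrective_spending if t >= 10 else 0.0)
        # reputation build-up from reinvestment vs. fixed erosion per period
        rep = min(max(rep + 0.004 * reinvestment - 0.06, 0.0), 1.0)
        history.append(rep)
    return history

# After ten periods of decline, a moderate correction can no longer tip the loop back:
stuck = simulate_company(corrective_spending=8.0)     # reputation collapses to zero
recovered = simulate_company(corrective_spending=15.0)  # larger correction recovers
```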

NO OPTIMAL SOLUTION It is a characteristic feature of many complex problems that no optimal solution exists or at least one cannot be computed. In the same way, usually there is no optimal solution in business simulators against which one might assess the performance of subjects using the simulator (no absolute benchmark). Although some formally defined simulators exhibit the feature of a known optimal decision sequence, they lack almost any relation to reality (cf. Funke 1993). Thus, under most circumstances, it can be considered as a sign of fidelity that such an optimal solution is not known. In order to overcome measurement problems, performance in simulator usage can be compared to that of other users, for example to experts in the domain (relative benchmark; Christensen et al. 2000).
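A sketch of what such a relative benchmark could look like, using invented data: instead of measuring the distance from an (unknown) optimum, each subject’s score is expressed relative to the distribution of scores achieved by domain experts on the same scenario.

```python
from statistics import mean, stdev

def relative_benchmark(subject_score, expert_scores):
    """Express a subject's game score as a z-score against expert performance.
    Positive values mean the subject outperformed the average expert."""
    return (subject_score - mean(expert_scores)) / stdev(expert_scores)

# Invented example data: cumulative profit achieved in the same scenario.
experts = [420.0, 510.0, 465.0, 480.0, 445.0]
print(relative_benchmark(430.0, experts))  # about one standard deviation below the experts
```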

Characteristics of learners/subjects3

DIFFICULTY OF TASK Some simulators are so complex (regarding number and interconnectedness of variables, dynamic behavior, handling of the user interface; Packer and Glass-Husain 1997) that users—depending on their intelligence and knowledge—may not be able to control and understand them (Goodyear et al. 1991). Consequently, people use trial-and-error strategies or even make completely random decisions because they cannot make any sense of the task at hand. Performance scores are not valid in this case because they are not a representation of cognitive processes but result from rather random events. The problem diminishes when simulations are well-tested.

DIFFERENT COGNITIVE PROCESSES INVOLVED In different phases of using a simulator, different cognitive processes may prevail (Reigeluth and Schwartz 1989). For instance, users may initially “explore” a simulation and after a while change into a “routine decisions” mode. To take this into account, different measures for such different phases may be appropriate. However, a common psychological theory of these cognitive processes first has to be articulated. Then, different scores for these different processes can be implemented within simulators.

AMBIGUITY OF (PROCESS) SCORES It is argued that static scores (outcome measures) hardly contain information about the cognitive process of flying the simulator (for instance, Bakken 1989). In other words, the criticism is made that performance measures often do not reflect the internal processes of users when learning and/or controlling a simulation game. However, process measures (e.g., the strategy that a subject followed), which are used to eliminate this disadvantage, are open to multiple interpretations and are usually not unambiguous. Thus, if process scores are used, then their interpretation must be laid down in advance, not post hoc, and they must be quantifiable. Rather naturally, this issue is connected with the general issue of setting clear and consistent goals and performance criteria, as discussed below.

DIFFERENT LEARNING STYLES Not all people prefer the same studying methods and concepts. In particular, adult learners show a wide range of attitudes towards different teaching instruments (see, for instance, Kolb’s Experiential Learning Model; Kolb 1984, ch. 4). This implies that some people just naturally like to play with simulation games, try to solve the problems imposed on them by the games, and acquire the knowledge that is transmitted by the simulation. Others, however, will simply prefer to listen, for instance, to a lecture on a specific topic and do not like to solve “artificial” problems. In other words, some users will perform better in simulation games than others just because they have a different learning style (and this holds whether acquired knowledge or problem-solving competence is taken as the performance measure). When simulation games are used for teaching or research, this problem can only be controlled, not ultimately solved.

CONFOUNDING USER CHARACTERISTICS In parallel to the different learning styles just discussed, there may be many other relevant user characteristics that can hardly be controlled completely (e.g., pre-usage knowledge about the domain, expertise in working with computers, general intelligence etc.). One particularly influential variable affecting performance (no matter whether defined as learning or problem-solving) is the motivation of the users (Lepper and Malone 1987). Based on psychological theories, those characteristics that are confounding have to be controlled and examined during experimentation. This is a basic issue in all evaluation studies and psychological experiments.

Characteristics of learning/experimental situation

NO, UNCLEAR OR CONTRADICTORY GOALS AND PERFORMANCE CRITERIA In many experimental and learning situations, either no or only unclear or contradictory goals are set that subjects or learners are supposed to achieve (e.g., “improve the company’s situation” instead of “strive for financial success”). Confronted with such fuzziness, it is no surprise that users either make up their own objectives when playing the simulation or change their attention randomly between different goals. In either case, goal achievement cannot be traced in a valid and reliable way. Specifying clear-cut goals includes informing users about the criteria for measuring these goals, i.e. how their performance is assessed (e.g., financial success measured by accumulated profit made during the simulation). These criteria need to be consistent in relation to goals and other performance measures.

An issue that is connected to this and to the matter of ambiguous process scores is that performance criteria—although ubiquitously used—are not unimpeachable. For instance, management’s performance cannot easily be quantified using cumulative net income. Performance has various facets and changes dynamically (Bakken et al. 1994). In addition, defining performance scores of management’s ability and organizational success must be seen as a political process (Pettigrew 1977).

NO REAL RISK Although subjects usually are very much involved in the game, real risks are difficult to simulate. This is a fundamental issue because providing a safe environment is at the same time an important advantage of simulators (Sterman 1994): it allows for experiential learning without the stress-related obstacles that are met in reality. However, subjects may tend to take more risks than in real-life decisions. Therefore, experimental economists try to mitigate this issue by insisting that subjects have the chance/risk to earn/lose money in experiments. The amount of money used in experiments, however, can hardly substitute for the effect of, for example, the fear of losing one’s job after a bad decision in reality.

DURATION OF GAME RUN IN RELATION TO DIFFERENT DATA Playing a scenario often takes a considerable time (in some cases a few hours) but yields only one independent measure of the game score. Thus, reliability can only roughly be estimated because data about replicated game runs is rarely available. Furthermore, users can get tired or bored, which means that their performance naturally degrades after a while. This is a basic issue because complex situations need time to be understood and managed by users. The single measurement of performance provided by one game run, on the other hand, is usually accompanied by many observations (e.g., mouse clicks of users, windows observed by users), which leads to the problem of data reduction. Brehmer (1992) suggests that observation data needs to be aggregated in order to become interpretable.
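The following sketch indicates the kind of aggregation this implies; the event schema and the summary measures are invented for illustration and are not prescribed by Brehmer (1992). Many low-level interface events are condensed into a few interpretable per-run measures.

```python
from collections import Counter

def aggregate_log(events):
    """Condense a raw interaction log into a few per-run summary measures.
    Each event is a dict like {'t': seconds, 'kind': 'open_report'|'decision',
    'target': str}; the schema is hypothetical."""
    kinds = Counter(e["kind"] for e in events)
    decision_times = [e["t"] for e in events if e["kind"] == "decision"]
    gaps = [b - a for a, b in zip(decision_times, decision_times[1:])]
    return {
        "n_decisions": kinds["decision"],
        "reports_per_decision": kinds["open_report"] / max(kinds["decision"], 1),
        "mean_time_between_decisions": sum(gaps) / len(gaps) if gaps else None,
    }

# Invented miniature log:
log = [
    {"t": 5, "kind": "open_report", "target": "cash"},
    {"t": 20, "kind": "decision", "target": "price"},
    {"t": 31, "kind": "open_report", "target": "market"},
    {"t": 55, "kind": "decision", "target": "capacity"},
]
print(aggregate_log(log))
```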

INTEGRATION WITH OTHER MEDIA It is often stated that simulators should be used in connection with other training measures (for instance, after teaching basic, declarative knowledge about the domain). The final aim is to embed the simulation into a complete suite of teaching methods (“interactive learning environment”). How simulators can practically be combined with other instructional media and what characteristics these other media should have is only rarely discussed (Davidsen and Spector 2002; Spector and Wang 2002). Which effects are due to the simulation and which are due to the other measures remains unclear (see also Kerres (1998) for the importance of research about the embedding of teaching media into a didactic context). The field of instructional design and technology may have more answers in the future (Hooper and Reinartz 2002).

Besides these methodological issues, the high cost of developing and using simulators must be considered. Although there seems to be no general cost data available in the literature, costs for the development of simulators can be assumed to be substantial.4 In addition, working with simulators often requires users to invest some time in order to understand the complex content the simulation conveys. Thus, opportunity costs are high as well.

The issues described in this article have the potential to affect any teaching and research with simulators. Thus, their consideration might be crucial for the evaluation of the usefulness and the meaningfulness of any scientific study incorporating simulators. Nevertheless, not all of these issues are equally relevant for both teaching and psychological research. For example, the issue “No real risk” can cause a problem when decision making under stress is to be investigated with the use of a simulator, because subjects just do not experience the same kind of stress as in reality. For evaluation and teaching purposes, however, this issue is mostly irrelevant: it can be seen as an advantage rather than a problem when teaching is the purpose of a simulation game. Many of the other issues, however, influence both kinds of studies. Thus, in practice, teachers and researchers would do best to consider all points in order to prevent negative effects on their training or experiments.

When the list of issues presented above is considered, it is obvious that the quality of the simulator is one important factor for the quality of the experiment or the teaching. Although there are problems that are basically caused by experimental methodology and which are the same for experiments not involving simulators, there are nevertheless some points that are simulator specific. These issues can be mitigated by a simulator that is designed for experimentation (for a prototypical example see Größler et al. 1999). Note, however, that some authors doubt whether simulators can be designed for both purposes simultaneously, namely conducting psychological experiments and imparting knowledge (Andersen et al. 1990).

Conclusion

The effectiveness and relevance of simulators for learning and experimenting are intuitively acknowledged by many members of the system dynamics community. However, issues exist that hinder the full exploitation of simulators within teaching and research. Fifteen methodological issues have been identified here. The impact of the issues and the ways of mitigating negative effects have been briefly discussed. Costs are another important factor when simulators are used.

In conclusion, these issues affect all uses of simulators and will therefore always demand careful consideration. Advocates of simulators as learning devices should approach this situation with both modesty and self-confidence. They must be modest because the effectiveness of their favorite tool has not been proven, and never will be in a general and situation-independent way. Nevertheless, they can be self-confident because simulators, when used within reason, rank alongside very successful media for teaching and learning: books, lectures and audio/video. This article is an attempt to support the search for sensible limits to the application of simulators in teaching and research without replicating exaggerated claims or methodological mistakes from the past.

Notes

1. Throughout the rest of the article the term “simulator” is used, partially in connection with “business . . .”, which is the major domain of application. Most examples are taken from the business field, while business simulators also serve as prototypical instances for all simulation games. Nevertheless, virtually all statements in this article can be applied to simulators from other domains that include human and social factors. I do not consider, however, simulators of physical or technical systems, like flight simulators. Furthermore, I do not consider simulators that are used as analysis or learning tools in specific modeling projects without application external to that project. For a definition of terms see Maier and Größler (2000).

2. Over the last few years, the proceedings of the International System Dynamics Conferences have shown a steady number of about 15 to 20 papers each year dealing with simulators.

3. In the following, it is assumed that the simulator is used by a single person. When a group of people has to play a simulation game and make joint decisions, additional issues are common that are connected to group dynamics.

4. Warren (personal communication, 2002) estimates the price for building a business simulator to be between $20,000 and $100,000, depending on the scale of the simulation (simple games on self-contained business issues vs. games addressing relationships between multiple business functions) and the supporting materials required (manuals, teaching guides, slides, etc.). Prensky (2001, ch. 13) lists the price for the development of a big military simulation game as up to US $3 million. Kerres (1998, ch. 3) estimates the costs for developing “conventional” computer-based training (CBT) programs to be about $10,000–30,000 per hour of training.

References

Andersen DF, Chung IJ, Richardson GP, Stewart TR. 1990. Issues in designing interactive games based on system dynamics models. Proceedings of the 1990 International System Dynamics Conference, Andersen DF, Richardson GP, Sterman JD (eds). System Dynamics Society: Chestnut Hill; pp. 31–45.

Bakken BE. 1989. Learning in dynamic simulation games; using performance as a measure. Computer-Based Management of Complex Systems: Collected Papers from the 1989 International System Dynamics Conference, Milling PM, Zahn EOK (eds). Springer: Berlin; pp. 309–316.

Bakken BE, Gould J, Kim D. 1994. Experimentation in learning organizations: a management flight simulator approach. Modeling for Learning Organizations, Morecroft JDW, Sterman JD (eds). Productivity Press: Portland; pp. 243–266.

Barlas Y. 1996. Formal aspects of model validity and validation in system dynamics. System Dynamics Review 12(3): 183–210.

Beckmann JF, Guthke J. 1995. Complex problem solving, intelligence, and learning ability. Complex Problem Solving—The European Perspective, Frensch PA, Funke J (eds). Lawrence Erlbaum: Hillsdale/Hove; pp. 177–200.

Bell JA, Senge PM. 1980. Methods for enhancing refutability in system dynamics models. TIMS Studies in the Management Sciences 14: 61–73.

Brehmer B. 1992. Dynamic decision making: human control of complex systems. Acta Psychologica 81: 211–241.

Christensen DL, Spector JM, Sioutine AV, McCormack D. 2000. Evaluating the impact of system dynamics based learning environments: preliminary study. Sustainability in the Third Millennium—Proceedings of the 18th International Conference of the System Dynamics Society, Davidsen PI, Ford DN, Mashayekhi AN (eds). System Dynamics Society: Bergen.

Clark RE. 1992. Facilitating domain-general problem solving: computers, cognitive processes and instruction. Computer-Based Learning Environments and Problem Solving, de Corte E, Linn MC, Mandl H, Verschaffel L (eds). Springer: Berlin; pp. 265–285.

Davidsen PI, Spector JM. 2002. Cognitive complexity in decision making and policy formulation: a system dynamics perspective. Systems Perspectives on Resources, Capabilities, and Management Processes, Morecroft JDW, Sanchez R, Heene A (eds). Pergamon: Amsterdam; pp. 155–171.

Duke RD. 1981. The game design process. Principles and Practices of Gaming-Simulation, Greenblat CS, Duke RD (eds). Sage: Beverly Hills and London; pp. 47–53.

Forrester JW. 1961. Industrial Dynamics. MIT Press: Cambridge, MA. (Now available from Pegasus Communications, Waltham, MA).

——. 1971. Principles of Systems. Wright-Allen Press: Cambridge, MA. (Now available from Pegasus Communications, Waltham, MA).

Forrester JW, Senge PM. 1980. Tests for building confidence in system dynamics models. TIMS Studies in the Management Sciences 14: 209–228.


Funke J. 1993. Microworlds based on linear equation systems: a new approach to complex problem solving and experimental results. The Cognitive Psychology of Knowledge, Strube G, Wender KF (eds). Elsevier: Amsterdam; pp. 313–330.

——. 1995a. Experimental research on complex problem solving. Complex Problem Solving—The European Perspective, Frensch PA, Funke J (eds). Lawrence Erlbaum: Hillsdale/Hove; pp. 243–268.

——. 1995b. Erforschung komplexen Problemlösens durch computerunterstützte Planspiele: Kritische Anmerkungen zur Forschungsmethodologie [Exploring complex problem solving by computer-based games: critical remarks about the research methodology]. Planspiele im Personal- und Organisationsmanagement, Geilhardt T, Mühlbradt T (eds). Verlag für Angewandte Psychologie: Göttingen; pp. 205–216.

Goodyear P, Njoo M, Hijne H, van Berkum JJA. 1991. Learning processes, learner attributes and simulations. Education & Computing 6: 263–304.

Größler A. 1999. The influence of decision time on performance in use of a business simulator. Systems Thinking for the Next Millennium—Proceedings of the 1999 Conference of the International System Dynamics Society, Cavana RY et al. (eds). System Dynamics Society: Wellington, New Zealand [CD-ROM Proceedings, Abstract p. 75].

Größler A, Notzon I, Shehzad A. 1999. Constructing an interactive learning environment to conduct evaluation experiments. Systems Thinking for the Next Millennium—Proceedings of the 1999 Conference of the International System Dynamics Society, Cavana RY et al. (eds). System Dynamics Society: Wellington, New Zealand [CD-ROM Proceedings, Abstract p. 145].

Hays RT, Singer MJ. 1989. Simulation Fidelity in Training System Design. Springer: New York.

Hesse FW. 1982. Effekte des semantischen Kontexts auf die Bearbeitung komplexer Probleme [Effects of Semantic Context on Working with Complex Problems]. Zeitschrift für experimentelle und angewandte Psychologie 29(1): 62–91.

Hines J. 2000. Molecules of Structure—Version 1.4. Available: www.vensim.com/molecule.html [December 2003].

Hooper S, Reinartz TJ. 2002. Educational multimedia. Trends and Issues in Instructional Design and Technology, Reiser RA, Dempsey JV (eds). Pearson Education: Upper Saddle River, NJ; pp. 307–318.

Kerres M. 1998. Multimediale und telemediale Lernumgebungen [Multimedia and Telemedia Learning Environments]. R. Oldenbourg: Munich and Vienna.

Keys B, Wolfe J. 1990. The role of management games and simulations in education and research. Yearly Review, Journal of Management 16(2): 307–336.

Kim D. 1989. Learning laboratories: designing a reflective learning environment. Computer-Based Management of Complex Systems: Collected Papers from the 1989 International System Dynamics Conference, Milling PM, Zahn EOK (eds). Springer: Berlin; pp. 327–334.

Kolb DA. 1984. Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall: Englewood Cliffs, NJ.

Lane DC. 1995. On a resurgence of management simulations and games. Journal of the Operational Research Society 46: 604–625.

Lepper MR, Malone TW. 1987. Intrinsic motivation and instructional effectiveness in computer-based education. Aptitude, Learning and Instruction, Vol. 3, Snow RE, Farr MJ (eds). Erlbaum: Hillsdale, NJ; pp. 255–286.


Mackinnon AJ, Wearing AJ. 1980. Complexity and decision making. Behavioral Science 25: 285–296.

Maier FH, Größler A. 2000. What are we talking about?—a taxonomy of computer simulations to support learning. System Dynamics Review 16(2): 135–148.

Packer DW, Glass-Husain W. 1997. Designing interactive multi-user learning laboratories. Proceedings of the 15th International System Dynamics Conference—Systems Approach to Learning and Education into the 21st Century, Barlas Y, Diker VG, Polat S (eds). Bogaziçi University: Istanbul, Turkey; pp. 79–86.

Perelman LJ. 1980. Time in system dynamics. TIMS Studies in the Management Sciences 14: 75–89.

Pettigrew AM. 1977. Strategy formulation as a political process. International Studies in Management and Organization 7: 78–87.

Prensky M. 2001. Digital Game-Based Learning. McGraw-Hill: New York.

Reigeluth CM, Schwartz E. 1989. An instructional theory for the design of computer-based simulations. Journal of Computer-Based Instruction 16(1): 1–10.

Senge PM. 1990. The Fifth Discipline—The Art and Practice of the Learning Organization. Currency Doubleday: New York.

Sommerville I. 2001. Software Engineering, 6th edn. Addison-Wesley: Harlow and Munich.

Spector JM, Wang X. 2002. Integrating technology into learning and working: promising opportunities and problematic issues. Educational Technology & Society 5(1): 1–7.

Sterman JD. 1994. Learning in and about complex systems. System Dynamics Review 10(2/3): 291–330.

Süß HM. 1999. Intelligenz und komplexes Problemlösen [Intelligence and complex problem solving]. Psychologische Rundschau 50(4): 220–228.

Wolfe J. 1976. Correlates and measures of the external validity of computer-based business policy decision-making environments. Simulation & Games 7(4): 411–438.