
Introduction

Most real-world problems, and the ones of interest for purposes of modeling and simulation, may be viewed as manifestations of the fundamental, microcosmic design principle that characterizes this universe and extends from the astronomical-sized galaxies down to the minutest atom. At any level of abstraction, the subsystems of a system inherently possess independence, autonomy, concurrency, and their own sense of timing; and where they interact with each other, the nature of the interaction is asynchronous. The independence of each subsystem or entity with respect to all others refers to the intrinsic, independent existence of every entity. While there may be data and information dependency between two or more subsystems of a system, each subsystem possesses its own processing or decision-making engine, reflecting its intelligence. The interactions enable the subsystems to function and evolve. Examples of complex real-world problems include the movement of goods at retail stores, plate tectonics and earthquakes, the flow of vehicles in a highway system, global weather systems, and modeling the rivers within an ocean.

The traditional approach to understanding the behavior of complex systems has been to develop analytical models that attempt to capture the system behavior through exact equations and then solve them using mathematical techniques. Given the large number of variables and parameters that characterize a system, the wide variation in their values, and the great diversity in the behaviors, the results of the analytical efforts have been restrictive. For today's problems that consist of a large number of subsystems, geographically dispersed over large distances, the limitations of the analytical efforts are so severe that the efforts themselves are of little practical use. While modeling and simulation have served as an alternative to understanding system behavior, large-scale simulation may be the most logical and often the only vehicle to study the current systems objectively. "Modeling" refers to the representation of a system in a computer-executable form, while "simulation" refers to its execution on a host computer system. Insight into the system behavior may be obtained by modeling different characteristics of the system and simulating such behavior for different choices of parameter values. Recent advances in distributed simulation coupled with the availability of powerful and accurate networked computers promise greater modeling accuracy and faster simulation results, implying a qualitative improvement in analyzing system behavior.


Clearly, the issues of modeling and simulation are intimately tied to the key characteristics of the real-world problems. The real-world problems, in turn, are organized into natural and physical processes.

1.1 THE NATURE OF PHYSICAL AND NATURAL SYSTEMS

Both physical and natural systems occur in nature and obey the natural laws. Physical processes are designed by human beings, and their behavior and limitations are, relatively, better understood. They also lend themselves to experimentation. In contrast, natural processes occur spontaneously in nature, are not man-made, and betray no intervention by obviously intelligent beings. As a result, the functional behaviors of natural systems are generally poorly understood and are assumed to be governed by a large number of variables and parameters. For example, while radioactive decay constitutes an example of a natural process, an integrated circuit design refers to a physical process. Evidently, natural systems are more difficult to model and simulate than physical systems.

A key characteristic of a natural or physical system is whether its nature is continuous or discrete. The terms "continuous" and "discrete" are generally associated with the timing behavior of a system. Thus, for electronic hardware, the issue of continuous versus discrete would focus on the timing of the signals that are generated following execution of the behavior of the electronic devices. Consider, for example, the problem of evolution on earth as viewed by aliens visiting from a distant star. In the event that they visit the earth every hundred million years, the process of evolution would appear to them as discrete. They would probably observe a different life-form dominating the earth every hundred million years. They would conclude that one life-form continually dominates the earth for about a hundred million years, is then replaced by the subsequent one, and so forth. Should a different alien race visit the earth several times during any given hundred-million-year interval, its members probably would notice a gradual evolution of life and conclude that evolution is continuous. Thus, the issue of continuous and discrete, in this example, is fundamentally one of the resolution of time used to analyze a system.

Time, however, is not the only metric that may be used to understand the continuous and discrete nature of a system. Other frames of reference include space and energy. Consider the problem of matter and assume that we have a piece of solid material. From the ordinary human perception, it appears continuous. Yet, upon examination with a fine microscope, it reveals a dense packing of discrete atoms or molecules. Each atom or molecule, in turn, appears to be continuous. When the resolution of the microscope is increased, the otherwise continuous molecules appear to consist of discrete electrons, protons, and neutrons, each of which appears to be continuous. An even finer microscope reveals that they are composed of discrete, even more fundamental particles. Thus, the issue of continuous and discrete is fundamentally one of the resolution of the specific frame of reference used to analyze a system.

Thus, from the point of view of modeling and simulation of a system, the key issues are the frame of reference utilized to analyze the system, and its resolution.

1.2 THE NATURE OF SIMULATION

Although simulation may be defined simply as emulating or mimicking the specific characteristics of an object or idea, this definition hides the fascinating nature of computer simulation and its potential capabilities. As stated in the preface, the advent of the computer has imparted to simulation a key characteristic—the ability to mimic even abstract concepts and ideas.

A second key characteristic of simulation is that it enables us, human beings, to comprehend and execute natural and physical processes that are, otherwise, beyond our ability to comprehend. For instance, consider digital hardware that works at speeds of nanoseconds, a billion times faster than the speeds at which we function. Since human beings can reasonably comprehend time ranging from a fraction of a second to several tens of years, normally, communication between us and digital systems is doomed to fail. Yet, through modeling on the computer and simulation (i.e., execution of the model on the computer), we are able to understand the progress of the digital system at our level. Thus, simulation may be viewed as dilating the hardware's notion of time—nanoseconds—and bringing it up to our notion of time. Conversely, when the birth and death cycle of a star is simulated on a computer, requiring a few minutes, the star's notion of time—namely millions of years—is essentially compressed and brought down to our notion of time. Thus, computer simulation is able to compress and dilate the notions of time inherent in different natural and physical processes.

The third and perhaps most distinctive characteristic of simulation is its ability to suspend the known natural laws of the universe, including time, space, and causality. For example, the state-of-the-art discrete event simulation algorithm that forms the cornerstone of digital hardware simulation, operations research, and so on utilizes anticipatory timing semantics to achieve efficiency which, under specific scenarios, violates the law of causality. The nature of the violation is described in [2]. This ability of simulation is significant in that it enables one to emulate unknown, hypothetical processes, those that may appear to violate the known physical laws. The history of science reveals that many ideas that had appeared to contradict existing knowledge and were expected to fail often turned out to be very successful and, in the process, qualitatively refined our understanding of the physical laws. Thus, simulation may very well constitute an indispensable tool for examining and executing such concepts and, thereby, fostering creative thinking. It may serve as a testbed to examine the nature of execution of radically different sets of physical laws. Such simulations would resemble dreams, where causal laws often appear to be suspended and physical principles violated. For instance, it is not uncommon for one to dream of conversing about current activities with one's parents, who passed away long ago. Even such scenarios may be emulated in a simulation. The only difference is that while dreams generally defy voluntary control, simulations may be deliberately planned and intelligently executed.

1.3 THE ROLE OF TIME AND CAUSALITY IN SIMULATION

Although time, space, and energy may serve as the frames of reference, the execution of most real-world problems may be understood as a function of time.

1.3.1 The Concept of Universal Time

Time constitutes an important component in the behavior of the entities and their interactions. Although every entity, by virtue of its independent nature, may possess its own unique notion of time, when a number of entities E1, E2, ..., choose to interact with each other, they must necessarily share a common notion of time, termed universal time, that enables meaningful interaction. The universal time is derived from the lowest common denominator of the different notions of time and reflects the finest resolution of time among all the interacting entities. However, the asynchronicity manifests as follows. Where entities A and B interact, between their successive interactions, each of A and B proceeds independently and asynchronously. That is, for A, the rate of progress is irregular and uncoordinated and reflects lack of precise knowledge of that of B, and vice versa. At the points of synchronization, however, the time values of A and B must be identical. Where entities X and Y never interact, their progress with time is absolutely independent and uncoordinated, one with the other, and the concept of universal time is irrelevant.
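As an illustration, one way to derive such a shared time step is sketched below in Python (a minimal sketch; the entity resolutions and the use of the greatest common divisor are illustrative assumptions, not prescribed by the text): the universal time step is taken as the largest unit that evenly divides every entity's own resolution, so that all interactions fall on one common grid.

    from math import gcd
    from functools import reduce

    def universal_timestep(resolutions_ns):
        """Return the finest common time resolution (in ns) shared by a set of
        interacting entities, i.e., the largest step that evenly divides every
        entity's own notion of time."""
        return reduce(gcd, resolutions_ns)

    # Hypothetical entities E1, E2, E3 with resolutions of 4 ns, 6 ns, and 10 ns.
    print(universal_timestep([4, 6, 10]))   # -> 2: every interaction falls on a 2 ns grid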

At a given level of abstraction, although each entity may have its own notion of time, both entities A and B must understand the universal time. Otherwise, A and B will fail to interact. Entertain the following scenario. GG1 and GG2 are two great gods of time. While the length of the interval of GG1's opening and closing of the eye is 1 million years of our time, that for GG2 is 1 nanosecond. Thus, the resolutions of the messages emanating from GG1 and GG2 are 1 million years and 1 ns, respectively. Clearly, during the million years that the eyes of GG1 are closed, all messages from GG2 to GG1, sent at intervals as low as one nanosecond, will be ignored by GG1. In contrast, assuming that GG2 has a finite life span, when GG1 opens the eyes and sends a message to GG2, GG2 has long been dead. Even if GG2 is not dead, there is no known way for GG1 to be certain that this is the same GG2 to which GG1, prior to opening the eyes, had sent an earlier message. There is also the strong possibility that GG2 will view the message from GG1, which is unchanging for billions of its timesteps, as inert and lacking any interesting information. In the Star Trek episode titled "Wink of an Eye," the beings from the planet below the Enterprise are somehow transformed into an "accelerated" state. While the crew of the Enterprise appear to the beings to be in slow motion, the beings are invisible to the crew of the Enterprise. As expected, there is no meaningful interaction between the two life-forms until the captain and the science officer of the Enterprise are also transformed into the "accelerated" state.

Thus, at any given level of abstraction, the entities must understand events in terms of the universal time, and this time unit sets the resolution of time in the simulation. An event is defined as an incident, occurrence, or experience, especially of some significance. Consider a module A with a unique clock that generates pulses every second, connected to another module B, whose unique clock rate is a millisecond. Figure 1.1 shows a timing diagram corresponding to the interval of length 1 second between 1 second and 2 seconds. Figure 1.1 superimposes the 1000 intervals, each of length 1 ms, corresponding to the clock of B. Clearly, A and B are asynchronous. Module A is slow and can read any signal placed on the link every second. If B asserts a signal value v1 at 100 ms and then another value v2 at 105 ms, both within the interval of duration 1 second, A can read either v1 or v2, but not both. The resolution of A, namely 1 second, does not permit it to view v1 and v2 distinctly.

Figure 1.1 The concept of universal time. (Timing diagram: module B's 1 ms intervals, with assertions v1 at 100 ms and v2 at 105 ms, superimposed on module A's interval of duration 1 s = 1000 ms between 1 s and 2 s.)


Thus, the interaction between A and B is inconsistent. If A and B were designed to be synchronous (i.e., to share the same basic clock), A would be capable of reading every millisecond, and there would be no difficulty.
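The scenario of Figure 1.1 may be sketched in Python as follows (a minimal sketch; the assumption that the most recent assertion persists on the link until A's next read is illustrative): module B asserts v1 at 100 ms and v2 at 105 ms, but module A, reading only once per second, observes at most one of them.

    def values_seen_by_A(assertions_ms, resolution_ms=1000):
        """Module A samples the link once per 'resolution_ms'. Of the values that
        B asserts within a single sampling interval, A observes only the one
        present at its next read; earlier assertions are lost."""
        seen = {}
        for t, value in assertions_ms:
            seen[t // resolution_ms] = value   # later assertions overwrite earlier ones
        return list(seen.values())

    # B asserts v1 at 100 ms and v2 at 105 ms, both inside A's 1-second interval.
    print(values_seen_by_A([(100, "v1"), (105, "v2")]))   # -> ['v2']; v1 is never observed by A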

1.3.2 The Different Views of Time

When a real-world problem is modeled and simulated on a host computer, the rate at which the entities are executed will depend on the details of the behavior of the entities and the speed of the host computer. In general, the time duration required by a host computer to execute an entity will differ from the execution time of the entity under actual operation (i.e., in terms of the universal time). The time required by the host computer is referred to as the wall clock time, since the time may be measured with respect to the clock on the wall. If the host computer is extremely fast relative to the process it models, the wall clock time is shorter than the universal time. If the process being modeled operates much faster than the host computer, as is frequently the case for hardware, the wall clock time is longer. Thus, while a computer rated at 200 MHz may undergo one operation cycle in 5 ns, a simulation of it may require a wall clock time of 1 ms on a host computer. The reason for simulation, of course, is to assist us in the detection of errors and design flaws.

Consider the simulation of a digital hardware system. During simulation, events may occur at intervals defined in terms of integral multiples of the universal time step. Here, "events" refer to new values, that is, changes from the previous values in time, at the inputs of the gates. Assume that gates G1, G2, G3, and G4 in Figure 1.2 need to be executed corresponding to input stimulus at 1, 5, 5, and 10 ns, respectively. Under event-driven simulation, first gate G1 is executed corresponding to a simulation time of 1 ns. Then, gates G2 and G3 are executed simultaneously for time 5 ns. Since there is no activity between the simulation times—1 ns and 5 ns—the simulator advances the simulation time to 5 ns. Similarly, the simulator next advances the simulation time to 10 ns and schedules gate G4 for execution. While the advancement of the simulation time is relatively straightforward for a uniprocessor host computer, it is more complex for a host computer that consists of a number of independent and asynchronously executing processors, as we discuss in Chapter 4.

Assume that each of the gates requires 2 ms of wall clock time for execution on the host computer. Although G1 will execute on the host computer for 2 ms, which is much longer than the 5 ns at which the gates G2 and G3 are scheduled for execution, the progress of the simulation is guided by the principle of cause and effect within the world of simulation, and the progress of simulation time is indifferent to the wall clock time.

Figure 1.2 presents the foregoing scenario graphically. The level of abstraction is defined by the gates G1 through G4. The universal time progresses in terms of 1 ns and is shown here to range from 1 to 10 ns. The simulation time is shown advancing nonlinearly from 1 to 5 to 10 ns, and the wall clock time progresses from 1 ms to 9 ms, given that the host processor requires 2 ms of execution time for each gate.

It may be pointed out that, to preserve causality and thereby ensure correctness, the progress of the simulation time must be monotonic, although it may be characterized as linear or nonlinear. For the virtual time algorithm, discussed in Chapter 4, the simulation time fails to advance monotonically, leading to uncertainty with respect to the accurate execution of events.
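The foregoing scenario can be captured in a minimal event-driven loop, sketched below in Python (the event-queue structure and print statements are illustrative; only the gate names, the event times, and the 2 ms per-gate cost come from the text). The simulation time jumps monotonically from 1 ns to 5 ns to 10 ns, irrespective of the wall clock time spent evaluating each gate.

    import heapq
    import time

    # (simulation time in ns, gate name) pairs taken from the scenario above
    event_queue = [(1, "G1"), (5, "G2"), (5, "G3"), (10, "G4")]
    heapq.heapify(event_queue)

    GATE_COST_S = 0.002   # assumed 2 ms of wall clock time per gate evaluation

    wall_start = time.perf_counter()
    while event_queue:
        sim_time, gate = heapq.heappop(event_queue)   # jump directly to the next event
        time.sleep(GATE_COST_S)                       # stand-in for evaluating the gate model
        wall_ms = (time.perf_counter() - wall_start) * 1000
        print(f"simulation time = {sim_time:2d} ns   executed {gate}   wall clock ~ {wall_ms:.0f} ms")

    # Simulation time advances monotonically (1 -> 5 -> 5 -> 10 ns) and is
    # indifferent to the wall clock time consumed by the host computer.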


Figure 1.2 Understanding the different views of time. (Gate-level abstraction defined by gates G1 through G4; simulation time advancing from 1 to 5 to 10 ns; wall clock time progressing in milliseconds.)

1.3.3 Is Time Fundamental?

Since the role of time is crucial in simulations, it is important to understand whether it is fundamental, especially if inconsistencies are observed during the simulation of real-world problems.

The use of the timing occurrence of two or more activities to uncover a physical relationship may not yield correct results under all scenarios. Two or more actions may occur together or in a sequence and cause an unsuspecting mind to conjure up an erroneous relationship between them. For instance, consider that an individual, ignorant of the gravitational law and the fact that lighter things tend to go up, observes that coconuts fall to the ground during the day and the flame of a fire points up at night. If the individual interprets these observations to mean that coconuts must fall down during the day and fire must point up at night, the conclusion is only a half-truth and the reasoning is clearly wrong. Neither of these relationships has anything to do with the time of the day.

In relativistic physics, the simultaneity of two or more actions at a given instant of time may be subjective. While two actions may appear to be simultaneous to one observer, they may appear to take place in a sequence to a different observer. The law of conservation of charge, due to Michael Faraday, originally stated that the net charge in the universe is conserved. Let us devise a nonlocal system, one where the distances between two extreme points may be nontrivial. For Faraday's law of conservation of charge to hold true in this system, if a charge at point A disappears, an identical charge must reappear at another point B. Professor Feynman [3] argued that we cannot know precisely whether a charge reappears at B in the system at the very instant it disappears at A. Therefore, we are compelled to modify the law of conservation of charge to apply only locally. Thus, time cannot be the fundamental law in the universe.

While real-world systems on earth are unlikely to be subject to the enormous speeds that warrant relativistic considerations, there is an important implication in that the use of time as fundamental may lead to inconsistencies. Indeed, time is but a manifestation of the principle of causality, which is the fundamental law in the physical universe and the fundamental doctrine of philosophy. The principle states: For every cause there is an effect, and for every effect there must have been a cause. Since the speed with which an action may occur is limited by the finite speed of light, there is, along the positive time axis, a finite duration, however infinitesimal its interval, between a cause and its effect. The principle of causality provides the basis for the distinction between the past and the future. Cause is the unmanifested condition of the effect, while effect is the manifested state of the cause.

Given a closed system, S, the activity of the constituent entities may be understood purely from cause and effect. The activity of an entity, A, is triggered by a stimulus from another entity, B, which serves as the cause. The stimulus from B is its effect and is the result of the activity of B, possibly caused by the effect of a different entity, and so on. To understand how S is initiated, two possibilities must be considered. Either S is active forever, or S is inactive initially and is triggered by one or more externally induced events at time = 0, defined as the origin of time. An example of the former is an oscillator, while examples of the latter are more common in the world.

1.4 THE AIM OF MODELING?

The goal of modeling is to represent in the host computer, as accurately and faithfully as possible, a replica of the target object or idea, including all its constituent components. The underlying assumption is that upon execution of the model on a host computer, the results generated will closely match those of the actual target object or idea. Where the ability to execute a model on a host computer is lacking, little is gained from developing the model except for limited mathematical manipulation and system documentation. Thus, along with the degree of fidelity of the model, the capabilities of the host computer system play a critical role in determining the extent to which the goals of modeling may be satisfied.

1.5 THE IMPORTANCE OF MODELING AND SIMULATION IN THE FUTURE

The benefits of modeling and simulation are many. First, they enable one to detect design errors, prior to developing a prototype, in a cost-effective manner. Second, simulation of system operations may identify potential problems during actual operation. Third, extensive simulations may potentially detect problems that are rare and otherwise elusive. Fourth, as stated earlier, hypothetical concepts that do not exist in nature, even those that appear to defy natural laws, may be studied through simulation.

Unlike in the past, the increased speed and precision of today's computers permit one to develop high-fidelity models of physical and natural processes that yield reasonably accurate results. The simultaneous decrease in the price of high-performance computing engines has increased the use of simulations in many disciplines, especially those that have a long association with simulation. Examples include the fields of computer-aided design of digital systems, construction architecture, automobile design, military planning, communications networks, satellite systems, genetics, stellar simulation, and plate tectonics. Simulations are increasingly being used in new areas such as studying the impact on ticket sales for a given chain of movie theaters as a result of introducing new services such as purchasing advance tickets for a distant theater location, or developing a new architecture for a personalized rapid transit system in a given city.

Concurrent with the proliferation of simulation, the enormous speeds of today's computers coupled with sophisticated distributed algorithms are increasingly enabling the development of fast simulations. This would permit system architects to study the performance impact of a wide variation of the key parameters, quickly and, in a few cases, even in real time. This, in turn, would imply a qualitative improvement in system design. In many cases, unexpected variations in external stress may be simulated quickly to yield appropriate system parameter values, which are then adopted into the system to enable it to successfully counteract the external stress. Such use of simulation is observed in military planning today, though the examples are limited in scope, and it may be increasingly observed in the future in the disciplines of communication networks, highway systems, and disaster management. Such simulations, in addition, closely resemble the actual operational system that is being modeled. An instructive illustration occurs in an episode of Star Trek: The Next Generation, when the engineering crew first develops and studies a simulation, which leads to modification of the harmonics of the protective shield, and finally the Enterprise actually enters an unknown energy field in space.

The speed of execution of the simulation is determined by the precision of the behavior represented by a model and the speed of the host computer on which the model is executed. Where the simulation executes slower than real time (i.e., the speed of the operational system), the model may focus on describing specific characteristics of the process in detail to yield highly accurate results through simulation that, in turn, may facilitate a deeper understanding of the process. Where the simulation speed is very fast, far exceeding that of the actual operational system, the greatest benefit of modeling and simulation may be realized in the design process. Where the simulation executes in real time, the benefits range from assisting in the design and facilitating a deeper understanding of the process to performance analysis for unexpected sets of system parameters.

The emergence of networking has enabled geographically dispersed subsystems to be interconnected to achieve superior resource utilization and higher efficiency. The world is witnessing a rapid proliferation of such system designs, and the trend is likely to continue. While they impose a tremendous burden on the traditional, uniprocessor-based simulations, the emerging asynchronous, distributed simulations that execute on loosely coupled parallel processors hold significant promise.

1.6 ACCURACY OF THE SIMULATION RESULTS: ISSUES AND LIMITATIONS

Systems ideally suited for behavior modeling and simulation on a host computer or testbed are those whose performance characteristics cannot be determined through back-of-the-envelope calculations; moreover, such systems defy characterization through analytic modeling, consist of a large number of entities extending over a wide geographical area, and are complex. For a modeling and simulation effort to be worthwhile, the key requirements include, first, amenability to extensive simulation with reasonable computing resources and within a reasonable time period. Second, there must be a high probability that the effort will yield new insights into the nature of the system that are otherwise difficult to obtain. In the book Patterns in the Sand—Computers, Complexity, and Everyday Life [4], authors Bossomaier and Green argue that unlike the traditional systems that merely required the static descriptions of the constituent units, today's complex systems require an understanding of their emergent behavior, which unfolds along with the dynamic interactions between the constituent entities. For such a complex system, the accuracy of the simulation results is influenced by a number of different factors: (i) the choice of the input stimulus or traffic utilized in the execution of the simulation, including its nature and the frequency of assertion, (ii) the fidelity of the behavior modeling (i.e., the faithfulness of the executable model relative to the underlying physical and natural processes), (iii) the ability of the simulation to capture the important absolute timing behaviors of the processes and the relative timing behaviors between two or more processes, (iv) the correct order of execution of the activities, (v) the nature of the host computer or testbed, and (vi) the precision with which the data is collected in the simulation system and analyzed.

Issues i and vi are elaborated in detail in Chapter 6; a solid intuition about the system being modeled and the ability to relate mathematics to the physics of the problem are key to addressing these issues effectively. Relative to issue ii, the fidelity of the behavior modeling is influenced by the capabilities of the underlying modeling language, the available depth of knowledge of the system being modeled, and the trade-off between the desired accuracy of the results and the computational complexity and execution time on the host computer. For example, a modeling language such as Pascal that cannot capture the relative timing between two or more subprocesses will fail to model an asynchronous system. Or consider the SPICE program, which attempts to capture the fine interactions between the transistors in a circuit; beyond a few transistors, the problem becomes intractable. To ensure accuracy, initially, the behavior model of a system is often executed one step at a time, and the order of execution of the activities is examined carefully for correctness. Also, the intermediate results are evaluated meticulously against the available knowledge of the system operation for consistency. This process is generally excruciatingly slow, and its use is limited to a small section of the system at a time. Except for simple systems, most real-world systems are complex, governed by many parameters and variables, and the task of modeling and simulating them with an extreme degree of precision may be computationally intractable. In addition, complete and precise knowledge of the behavior of most systems is generally lacking, and thus the system defies accurate representation anyway. In his book The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics [5], Penrose states that while the number of parameters of a real system is finite, their sheer number and the complexity of their interactions render any effort to obtain a precise causal relationship virtually impossible. Relative to issue iii, the ability to capture the important, possibly subtle, timing interactions in the system is addressed by the concept of the timestep, which reflects the resolution of time in the system. Where the resolution of time in the simulation is coarse, a process of the system that incurs changes faster than the resolution of time in the simulation may not be described accurately. This issue is further detailed in Section 3.2. Issue iv is discussed in the context of the proof of correctness of the underlying asynchronous distributed simulation algorithms presented in Sections 4.2.2.3 and 5.3.3. The mathematical proof of correctness guarantees that the algorithm will realize the correct order of execution of events and generate correct results, regardless of the underlying host computer and even though the system may incur millions of asynchronous events whose exact occurrences are unknown a priori.
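Relative to issue iii, the effect of a coarse timestep can be illustrated with a small sketch in Python (the toggling signal and the chosen resolutions are assumptions made purely for illustration): a process that changes faster than the simulation's resolution of time is simply not observed at the coarser resolution.

    def observed_transitions(signal, timestep):
        """Sample a signal, recorded at unit time resolution, only every
        'timestep' units and count the transitions visible at that resolution."""
        samples = signal[::timestep]
        return sum(1 for a, b in zip(samples, samples[1:]) if a != b)

    # Hypothetical signal that toggles every 2 time units over 40 units.
    signal = [(t // 2) % 2 for t in range(40)]

    print(observed_transitions(signal, timestep=1))   # -> 19: the fine resolution sees every change
    print(observed_transitions(signal, timestep=4))   # -> 0: the coarse resolution sees no activity at all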

Under issue v, when a simulation of a complex system is developed for execution on a uniprocessor, under the control of a centralized simulation algorithm, the events are forcibly synchronized by the global clock. The same is true when the simulation is developed for execution on a parallel processor under the control of a synchronous distributed algorithm. Where the system, like most real-world systems, is asynchronous, neither of the two simulations will succeed, in principle, in accurately capturing the activities of the system. The use of a loosely coupled parallel processor testbed, the distribution of the constituent entities of the system among the processors, the independent execution of the entities, and the use of an asynchronous distributed simulation algorithm that lacks a global clock will yield a simulation environment closely resembling the operational system. Similar to an actual system, the simulation system will incur the occurrence of events at irregular, a priori unknown time instants, exposing the behavior models to a realistic environment. Errors in the behavior models, if any, are likely to be magnified, and timing errors and races are likely to be manifested by the asynchronicity of the testbed. Their detection and subsequent removal will help ensure a more reliable system design. Where such errors are not exercised in the simulation, they may remain undetected, implying a potentially unreliable system design.

It must be pointed out that, despite continuing improvements, there is no known scientific mechanism to guarantee through modeling and simulation the complete absence of errors in any complex system. The real value of modeling and simulation lies in raising the level of confidence in the correctness and accuracy of the system design.