Marcelo Gomes Metello
Dynamic Modeling for Training Games
PhD Thesis
Thesis presented to the Postgraduate Program in Informatics of the Departamento de Informática, PUC-Rio as partial fulfillment of the requirements for the degree of Doutor em Informática.
Advisor: Prof. Marco Antonio Casanova
Rio de Janeiro August 2011
Marcelo Gomes Metello
Dynamic Modeling for Training Games
Thesis presented to the Postgraduate Program in Informatics of the Departamento de Informática, PUC-Rio as partial fulfillment of the requirements for the degree of Doutor em Informática.
Prof. Marco Antonio Casanova Advisor
PUC-Rio
Prof. Antonio Luz Furtado PUC-Rio
Prof. Bruno Feijó PUC-Rio
Marcelo Tílio Monteiro de Carvalho Tecgraf/PUC-Rio
Prof. Clodoveu A. Davis Jr. UFMG
Antonio Miguel Vieira Monteiro INPE
José Eugênio Leal Sectoral Coordinator of the Centro Técnico Científico - PUC-Rio
Rio de Janeiro, September 21, 2011
All rights reserved. Total or partial reproduction of this work without
authorization from the university, the author, and the advisor is
prohibited.
Marcelo Gomes Metello
graduated in Computer Engineering at Universidade
Estadual de Campinas (2000), and received his Master's
Degree in Computer Science from Stanford University
(2001). He has been working in applied research and software
engineering at the Tecgraf/PUC-Rio lab since 2002.
Ficha Catalográfica
Metello, Marcelo Gomes
Dynamic Modeling for Training Games / Marcelo Gomes Metello ; orientador: Marco Antonio Casanova. – 2011.
161 f. : il. ; 30 cm
Tese (Doutorado em Informática) – Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, 2011.
Inclui bibliografia.
1. Informática – Teses. 2. Jogos de treinamento. 3. Modelagem dinâmica. 4. Simulação. I. Casanova, Marco Antonio. II. Pontifícia Universidade Católica do Rio de Janeiro. Departamento de Informática. III. Título.
To my wife Katia, and my daughters Isabela and Giovana
Acknowledgements
To my advisor, Prof. Marco Antonio Casanova, for his great encouragement and
technical guidance.
To the Tecgraf/PUC-Rio lab, especially to my coordinator Marcelo Tilio and Prof.
Marcelo Gattass, for their support which made this work possible.
To CNPq and PUC-Rio, for the grants which made this work possible.
To my family for all the support and encouragement.
To all professors of the Department of Informatics, PUC-Rio for their knowledge
and help.
Resumo
Resumo em português.
Palavras-chave
Jogos de treinamento, Modelagem dinâmica, simulação.
Abstract
This thesis focuses on training games, whose objective is not to entertain but
to help improve human decision making capabilities. Such games usually require
simulation of realistic situations. Since other areas of Computer Science provide
methods and tools for simulating and reasoning about real situations, it is highly
desirable to use them in training games. This thesis then starts by surveying the
areas of modeling and simulation, geographic information systems and multi-
agent systems with the purpose of designing a framework on which different
techniques from these areas can be integrated into a training game architecture.
The main contributions of this thesis are a discussion on the requirements
for dynamic modeling in the context of training games, the conception of the
process-oriented simulation (POS) paradigm as a consequence of the discussion,
and the materialization of POS in a DEVS-based formalism, called Process-
DEVS. As secondary contributions, this thesis describes a mapping of a workflow
representation to Process-DEVS, a framework for modeling cell space processes
on top of Process-DEVS with composition capabilities that preserve individual
independence of the sub-models, and a framework for modeling multi-agent
systems on top of Process-DEVS. As additional contributions, this thesis includes
the development of a planning system and a training game for the InfoPAE
system as a use case of Process-DEVS, and a technique to enable game loops to
handle variable game speeds and simulation processing peaks.
Keywords
Training games, dynamic modeling, simulation.
Table of Contents
1 Introduction 11
1.1 Motivation 11
1.2 Simulation Overview 13
1.2.1 Modeling and Simulation 14
1.2.2 Agent-Oriented Simulation 15
1.2.3 Simulation in Geographic Information Systems (GIS) 16
1.3 Requirements for Training Games 17
1.4 Objectives and Contributions 21
2 Related Work 23
2.1 Computer Games 23
2.1.1 Scene Graphs 24
2.1.2 Game Loops 26
2.2 Modeling and Simulation 27
2.2.1 The DEVS Formalism 28
2.2.2 Cellular Automata 32
2.3 Multi-Agent Systems 34
2.3.1 Jason 35
2.3.2 SeSam 37
2.4 Planning 41
2.5 Summary 42
3 A Framework for Dynamic Modeling in Training Games 43
3.1 Introduction 43
3.2 A Discussion on the Framework Requirements 43
3.2.1 On the Nature of Time 44
3.2.2 On the Nature of Simulation Elements 47
3.2.3 On the Interaction between Elements 54
3.2.4 The Process-Oriented Simulation Paradigm 56
3.2.5 Process Creation and Destruction 57
3.3 The Process-DEVS Formalism for Process Modeling 59
3.3.1 Formal Model 59
3.3.2 Operational Semantics 70
3.4 Summary 78
4 Integrating Existing Formalisms 79
4.1 Workflows 79
4.1.1 Motivation: Business Process Modeling 80
4.1.2 A Discussion on Workflow Representation 81
4.1.3 Formal Workflow Model 86
4.1.4 Workflow Composition 95
4.2 Cell Space Processes 96
4.2.1 The Modularity Problem of Cellular Automata 96
4.2.2 Separating Behavior from Cell Space 97
4.2.3 Composition of Cell Space Processes 101
4.3 Multi-Agent Systems 107
4.3.1 Modular Agent Architecture 107
4.3.2 Simulation of Multi-Agent Systems 111
4.4 An Informal Discussion on Process Patterns 112
4.4.1 Parallel Pattern 113
4.4.2 Interference Pattern 113
4.4.3 Composite Pattern 115
4.5 Summary 117
5 The InfoPAE Use Case 118
5.1 Planning for Emergency Situations 118
5.2 A Motivating Example - Contingency Plans for Oil Leaks 121
5.3 Simulation Dynamics 123
5.3.1 The Environment 123
5.3.2 Processes 126
5.4 The InfoPAE Plan Simulator and Training Game 134
5.4.1 The InfoPAE Plan Simulator 134
5.4.2 The InfoPAE Training Game 136
5.5 Time Management 138
5.5.1 Simulation Speed and Game Loops 139
5.5.2 A Loop Model Study 142
5.6 Summary 149
6 Conclusions and Future Work 151
6.1 Conclusions 151
6.2 Future Work 154
7 References 156
1 Introduction
1.1 Motivation
Since computer games were first commercialized in the 1970s, their
popularity has grown continuously. Today, gaming has become an industry
worth a staggering USD 65 billion in revenues (Reuters 2011), on a par with
other well-established entertainment industries such as music and movies.
All this growth has brought huge investments to the development of new
computer technologies, particularly those devoted to increasing audio-visual
realism. Specialized hardware for graphics acceleration has been one of the main
focuses of innovation in the gaming industry. More recently, physics simulation
has also become one of the main disciplines in game development (Kirmse 2004).
The performance required to achieve this kind of realism also requires fast
algorithms to run the game logic. For this purpose, game AI has also become one
of the main research fields in computer games (Nareyek 2004). More recently,
with the internet and the increasing adoption of network multi-player games,
much of the research turned to reducing the impact of network delays in player
experience (Smed et al. 2002; Pantel and Wolf 2002). One interesting side effect
of the evolution of graphic cards is that they have become so powerful that there is
currently a research field trying to apply that potential for generic parallel
programming, including complex simulations (Ryoo et al. 2008).
The interactive aspect of computer games can provide players with deeply
engaging experiences, allowing them to be someone they could never be, do
things they could never do and go to places they could never go (Sheldon 2004).
The experience that entertainment games provide is certainly appealing and
sometimes dangerously addictive. On the other hand, such an immersive
experience can also be used for purposes other than mere entertainment, such
as learning and training.
The so-called serious games (Susi et al. 2007) show that computer games
may have purposes other than mere entertainment. In this kind of game, the
goal is less to have fun during play than to learn and improve important skills
during and after game play. Areas such as the military, medicine, architecture,
education, urban planning, and government can in fact draw many benefits
from computer games technology (Smith 2007; Susi et al. 2007). The class of
serious games includes all games designed for non-entertainment purposes.
Some of the earliest serious games were simulators for military
operations and aircraft flights. They grew in parallel with the entertainment
gaming industry, also receiving considerable investment. These games are
technically very similar to entertainment games, sharing with them the
audio-visual realism requirement. However, they differ considerably with
respect to simulational realism requirements. The term simulational realism
denotes the degree of correctness of the simulation methods used behind the
games, in the sense that they correctly simulate real-world phenomena. While
it is common practice in entertainment games to sacrifice simulational realism
for a better playing experience, the correctness of the simulations is often more
important in serious game design (Michael and Chen 2005). Henceforth,
simulational realism shall be referred to simply as realism throughout the rest
of the text.
While computer games used for entertainment usually create their own
reality, simulators must represent realistic situations. This might be the main
reason why, until recently, most of the scientific community viewed computer
games as a somewhat non-serious research subject. In fact, computer games
have traditionally been one of the most informal areas of Computer Science.
In addition, since the gaming industry is so competitive, top technologies are
often developed in secret and are only published after becoming relatively
obsolete. Until recently, it was hard to find game research literature with the
same level of depth and formalism as other, more traditional Computer
Science areas.
This thesis focuses on one specific class of serious games, namely training
games. In the scope of this thesis, training is the process of improving decision
making capabilities of some set of humans through the simulation of realistic
situations and training games are games designed to assist in that learning process.
For simplicity, computer games shall be referred to simply as games.
In psychology, a number of sophisticated models for understanding the
learning process have been developed. According to Piaget (1972), people learn
by assimilating knowledge obtained in a new experience to their existing schemas.
Learners build new cognitive structures in this process of assimilation where
previously existing structures are adjusted to incorporate new knowledge. Also
useful here is Kolb's model for experiential learning (Kolb 1984). It sees learning
as a cyclic process of experience and reflection. Experience generates
observations which feed a process of reflection where new mental models are
formed. These new models are then tested through experience again. The main
benefit of training games to that process is that they can serve as a source of
simulated experiences at possibly much lower cost than real experiences.
Consider, for example, disaster management, where field exercises often
demand many people, resources and time. Besides that, the use of simulated
situations allows one to control factors that would otherwise be uncontrollable,
such as weather conditions.
In general, games can always be thought of as simulations where the
player(s) take active roles and interfere with the simulation progress during its
execution. Therefore, the notion of interactive simulation comprises the notion of
game. Interactive simulators may have different objectives, such as studying and
predicting the behavior of real dynamic systems. What characterizes an interactive
simulation as a game is the notion of a goal that should be pursued by the player(s).
Since this thesis focuses on training games, which are simulations of
realistic situations, it takes into consideration previous work developed in the
area of simulation and other related areas, and tries to integrate these existing
techniques into a framework for training games. The following sections provide
an overview of these related areas and a list of requirements for training games
that will be addressed in Chapters 3 and 4.
1.2 Simulation Overview
This section briefly overviews the areas of Modeling and Simulation,
Agent-Oriented Simulation and Geographic Information Systems. These
Computer Science areas provide simulation techniques that can help achieve
simulational realism in training games.
1.2.1 Modeling and Simulation
Modeling and Simulation is an active research area almost as old as
Computer Science itself. In this context, simulation is not just a tool but a
scientific discipline whose purpose is to study real systems through the
construction of computational models for these systems (Michel et al. 2009). Such
models can then be used either for understanding the behavior of a real system or
helping in the design of a new one. Modeling and Simulation techniques have
been successfully adopted in fields such as military applications, health care,
logistics, construction engineering, supply chain management, electronic circuits
manufacturing, business process modeling, biological sciences and emergency
response, among others. Active research fields in simulation include:
Modeling formalisms, languages and processes. Specific
application domains usually adopt modeling formalisms, languages
and processes in which clear and correct models become simpler to
define (Sánchez 2006; Robinson 2006).
Model validation. Models need to be properly validated with formal
methods so that their results can be trusted (Sargent 2009; Coyne et
al. 2008).
Simulation algorithms and distributed simulation. As the
complexity of simulation models grows, more powerful simulation
platforms are required. This is achieved by making simulation
algorithms more efficient or by parallelizing the execution
(Perumalla 2006).
Reuse and integration of simulation models and systems. From
the software engineering point of view, the challenges are at multiple
levels: the reuse of simulation models (or parts of models) defined in
the same formalisms (Balci et al. 2008; Röhl and Uhrmacher 2008),
the integration of models defined in different formalisms (Eker et al.
2003; Lee and Zheng 2005; Sarjoughian et al. 2008) and, more
broadly, the integration of different simulation systems (IEEE 2000;
Benjamin and Akella 2009).
Traditionally, Modeling and Simulation is more focused on fully automated
simulations than on interactive simulations. Interactive simulations are those in
which one or more human users take part in the simulation dynamics. The
addition of human elements brings some interesting challenges. From the
simulation point of view, it is necessary to (1) provide communication between
human and fully automated simulation elements, and (2) find ways to properly
synchronize the actions of these two inherently asynchronous kinds of elements.
Indeed, one of the current topics in the integration of heterogeneous simulation
models is the integration of asynchronous models (Eker et al. 2003). If simulation
techniques are to be integrated with computer games, this is certainly one of the
main issues to work on.
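As a minimal illustration of point (2) above, the sketch below merges asynchronous human actions and scheduled simulation events into a single time-ordered queue. The class and event names are purely illustrative assumptions, not tied to the thesis's formalism or to any particular simulation platform:

```python
import heapq
import itertools

class InteractiveEventQueue:
    """Merges scheduled simulation events with asynchronous human input.

    Human actions are timestamped on arrival and pushed into the same
    time-ordered heap as model-generated events, so both kinds of
    elements are processed in a single, causally consistent order.
    """
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal timestamps

    def schedule(self, time, event):
        """Schedule a model-generated event at simulation time `time`."""
        heapq.heappush(self._heap, (time, next(self._counter), event))

    def inject_input(self, current_time, event):
        """Inject a human action; it takes effect at the current sim time."""
        heapq.heappush(self._heap, (current_time, next(self._counter), event))

    def pop_next(self):
        """Return (time, event) for the next event in time order."""
        time, _, event = heapq.heappop(self._heap)
        return time, event

    def __len__(self):
        return len(self._heap)

# Usage: two automated events are scheduled; a player acts in between.
q = InteractiveEventQueue()
q.schedule(5.0, "valve_fails")
q.schedule(10.0, "alarm_times_out")
q.inject_input(7.0, "player_closes_valve")
order = [q.pop_next() for _ in range(len(q))]
print(order)  # events come out in timestamp order
```

This is only one possible synchronization scheme; approaches that pause the simulation while waiting for input, or that batch inputs per frame, are equally common.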
1.2.2 Agent-Oriented Simulation
Agent-Oriented Simulation (AOS) (Uhrmacher and Swartout 2003), or
Agent-Based Modeling and Simulation (ABMS), lies at the intersection of the
multi-agent systems (MAS) and simulation fields. It is a paradigm that
represents not only a specific kind of dynamic system but essentially a new
approach to modeling these systems, by thinking about and designing them as societies of
autonomous agents. Its benefits appear mainly in the simulation of complex
systems (i.e. systems composed of many interacting and autonomous entities)
such as agent-based social simulations (ABSS). More than just simulating the
evolution of a system from a given input set, MAS work as artificial worlds where
experiments can be made. One of the main qualities of AOS is its capacity to
integrate quantitative variables, differential equations and behaviors based on
symbolic rule systems, all in the same model (Michel et al. 2009).
Current challenges in AOS include combining MAS techniques for
modeling cognitive behavior, such as the BDI architecture (Rao and Georgeff
1992), with simulation platforms (Bordini and Hübner 2009). Moreover, the
level of formalism developed in the simulation field is still relatively lacking in
MAS: ways of creating well-defined, platform-independent MAS models, model
reuse and formal methods of verification must still evolve (Michel et al. 2009).
Another interesting problem typically found in ABSS is how to model causal
relations among different abstraction levels (Troitzsch 2009).
There is some tendency in the MAS field to anthropomorphize agents,
giving them human qualities such as intelligence, cooperation and rationality
(Uhrmacher and Swartout 2003). The techniques developed in that direction could
be of great use in training games. Therefore, the effort to incorporate them into the
dynamic models of training games is perfectly justifiable.
1.2.3 Simulation in Geographic Information Systems (GIS)
Geographic Information Systems (GIS) is also a long-lived field in
Computer Science. Traditionally, most of the work in the field is devoted to
storing, querying, processing, analyzing and mining geographical data. In the
field of serious games, GIS are expected to play a major role. Since one of the
main focuses of serious games is simulational realism rather than audio-visual
realism, their simulations tend to take place in realistic rather than imaginary
scenarios.
In this situation, GIS can contribute to serious games in different ways (Gonçalves
et al. 2004):
Access to GIS Databases. By providing efficient access to
repositories of geo-referenced spatial data (Güting 1994), GIS help
build realistic scenarios for games.
Spatial Operators. Spatial querying, distance and topological
operators (Egenhofer 1991) may be used in the simulation logic of
serious games.
Simulation Models. Dynamic models of anthropic and natural
phenomena, such as land use change (Carneiro 2006), traffic control
(Kesting et al. 2009) and socio-economic dynamics (Batty 2001)
have been extensively studied in the GIS field (van Deursen 1995).
They may be quite useful, depending on the game domain.
Visualization. Although GIS visualization tools do not provide top-
quality graphics and interaction as modern games do, some of them
provide interesting ways of visualizing relevant information (Dykes
et al. 2005). They may be used by serious games that do not require
state-of-the-art graphics. Some specific techniques have been developed
for rendering GIS data in 3D (Schneider et al. 2005).
Real-Time Monitoring. The Global Positioning System (GPS) and
sensor networks provide techniques for monitoring entities in real
time (Akyildiz et al. 2002). These techniques could be useful in
training games in which computer simulation is mixed with real
dynamics.
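To make the Spatial Operators item above concrete, the sketch below implements two elementary building blocks of spatial querying, a distance operator and a point-in-polygon topological test, in plain Python. The helper names and the oil-leak usage scenario are illustrative assumptions; real GIS offer far richer operator sets than this:

```python
import math

def distance(p, q):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside the simple polygon
    given as a list of (x, y) vertices?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray cast from `point`?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Usage: is a drifting particle inside a protected area?
protected_area = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(point_in_polygon((1.0, 1.0), protected_area))  # True
print(point_in_polygon((5.0, 1.0), protected_area))  # False
print(distance((0, 0), (3, 4)))  # 5.0
```

Operators like these can drive simulation logic directly, e.g. triggering an alarm event when a simulated entity enters a zone of interest.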
As a last remark, all the standardization efforts in GIS, such as those
promoted by the Open Geospatial Consortium (OGC) (Percivall 2003), help
serious game applications interact with multiple Spatial Data
Infrastructures (SDIs) and, more generally, with any GIS that follows the
standards.
1.3 Requirements for Training Games
Naturally, the requirements for training games may differ from game to
game. For example, while some may demand audio-visual realism, others may
devote their resources to correct simulations. However, that does not mean that
the requirements for training games cannot be studied from a generic point of
view. This section aims at identifying the major classes of training game
requirements that are not among the priorities of most entertainment games.
These general requirements will not be present in every training game, but it
is expected that they will appear most of the time.
The definition of training games used in this thesis raises two important
points: they aim at improving human decision making processes and they simulate
realistic situations. The classes of requirements listed in what follows derive
directly from these two properties:
Realism: As a principle, if the training game does not simulate a
realistic situation, it cannot help improve human decision making
capabilities in the real world, at least not directly. This observation
leads to the following definition: a realistic simulated situation is one
in which the outcome of the players' actions is similar to that of a real
situation. Training games must provide this in order to fulfill their
objectives.
Game-Like User Experience: It should be possible to embed the
simulation models into a game engine capable of deploying current
computer games technology.
Dynamically Change the Game Speed: Sometimes, when
simulating a real situation, it is convenient to speed up the game
simulation to help players focus on what is really important
(Michael and Chen 2005). Since the focus is on decision making,
the player should be able to fast-forward periods that do not
require decision making, thus saving time and avoiding boredom.
Player Performance Evaluation: From the learning models
developed by Piaget (1972) and Kolb (1984), it is clear that the
training process through games involves more than simply game
playing. It is also necessary to provide means for the whole learning
process. If the purpose of training games is to improve decision
making, it seems natural to provide means for analyzing the
correctness of player decisions during the game. Traditionally, most
entertainment games attribute scores to players in order to stimulate
competition. Although useful, collecting game statistics and
transforming them into a single score may not be enough for games
focused on learning, such as training games. Other approaches,
considering the whole learning process, seem necessary.
Scenario Composition Capabilities: Following the cyclic learning
process just mentioned, an important requirement for training games
that should not go unnoticed is their ability to serve as a testing
environment for new ideas. This may require that the game
dynamics be applicable to a variety of scenarios. Particularly
important is to be able to run the game on scenarios composed by
end users to test their new ideas and decision procedures, as well as
to test their existing decision procedures on different scenarios. Even
though scenario composition is a well known feature of a number of
entertainment games, it is usually limited to setting the state of
physical objects at the beginning of the game. Training games may
require more sophisticated composition mechanisms capable of
setting the main storyline and other dynamic aspects of the game.
Needless to say, simulation modularity is mandatory in this case.
Integration with Existing Systems and Databases: Corporate
simulators may require interaction with previously existing
systems and databases, such as those of GIS. The integration can
systems and databases, such as those of GIS. The integration can
work in both ways, either for reading data as input or sending the
simulation outputs to them. This requirement may also require a
certain level of simulation modularity so that simulation models can
be defined independently of any external data source.
Simulation Reuse by Different Systems or with Different Player
Configurations: Multiple systems can benefit from sharing the same
underlying simulation. For example, a simulation-based single user
planning system may use the same simulation as a multi-player
training game. Moreover, the same application may require certain
flexibility on its simulation elements. For example, a multi-player
training game may be played with different sets of players, with
fully automated non-player characters (NPCs) taking the roles for
which there are no human players. Simulation modularity is
important to allow this flexibility in simulation use.
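The "Dynamically Change the Game Speed" requirement above is commonly met with a fixed-timestep game loop driven by a speed multiplier. The sketch below is a generic illustration of that idea, not the time-management technique developed later in this thesis; all names and parameters are assumptions chosen for the example:

```python
import time

def run_game_loop(update, render, clock=time.monotonic,
                  speed=1.0, sim_dt=0.1, duration=1.0):
    """Fixed-timestep game loop with a speed multiplier.

    Elapsed wall-clock time (from `clock`) is multiplied by `speed`
    and accumulated; the simulation then advances in fixed steps of
    `sim_dt`, so fast-forwarding changes how many steps run per frame
    but never the step size the model sees.
    """
    sim_time = 0.0
    accumulator = 0.0
    last = clock()
    while sim_time < duration:
        now = clock()
        accumulator += (now - last) * speed   # scaled elapsed wall time
        last = now
        # Consume the accumulator in whole fixed simulation steps.
        while accumulator >= sim_dt and sim_time < duration:
            update(sim_time, sim_dt)
            sim_time = round(sim_time + sim_dt, 10)  # keep times tidy
            accumulator -= sim_dt
        render(sim_time)
    return sim_time

# Usage with a deterministic fake clock: frames arrive every 50 ms of
# wall time, and the game runs at double speed.
frame_times = iter(i * 0.05 for i in range(1000))
steps = []
final = run_game_loop(update=lambda t, dt: steps.append(t),
                      render=lambda t: None,
                      clock=lambda: next(frame_times),
                      speed=2.0, sim_dt=0.1, duration=1.0)
print(len(steps), final)
```

In a real game, `speed` would be a variable the player can change between frames; because the model always advances by the same `sim_dt`, changing it never alters the simulation outcome, only how fast it unfolds.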
The last three requirements mention modularity as a desirable property of
training games. Indeed, this kind of software is likely to be used by corporations
with specific needs. It is unlikely that all necessary simulation elements will be
available in some sort of generic third-party simulation package. Most of the
time, some specific customization is needed. Moreover, corporations usually
have needs that change over time. Therefore, unlike the case of most
entertainment games, there is a need for some sort of continuous development,
and designing simulators in a modular way is mandatory from a software
engineering standpoint.
In the simulation area, modular design has been a concern for a long time.
Simulation formalisms such as DEVS (Zeigler 2000) and System Dynamics
(Forrester 1972) are good examples of how to achieve modularity in dynamic
modeling. Modularity has helped to achieve three main goals in simulation. First,
it allows easier model reuse. Second, it makes the dynamic models more
intelligible and easier to change. Finally, it allows one to build more flexible
simulation software where the users can compose their simulations out of small
components.
Probably, the best examples of reuse of software components in computer
games are the so-called game engines. However, they are usually focused on
specific kinds of games and the simulation capabilities they offer, if any, are also
focused on specific kinds of processes, such as physical simulation of mechanics,
collision detection, character movement and animation interpolation. Usually,
they do not offer a formal simulation framework flexible enough to embrace other
interesting simulation formalisms found in other Computer Science fields.
1.4 Objectives and Contributions
This thesis aims primarily at investigating the requirements and proposing
solutions for the implementation of training games, with a specific focus on the
requirements listed in Section 1.3. Since these requirements are usually not
emphasized in entertainment games, it is expected that this work will contribute to
the current expansion of the use of game techniques in application fields other
than entertainment.
It is also an objective of this thesis to contribute to raising the level
of formalism in the design of game dynamics by discussing dynamic modeling
paradigms. It is expected that the findings of this discussion will provide a
more solid foundation for both serious and entertainment games, even though
the focus is on serious games.
More concretely, this thesis presents a formal dynamic modeling framework
that facilitates the integration of games with technologies developed in other
relevant areas of Computer Science, such as modeling and simulation, multi-agent
systems, geographic information systems and knowledge representation. This
integration aims at fulfilling most of the training games requirements by
importing well-founded solutions from these areas. This thesis also expects to
fulfill the remaining requirements by developing specific solutions.
Finally, this thesis presents a concrete game for disaster simulation to
demonstrate how the proposed framework can be applied to a real problem. This
game is part of the InfoPAE system (Carvalho et al. 2001), a system
designed to help manage emergency situations.
In short, the objectives of this thesis are:
Study the general requirements for training games
Investigate which techniques already developed in other
areas of Computer Science would help fulfill these
requirements
Elaborate a conceptual framework to integrate these
techniques and point out the restrictions that should be
obeyed
Develop new techniques to fulfill the remaining requirements
Implement a real training game to test the proposed solutions
The major expected contributions are:
Provide a formal discussion on game modeling paradigms
thereby contributing to increase the level of formalism in the
computer games area
Define a detailed framework for designing the dynamics of
training games, in which it is possible to integrate techniques
from other areas of Computer Science and achieve a high
level of modularity and reuse
Implement a real training game for the InfoPAE system
(Carvalho et al. 2001) based on the proposed solutions
The rest of this thesis is organized as follows. Chapter 2 lists representative
research in other areas that helps fulfill the requirements, as well as
similar frameworks used as the basis for designing the proposed framework.
Chapter 3 discusses in depth the principles of dynamic modeling in games
and formally defines the proposed framework. Chapter 4 shows how to
integrate some existing formalisms with the proposed framework. Chapter 5
presents a concrete implementation of a training game for managing disaster
situations. Finally, Chapter 6 draws the conclusions of the work and
contains suggestions for future work.
2 Related Work
This chapter briefly describes a number of techniques and tools developed
across different fields that are potentially interesting to training games,
organized around the concepts behind each major requirement.
Apart from those developed in the computer games field, all the techniques
and tools are somehow related to representing, analyzing, generating and
simulating dynamic processes.
2.1 Computer Games
Existing computer game techniques are the obvious first place to look.
Unfortunately, since the gaming industry is so big and competitive, most
companies keep their top technologies secret and only allow them to be
published after they have become relatively obsolete and lost most of their
market value.
Over the last decades, one of the problems that has received the most
attention in gaming is undoubtedly real-time rendering. With huge investments, modern
graphic cards have been developed specifically for this purpose. Along with that
hardware, software representations for virtual worlds also received considerable
attention from Computer Science researchers. Most three-dimensional real-time
computer games represent their virtual worlds in the form of a scene graph
(Strauss and Carey 1992).
Although interesting and challenging, real-time rendering is outside the
scope of this thesis due to its complexity. However, a brief review of scene
graphs is given to show that the rendering requirements of a game may dictate
the way the world should be represented in memory.
Since this thesis focuses on dynamic modeling, special attention will be
given to how games usually implement their dynamics. A closer look at the
different kinds of game loops may help in that task.
2.1.1 Scene Graphs
Most high performance 3D applications use scene graphs, which are
specialized data structures developed to exploit the full power of modern 3D
rendering hardware.
A Scene Graph is a directed acyclic graph where the nodes represent
geometric shapes, rendering attributes (e.g., colors, materials, textures and so on),
coordinate system transformations, light sources or any other information relevant
for rendering the 3D scene. The rendering process consists of traversing the scene
graph in depth-first order, processing all information and sending commands to
the graphics card as the nodes are visited.
Figure 2.1 illustrates a very simple scene graph for rendering a water
molecule. At the top, there is the root node, which is a group node, as are the
oxygen, hydrogen1 and hydrogen2 nodes. The purpose of a group node is simply
to aggregate a set of scene objects and properties. The sphere nodes contain the
geometric shape of the spheres that will represent the atoms. The material nodes
contain information about the colors and styles with which the spheres should be
drawn. Note that the two hydrogen atoms share the same geometric shape node
and material node because they are visually equal. However, each of them has its
own transformation node. That is because they are drawn at different locations in
the 3D space. The transformation node determines the position of the objects
relative to the position of their parent group node in the graph.
Figure 2.1 – Example Scene Graph for a Water Molecule
This simple example covers the most elementary types of nodes that are
present in basically all scene graphs. Naturally, each scene graph implementation
defines more specialized nodes which are not necessarily present in all
implementations.
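The traversal just described can be illustrated with a small executable sketch. The node types and state-propagation rules below are simplified illustrations, not the API of any real scene graph toolkit; "rendering" is simulated by collecting draw commands during a depth-first traversal of the water-molecule graph of Figure 2.1.

```python
class Group:
    """Aggregates children; state changes made inside do not leak out."""
    def __init__(self, *children):
        self.children = children
    def traverse(self, state, out):
        local = dict(state)                # push: isolate this subtree's state
        for child in self.children:
            child.traverse(local, out)

class Material:
    def __init__(self, color):
        self.color = color
    def traverse(self, state, out):
        state["color"] = self.color        # affects the following siblings

class Translate:
    def __init__(self, dx, dy, dz):
        self.d = (dx, dy, dz)
    def traverse(self, state, out):
        x, y, z = state["pos"]
        dx, dy, dz = self.d
        state["pos"] = (x + dx, y + dy, z + dz)

class Sphere:
    def __init__(self, radius):
        self.radius = radius
    def traverse(self, state, out):
        # a real renderer would issue graphics card commands here
        out.append(("sphere", self.radius, state["pos"], state["color"]))

# Water molecule: the two hydrogens share one sphere and one material node,
# but each sits under its own transformation (coordinates are arbitrary).
h_sphere, h_material = Sphere(0.3), Material("white")
scene = Group(
    Group(Material("red"), Sphere(0.6)),                     # oxygen at origin
    Group(Translate(-0.76, -0.59, 0.0), h_material, h_sphere),
    Group(Translate(0.76, -0.59, 0.0), h_material, h_sphere),
)

commands = []
scene.traverse({"pos": (0.0, 0.0, 0.0), "color": None}, commands)
for command in commands:
    print(command)
```

Note that, although the hydrogen sphere and material nodes are stored only once, the traversal emits two draw commands for them, each under a different accumulated transformation.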
Scene graph toolkits usually provide facilities for organizing and spatially
indexing the objects in the graph. This is extremely important for rendering
performance, for it allows the rendering algorithm to prune significant chunks of
the graph that are not visible. Since it is too costly to calculate that information at
rendering time, it is essential that the objects be grouped or indexed by spatial
proximity.
Naturally, there is much more to scene graphs than the simple overview
presented here. However, the details of scene graphs are not important for
modeling the dynamics of training games. It is only necessary to keep in mind the
following observations:
- In order to make full use of 3D gaming visual resources, it is
necessary to store the visible 3D objects in highly specialized data
structures.
- Scene graphs are oriented towards rendering, not modeling
(Sowizral 2000).
2.1.2 Game Loops
Most computer games are inherently real time interactive applications. Their
execution must be synchronized with the real time flow. Sometimes there is a
need for accelerating the pace of a game. For example, in a training game that
simulates an emergency situation that may last for days, the simulation should
obviously not take the same amount of time. Periods requiring no decision making
should be fast-forwarded. However, in these cases, the game also requires
synchronization with the real time flow, only at a different rate in each game
stage.
From a functional point of view, real-time applications consist of three
tasks executed concurrently. First, the application must continuously check for
player input and process the corresponding commands. Second, the state of the
world must be continuously updated. Finally, the resulting world state must be
presented to the player(s). These three tasks shall be referred to as read input,
update and render, respectively. The different ways these tasks can be interleaved
at running time define the game loop models.
As a first attempt, this concurrent execution could be achieved by running
each task on a separate thread. However, this approach may run into difficulties
because some hardware platforms fail to provide adequate thread support when
precise timing is required (Dalmau 2003). Instead, most professional games
simulate this concurrent behavior with regular single-threaded loops and timers.
Game loops can be classified as coupled or uncoupled according to the
order in which their main tasks are executed (Valente et al. 2005). In coupled
loops, the three tasks are executed sequentially and at the same frequency.
Coupled loops are only useful when the hardware on which the game will run is
fixed and known in advance, as in videogame consoles. They are not adequate for
games that need to run on different machines, such as PC games. To address
this need, professional computer games usually implement an uncoupled game
loop. This kind of loop has the advantage of allowing tasks to execute at
different frequencies. This is useful, for example, to increase the rendering
frequency on powerful machines without changing the frequency of game logic
processing. However, increasing the rendering frequency without also increasing
the frequency of world updates is of limited benefit, because the same scene
would be rendered multiple times. What most professional games do is to
separate their update task into two subtasks. Usually, the game logic and artificial intelligence
algorithms run at a fixed frequency while tasks that determine the positioning of
visible objects into the game scene but do not affect the game logic run at the
highest achievable frequency (Dalmau 2003). Animation interpolation is one
example of such tasks. These types of game loops are exemplified in Figure 2.2.
Figure 2.2 – Examples of Coupled and Uncoupled Game Loops. Source: (Valente et al. 2005).
The coupled loop on the left executes all tasks sequentially and at the same
frequency. The loop model on the right actually has two loops, one that executes
at a fixed frequency and one that executes as frequently as possible. The
executions of the two loops are interleaved according to the speed achieved at
runtime. The important conclusion here is that professional computer games
usually require that their world update frequency be defined at runtime for the
variable frequency update subtasks.
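The uncoupled loop on the right can be sketched in a few lines. The constants are illustrative (a 100 ms logic step and a simulated 40 ms render cost), and the clock is simulated so that the run is deterministic; a real loop would query the hardware timer instead.

```python
UPDATE_DT = 100   # fixed logic step (ms): game logic runs 10 times per second
FRAME_TIME = 40   # simulated render cost (ms): a machine rendering at 25 fps

def run(total_ms):
    updates, renders = 0, 0
    accumulator, now = 0, 0
    while now < total_ms:
        now += FRAME_TIME              # one pass of the free-running loop
        accumulator += FRAME_TIME
        while accumulator >= UPDATE_DT:
            updates += 1               # update_logic(UPDATE_DT) would go here
            accumulator -= UPDATE_DT
        alpha = accumulator / UPDATE_DT
        renders += 1                   # render(interpolate(prev, cur, alpha))
    return updates, renders

print(run(1000))   # → (10, 25): 10 logic updates, 25 rendered frames
```

On a faster simulated machine (smaller `FRAME_TIME`), `renders` grows while `updates` stays pinned to the fixed logic frequency, which is exactly the point of the uncoupled design.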
2.2 Modeling and Simulation
According to the realism requirement, training games must simulate the
dynamics of some situations in such a way that the outcome is similar to that of
the real world. Therefore, it seems very logical to look at the techniques
developed to simulate real-world systems (von Neumann 1966; van Deursen
1995; Zeigler et al. 2000). The main purpose of this area is precisely to develop
computational models to simulate reality.
2.2.1 The DEVS Formalism
The Discrete Event System Specification (DEVS) formalism introduced by
Zeigler (1972) provides a way to model dynamic systems. As a discrete
formalism, it models state changes as discrete instantaneous events. For any
period of time where there is no event, the state remains unchanged.
Systems are modeled in DEVS as having input and output interfaces. These
interfaces represent the way the system interacts with other systems. Input is the
interface from which the system receives external stimuli while output provides a
way of observing and receiving stimuli from the system. Therefore, systems are
modular since their inputs and outputs are the only way of interacting with them.
Figure 2.3 illustrates this formalism.
Figure 2.3 – Basic Discrete Event System Specification
A basic DEVS (also called atomic DEVS) is a structure

M = ⟨X, S, Y, δint, δext, λ, ta⟩

where

X is the set of input values
S is the set of states
Y is the set of output values
δint: S → S is the internal transition function
δext: Q × X → S is the external transition function, where
Q = {(s, e) | s ∈ S, 0 ≤ e ≤ ta(s)} is the total state set and
e is the time elapsed since the last transition
λ: S → Y is the output function
ta: S → [0, ∞] is the time advance function
In order to describe the interpretation of these elements, we shall assume the
system has just entered some state s. If no input is received, the system will
remain in s for time ta(s). Once this time expires, the system outputs the value
λ(s) and switches to state δint(s). Note that, besides positive reals, ta(s) can also
assume the values 0 and ∞. If ta(s) = 0, the system will immediately go to the next
state without allowing any possible input to intervene. In this case, s is said to be a
transitory state. If ta(s) = ∞, the system will remain in s indefinitely until some
input causes another state transition. In this case, s is said to be a passive state. If
an input x ∈ X is received before the expiration time, the system switches to state
δext(s, e, x), where (s, e) with e ≤ ta(s) is the total state at the time the input was
received.
In short, the internal transition function defines the next state when no
inputs are received, the external transition function defines the next state in case
of an external input and the output function defines the system’s output whenever
the internal transition function is invoked.
The following example helps illustrate how DEVS works. Consider a
controller system for a safe door that opens it only if the correct password,
12345, is entered. If the user does not type anything for more than treset,
the system is reset, the user is notified and he must start over. If the
user types the wrong password, he must wait for a system reset to start over.
The model for this system is defined as

M = ⟨X, Y, S, δint, δext, λ, ta⟩

where

X = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Y = {“reset”, “open”}
S = {Wrong, Open, Reset, S1, S2, S3, S4, S5}
δint(Sn) = Reset
δint(Wrong) = Reset
δint(Open) = Reset
δint(Reset) = S1
δext(S5, e, 5) = Open
δext(Sn, e, n) = Sn+1, for n ≠ 5
δext(Sn, e, x) = Wrong, for x ≠ n
δext(Wrong, e, x) = Wrong
λ(Open) = “open”
λ(Reset) = “reset”
ta(S1) = ∞
ta(Sn) = treset, for n ≠ 1
ta(Wrong) = treset
ta(Open) = 0
ta(Reset) = 0
X says that the system accepts any digit as input. If the user does not type
anything for a long enough period, the system will eventually reach the state S1.
Each password digit typed correctly takes the system from Sn to Sn+1, until S5,
from which it finally reaches the state “Open”. Any digit typed incorrectly takes
the system to the state “Wrong”, from which it only changes to “Reset” when it
stops receiving digits for time treset.
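As an illustration, the safe-door controller can be executed with a small ad hoc simulator. This is a simplified sketch of discrete event execution, not the full DEVS abstract simulator; treset is given an arbitrary value of 30 seconds.

```python
INF = float("inf")
T_RESET = 30.0                       # illustrative value for treset (seconds)

def ta(s):
    if s == "S1":
        return INF                   # passive state: wait forever for input
    if s in ("Open", "Reset"):
        return 0.0                   # transitory states
    return T_RESET                   # S2..S5 and Wrong time out after treset

def delta_int(s):
    return "S1" if s == "Reset" else "Reset"

def delta_ext(s, e, x):
    if s == "Wrong":
        return "Wrong"               # wrong entries are ignored until reset
    n = int(s[1])                    # here s is one of S1..S5
    if x == n:
        return "Open" if n == 5 else "S%d" % (n + 1)
    return "Wrong"

def out(s):
    return {"Open": "open", "Reset": "reset"}.get(s)

def simulate(events):
    """events: time-ordered (time, digit) pairs.
    Returns the (time, output) pairs produced by internal transitions."""
    s, t_last, outputs = "S1", 0.0, []
    for (t, x) in events:
        while t_last + ta(s) <= t:          # fire internal events due first
            t_last += ta(s)
            if out(s) is not None:
                outputs.append((t_last, out(s)))
            s = delta_int(s)
        s = delta_ext(s, t - t_last, x)
        t_last = t
    while ta(s) < INF:                      # drain remaining internal events
        t_last += ta(s)
        if out(s) is not None:
            outputs.append((t_last, out(s)))
        s = delta_int(s)
    return outputs

print(simulate([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]))
# → [(4.0, 'open'), (4.0, 'reset')]
```

A wrong digit, e.g. `simulate([(0, 1), (1, 9)])`, produces only a "reset" output once treset has elapsed, matching the behavior described above.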
It may not feel intuitive for a process to have single inputs and outputs. In
the case of the safe, the “reset” output is directed to the user as feedback while the
“open” output may be directed to a door controller. In order to make modeling
more intuitive, the DEVS with ports formalism was created as a simple extension
to basic DEVS. It is illustrated in Figure 2.4.
The DEVS with ports is defined by the same structure

M = ⟨X, Y, S, δint, δext, λ, ta⟩

where

X = {(p, v) | p ∈ InPorts, v ∈ Xp} is the set of input ports and values
Y = {(p, v) | p ∈ OutPorts, v ∈ Yp} is the set of output ports and values

and all other elements are defined just as in basic DEVS.
Figure 2.4 – DEVS with ports
The DEVS with ports formalism allows the composition of models into
higher level models as illustrated in Figure 2.5. This composition is achieved most
simply by the coupling of input and output ports of different models. The DEVS
coupled model formalizes the composition of different models. This abstraction
capability makes it easier to build complex models part by part.
Figure 2.5 – DEVS coupled models
A DEVS coupled model is defined by the structure

N = ⟨X, Y, D, {Md | d ∈ D}, EIC, EOC, IC, Select⟩

where

X = {(p, v) | p ∈ IPorts, v ∈ Xp} is the set of input ports and values
Y = {(p, v) | p ∈ OPorts, v ∈ Yp} is the set of output ports and values
D is the set of component names
Md = ⟨Xd, Yd, S, δint, δext, λ, ta⟩ is a DEVS with
Xd = {(p, v) | p ∈ IPortsd, v ∈ Xp}
Yd = {(p, v) | p ∈ OPortsd, v ∈ Yp}
EIC ⊆ {((N, ipN), (d, ipd)) | ipN ∈ IPorts, d ∈ D, ipd ∈ IPortsd}
EOC ⊆ {((d, opd), (N, opN)) | opN ∈ OPorts, d ∈ D, opd ∈ OPortsd}
IC ⊆ {((a, opa), (b, ipb)) | a, b ∈ D with a ≠ b,
opa ∈ OPortsa, ipb ∈ IPortsb}
Select: 2^D − {∅} → D is the tie-breaking function
Each component Md is a DEVS model itself. A component may be another
coupled model, which allows the construction of hierarchical models. EIC defines
the external input coupling, connecting external inputs to component inputs.
Similarly, EOC defines the external output coupling, connecting component
outputs to external outputs. The internal coupling IC connects component outputs
to component inputs. Note that no output of a component may be connected to an
input of the same component, i.e., no direct feedback loops are allowed in DEVS.
Finally, the tie-breaking function defines the order in which to carry out
computations when multiple components receive inputs at the same time.
2.2.2 Cellular Automata
Considering all the different formalisms for modeling spatial dynamic systems,
cellular automata (CA) (von Neumann 1966) are among the most popular.
Despite their simplicity, they are capable of reproducing the complex behavior of
systems in several fields, such as land use and cover change (Carneiro 2006),
urban growth (Batty 2005) and many other human-driven and natural phenomena.
A cellular automaton works in a world representation where both time and
space are discretized. Time is represented by the sequence of time values t0, t1, …
while space is partitioned into cells. Usually, time values represent a sequence of
equally spaced instants in time and cells are subdivisions of space defined by a
regular grid. Each cell has a well-defined state for each time value. The set of cells
that influence state changes of a particular cell is called the neighborhood of that
cell. A CA is defined as
CA = ⟨C, S, N, T⟩

where

C is the set of cells
S is the set of possible cell states
N: C → C^|N|, where |N| is the neighborhood size and c ∉ N(c) for each c ∈ C,
is the neighborhood function that defines the neighborhood of each cell
T: S × S^|N| → S is the transition function that, given the state of a cell c and
the states of all its neighbors, defines the next state for c
Figure 2.6 illustrates a simple CA over a two-dimensional grid cell space. The
neighborhood of each cell is the well-known Moore neighborhood, which is
composed of the eight closest cells. Cells may assume only one of two states, 0 or
1. At each time step, the transition function states that each cell must assume, at
the next time step, the state of the majority of its neighbors, keeping its current
state in the case of a tie.
Figure 2.6 – A simple CA
The simplest procedure for simulating a CA is to scan the entire cell space at
each time step applying the transition function for each cell. For this algorithm to
yield the correct result, it must keep a second copy of the data structure that stores
cell states. For each cell, the algorithm should read the necessary cell states from
the first structure and store its next state in the second structure. This is necessary
so that the cells that have not been scanned yet are not affected. Once the scan is
complete, the second structure will hold the next global state.
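The two-buffer scan just described can be sketched as follows, using the majority-vote rule of Figure 2.6. The grid size, the toroidal (wrap-around) boundary and the initial pattern are illustrative choices.

```python
def step(grid):
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]      # second buffer for the next state
    for i in range(n):
        for j in range(n):
            ones = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            if ones > 4:
                nxt[i][j] = 1              # majority of the 8 neighbors is 1
            elif ones < 4:
                nxt[i][j] = 0              # majority is 0
            else:
                nxt[i][j] = grid[i][j]     # tie: keep the current state
    return nxt

grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
for row in step(grid):
    print(row)
```

One step erodes the corners of the 3×3 block into a plus shape: reading from `grid` and writing to `nxt` guarantees that all cells see the same global state, as the text requires.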
This simple algorithm has two main drawbacks. First, it cannot handle
infinite cell spaces. Second, it may be inefficient because it keeps scanning and
making calculations for cells that are in the same situation as in the previous time
step.
Another method for CA computation is described by Zeigler et al. (2000) as
the discrete event approach to CA simulation. The idea of this method is to
concentrate on events. In the context of a CA, an event occurs when a cell changes
its state. The algorithm then works as follows: at each time step it keeps track of
the set of cells that actually changed state. Then, it collects the set of all neighbors
of those cells. Finally, the union of these two sets defines the cells that are going
to be scanned at the next time step. All other cells will be left unchanged. This
procedure assures that, if neither a cell nor any of its neighbors have changed state
at a given time step, that cell will not be scanned at the next time step.
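A sketch of this discrete event approach, reusing the same majority rule on a small toroidal grid (assumptions as before): only cells that changed state, together with their neighbors, are re-examined at the next step, and `active` is the scan set.

```python
def neighbors(cell, n):
    i, j = cell
    return {((i + di) % n, (j + dj) % n)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)}

def event_step(grid, active):
    n = len(grid)
    updates = {}
    for (i, j) in active:
        ones = sum(grid[x][y] for (x, y) in neighbors((i, j), n))
        new = 1 if ones > 4 else 0 if ones < 4 else grid[i][j]
        if new != grid[i][j]:
            updates[(i, j)] = new          # defer writes: old states stay readable
    for ((i, j), v) in updates.items():
        grid[i][j] = v
    nxt = set(updates)                     # the cells that changed...
    for cell in updates:
        nxt |= neighbors(cell, n)          # ...plus all of their neighbors
    return nxt

grid = [[0] * 5 for _ in range(5)]
for i in (1, 2, 3):
    for j in (1, 2, 3):
        grid[i][j] = 1                     # the same 3x3 block as before

active = {(i, j) for i in range(5) for j in range(5)}   # one full initial scan
steps = 0
while active:                              # stops when nothing changes any more
    active = event_step(grid, active)
    steps += 1
print(steps, sum(map(sum, grid)))          # → 4 0
```

After the initial full scan, each step only visits the shrinking set of active cells, which is where the efficiency gain over the naive scan comes from.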
Cell Space Models
Cell space models are a more general class of dynamic models that
comprises cellular automata, in which the definitions of local neighborhood and
transition rules are relaxed (Batty 2005). The main difference from strict CA
models is that cell space models allow action at a distance, i.e., causal
relationships between distant cells. Usually, physical phenomena are more easily
mapped to the strict CA form, while anthropic phenomena often require some
sort of action at a distance.
Cell space models are not formalized in this section because of the lack of
consensus on exactly how and to what extent the CA properties should be relaxed.
Section 4.2 proposes a formalism for this class of models.
2.3 Multi-Agent Systems
Multi-Agent Systems (MAS) are not targeted at any specific kind of
application. Instead, the term stands for any system based on the agent modeling
paradigm. In fact, this may be the reason for the lack of consensus among
researchers about what the basic concepts for modeling agents are. Usually,
toolkits for building MAS are targeted at one of the following
types of application (Theodoropoulos et al. 2009): (1) MAS for studying complex
systems, such as social models, insect colonies, artificial life and logistics; (2)
MAS for distributed intelligence; (3) development of software MAS, i.e., software
systems that distribute their functionalities among a set of agents, such as
semantic Web agents, cognitive agents in expert systems and agents for network
meta-management (e.g., load balancing or service discovery).
These different kinds of applications have different requirements and,
therefore, affect the functionality offered by the toolkits for building MAS. Since
this thesis focuses on the simulation of realistic situations, the first type of
application is the most adequate, because it provides an environment that is most
recognizable as a simulation engine.
Even after filtering the available MAS toolkits by the type of application,
they are still too many to allow a complete study. Instead, two of them were
chosen based on their fitness for the requirements for training games enumerated
in Section 1.3 and also on their popularity among researchers. They are described
in the following sections.
2.3.1 Jason
Jason (Bordini and Hübner 2009) is a platform for multi-agent simulation. It
is a good representative of the approaches based on the BDI agent architecture
(Rao and Georgeff 1992), which is one of the most popular architectures for
modeling the cognitive behavior of agents. BDI stands for “Belief-Desire-
Intention”. Beliefs are facts that an agent thinks are true, and together they
constitute its world view. The belief set is dynamically updated as the agent
interacts with its environment and other agents. Desires are the goals of the agent.
Both beliefs and desires constitute the input of the reasoning process. Intentions
are the result of that reasoning process and they determine the next actions of the
agent.
There are two aspects of Jason that make it an interesting case for this
work. First, it allows an agent to keep a library of plans, which are possible
courses of action to achieve a particular goal. Second, it works with the notion of
events, which are triggered at every change of a belief or a goal. Plans are always
started by events.
Agent reasoning is done in reasoning cycles, as depicted in Figure 2.7. Each
cycle starts with the agent updating its belief base by sensing the environment. That
sensing may trigger one or more events which, in turn, may trigger one or more
plans, producing intentions. Then, the set of intentions compete to be the one
further executed in that reasoning cycle. The details of how an agent chooses
which intentions will become actions are beyond the scope of this work; it
suffices to mention that there is a specific procedure for that.
Figure 2.7 – Working Model of Jason Agents. Source: (Bordini and Hübner 2009), p. 457.
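The cycle just described can be caricatured in a few lines of code. This toy sketch captures only the belief-event-plan-intention flow; all names are illustrative and it does not reproduce the actual Jason/AgentSpeak semantics (plan contexts, annotations and intention selection functions, among other things, are omitted).

```python
class Agent:
    def __init__(self, plans):
        self.beliefs = set()
        self.plans = plans            # triggering event -> list of actions
        self.intentions = []          # partially executed plan bodies
        self.log = []                 # actions actually performed

    def perceive(self, percepts):
        events = []
        for p in percepts:
            if p not in self.beliefs:         # a belief addition raises "+p"
                self.beliefs.add(p)
                events.append("+" + p)
        return events

    def cycle(self, percepts):
        for event in self.perceive(percepts):
            if event in self.plans:           # plans are triggered by events
                self.intentions.append(list(self.plans[event]))
        if self.intentions:                   # run one action of one intention
            intention = self.intentions[0]
            self.log.append(intention.pop(0))
            if not intention:
                self.intentions.pop(0)

plans = {"+fire_alarm": ["call_brigade", "evacuate"]}
agent = Agent(plans)
agent.cycle(["fire_alarm"])   # belief change -> event -> plan -> intention
agent.cycle([])               # the intention continues in the next cycle
print(agent.log)              # → ['call_brigade', 'evacuate']
```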
The reasoning process is defined in a Prolog-like declarative language which
is an extension of the AgentSpeak language (Rao 1996).
In Jason, as in most MAS, there is the notion of an environment through
which agents interact. The environment is responsible for: (1) keeping its
current state; (2) simulating how the actions of the agents alter that state; (3)
providing the agents with a symbolic representation of that state when they sense
it. Differently from agent reasoning, environments are specified in Java code
using the Jason environment API. This API is flexible enough to allow the
implementation of different types of environments. For example, the
synchronization of the execution of agent actions is totally flexible.
While some simulations allow an agent to execute multiple actions in parallel,
others may require the notion of a simulation step, in which one action is executed
at each step.
Jason also provides some flexibility in its execution mode, which can be
either asynchronous or synchronous. In the asynchronous mode, each agent
executes its next reasoning cycle as soon as the previous cycle has finished. In the
synchronous mode, each agent performs exactly one reasoning cycle at every
global simulation step.
Comparing the basic aspects of the Jason formalism with DEVS, a few
remarks can be drawn:
- In Jason, there are two distinct kinds of elements: environment and
agents. In DEVS, there is only the notion of systems.
- In Jason, time is not explicitly modeled. For example, one cannot
specify how long an action takes to be executed.
- Jason provides more specific constructs for implementing complex
cognitive agents. DEVS provides a lower level language.
- Jason provides multiple ways to execute a simulation model. In
DEVS, the result of a simulation follows entirely from the
simulation model.
2.3.2 SeSam
SeSam (Shell for Simulated Agent Systems) (Klügl and Puppe 1998) is a
general-purpose multi-agent simulation platform. One of its main goals is to
provide a modeling and simulation tool that is easy to use, without requiring deep
programming knowledge.
Simulation modeling is done with three types of objects: world, agents and
resources, as illustrated in Figure 2.8. All objects have an internal state defined by
a set of variables. In each simulation there may be only one world. The notion of
world is similar to the notion of environment in most MAS. Agents are active
entities which interact with the world by sensing it and acting on it. Finally,
resources are static and passive objects that are accessed and manipulated by the
agents. Note that the world inherits the behavior from the agent type. This means
that the world is active and can change with time, even if there is no agent acting
on it.
Figure 2.8 – Object Types in SeSam. Source: (Klügl 2009), p. 485.
Behaviors are defined by a set of graphs in which nodes represent activities
and edges represent transition rules, as depicted in Figure 2.9. It is interesting that
the behavior of an agent can be composed of multiple activity graphs. This
provides a means of composing behaviors of agents from smaller behavior
definitions. These activity graphs are called reasoning engines in SeSam
nomenclature.
All reasoning engines of a simulation object are executed in parallel. Each
reasoning engine may have only one activity being executed at a time. The
transition rules define the conditions under which the reasoning engine terminates an
activity and starts the next one. Each transition rule defines a Boolean expression
that is evaluated at every simulation step. When a transition rule connecting an
executing activity to another one becomes true, the executing activity is
terminated and the other is started. Note that, when executing an activity, multiple
transition rules may be evaluated as true concurrently. In this case, some sort of
tiebreak rule must be applied.
Figure 2.9 – Behaviors in SeSam
Since activity graphs can become quite large, SeSam provides means for
hierarchical behavior composition, where it is possible to define composite
activity nodes, which themselves contain another activity graph.
An activity encapsulates three sequences of actions: start actions that are
performed when the activity is selected anew, standard actions that are performed
once every time step as long as the agent is executing that activity and termination
actions that are executed for cleaning up when the activity is finished. An action is
basically a nested set of primitive calls. The transition rules are also defined by
Boolean expressions built of primitive calls.
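A reasoning engine of this kind can be sketched as a small state machine in which activities carry start, standard and termination actions, and transition rules are evaluated once per step, with declaration order serving as the tiebreak. The foraging scenario and all names are illustrative, and primitive calls are stubbed as plain Python functions.

```python
class Activity:
    def __init__(self, name, start=None, standard=None, end=None):
        noop = lambda agent: None
        self.name = name
        self.start = start or noop        # run when the activity is entered
        self.standard = standard or noop  # run once per step while active
        self.end = end or noop            # run when the activity terminates

class ReasoningEngine:
    def __init__(self, activities, rules, initial):
        self.activities = {a.name: a for a in activities}
        self.rules = rules                # list of (from, condition, to)
        self.current = self.activities[initial]

    def step(self, agent):
        for (src, condition, dst) in self.rules:  # declaration order = tiebreak
            if src == self.current.name and condition(agent):
                self.current.end(agent)
                self.current = self.activities[dst]
                self.current.start(agent)
                break
        self.current.standard(agent)

class Forager:                            # toy agent state ("variables")
    def __init__(self):
        self.energy, self.trace = 5, []

def act(agent, label, delta):             # stand-in for action primitives
    agent.trace.append(label)
    agent.energy += delta

engine = ReasoningEngine(
    activities=[Activity("search", standard=lambda a: act(a, "search", -1)),
                Activity("rest", standard=lambda a: act(a, "rest", +2))],
    rules=[("search", lambda a: a.energy <= 0, "rest"),
           ("rest", lambda a: a.energy >= 4, "search")],
    initial="search",
)
forager = Forager()
for _ in range(8):
    engine.step(forager)
print(forager.trace)
```

Running eight steps makes the agent search until its energy is exhausted, rest until it recovers, and then resume searching, illustrating how transition rules move control between activities.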
Primitive calls are the basic building blocks of the dynamic models of
SeSam. They connect the model to the underlying programming language, which
is Java in this case. Each primitive is implemented as a Java class with a method
named execute. They also define the input and output argument types. The
primitive categories are: (1) action primitives, which are used to manipulate the
agent’s internal state or environment; (2) sensor primitives, which collect
information from the agent’s environment; (3) computational primitives, which
provide computations of varying complexity; (4) user primitives, which
consist of macros that combine calls to other primitives.
Although seemingly complex, this behavior structure allows the separation
of dynamic modeling into two levels: a lower level, which requires Java
programming skills, and a higher level, in which the lower level primitives are
used as building blocks and no programming skills are necessary. This is an
attempt by SeSam to make simulation more accessible to a broader class of
researchers and businesses.
SeSam provides a third higher modeling level in which the user defines a
full simulation experiment. Once all the definitions for agents, world and
resources are complete, the user defines a situation. A situation is basically a set
of instance descriptions defining all instances of agents, resources and the world
that are going to compose the simulation. Additionally, the user may define other
properties of his experiment, such as the values that are going to be observed
during the simulation execution. Although interesting, the details of this level of
modeling are out of the scope of this work and will not be detailed further.
Comparing the basic aspects of SeSam with the DEVS formalism, a few
remarks can be drawn:
- In SeSam, as in DEVS, all simulation elements are specializations of
the same abstract object type.
- In SeSam, time is not explicitly modeled. Every action is scheduled
by a global time step mechanism. For example, one cannot directly
specify how long an action takes to be executed; it is necessary to
implement a loop that counts time steps.
- SeSam does not restrict the behavior of agents to any particular
format.
- SeSam provides means for the composition of object states and
behaviors.
- Behaviors are defined intuitively in the form of graphs, which are
similar to workflows, but without parallel activity execution.
2.4 Planning
Artificial Intelligence (AI) planning, or simply planning, is also an area of
high interest to training games, and a long-established Computer Science area.
Plans traditionally refer to plans of action, which are usually represented in the
form of workflows or, more generally, flow charts. A flow chart is basically a
graph notation for representing procedures. It uses boxes to represent events that
change some data and diamonds to represent decisions, which may change the
direction of the process (Sowa 2000), as depicted in Figure 2.10. A workflow is a
specific case of a flow chart in which the events are actions executed by some
participant. Therefore, workflows are basically structured sets of actions (van der
Aalst et al. 2003).
Figure 2.10 – A Flow Chart
Workflows can be very interesting to training games because business
processes are also usually modeled as workflows and the integration with business
process management (BPM) systems is likely to bring many benefits.
Most of the pioneering work in the planning area, such as STRIPS (Fikes and
Nilsson 1971) and NOAH (Sacerdoti 1977), was devoted to automatic planning in
deterministic domains. These planners take as input the current state of the world,
a set of possible actions with their corresponding pre- and post-conditions and a
goal proposition. Their objective is to output a course of action that takes the
world from its initial state to another state where the goal proposition is true.
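A minimal STRIPS-style planner in this spirit can be sketched as a breadth-first search over states represented as sets of atoms, with actions given by preconditions, add lists and delete lists. The two blocks-world operators are illustrative and heavily simplified.

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search from the initial state to any state containing goal."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal atoms hold
            return steps
        for (name, pre, add, delete) in actions:
            if pre <= state:                   # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # goal unreachable

# Tiny domain: unstack block A from B, then put A on the table.
actions = [
    ("unstack(A,B)", {"on(A,B)", "clear(A)", "handempty"},
                     {"holding(A)", "clear(B)"},
                     {"on(A,B)", "clear(A)", "handempty"}),
    ("putdown(A)",   {"holding(A)"},
                     {"ontable(A)", "clear(A)", "handempty"},
                     {"holding(A)"}),
]
initial = {"on(A,B)", "clear(A)", "ontable(B)", "handempty"}
goal = {"ontable(A)", "ontable(B)"}
print(plan(initial, goal, actions))   # → ['unstack(A,B)', 'putdown(A)']
```

The search enumerates reachable world states exhaustively, which illustrates why early planners relied so heavily on the deterministic, closed-world assumptions discussed next.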
These early planners assumed that the initial world state was entirely
known, that the world state would not be altered by any other factor during the
execution of the plan and that the effects of actions were always deterministic.
Those are rather restrictive assumptions. Later work tried to relax some of them
and to plan under uncertainty (Blythe 1999).
Even though most of the work in planning is devoted to automatic planning
algorithms, there is much more to planning than that. Training games could also
benefit from much of the work that has been done in plan evaluation, plan
recognition (Kautz 1991), hierarchical planning (Erol 1995; Giunchiglia et al.
1997), searchable plan repositories and many other interesting planning problems.
2.5 Summary
This chapter briefly overviewed a few techniques and systems, selected
from the areas of gaming, modeling and simulation, multi-agent systems and
planning. The techniques and systems include game loops, scene graphs, DEVS,
cellular automata, Jason, SeSam and workflows, all of which will be referred to
throughout the rest of the thesis. The DEVS simulation formalism serves as the
basis for the Process-DEVS formalism, described in Chapter 3. In the same
chapter, the properties of the Jason and SeSam platforms are also referred to in the
discussion preceding the formal definition of Process-DEVS. Chapter 4 defines
how workflows, cellular automata and multi-agent systems can be modeled on top
of Process-DEVS. Finally, scene graphs and game loops are used in the InfoPAE
implementation case described in Chapter 5.
3 A Framework for Dynamic Modeling in Training Games
3.1 Introduction
This chapter first discusses the desirable characteristics of a framework for
modeling the dynamics of a training game, considering the requirements
enumerated in Section 1.3. Very briefly, the framework must allow:
1. The integration of different dynamic models, expressed in a variety
of formalisms, avoiding the creation of dependency relations among
them as much as possible. This is important to achieve modularity,
allow flexible scenario composition, and facilitate reuse of
simulation models.
2. The inclusion of dynamic models into a game architecture with
minimum performance impact.
3. The communication with external asynchronous systems during
game play. This communication may affect the outcome of the game
simulation.
The results of the discussion are organized in the form of decisions, listed in
Section 3.2. Then, this chapter introduces a novel dynamic modeling formalism,
called Process-DEVS, which is described in detail in Section 3.3.
3.2 A Discussion on the Framework Requirements
This section provides a more detailed discussion on the requirements, which
helps justify the framework design decisions.
The discussion is carried out at a considerably high level of abstraction.
Some decisions are based on subjective arguments, and they are not
intended to suit all possible training games. However, they are meant to produce a
highly general and extensible architecture that will suffice for most cases. This
discussion also aims at helping detect whether the proposed framework is actually
the best option for implementing a particular training game.
3.2.1 On the Nature of Time
The requirement for realism in the context of training games raises the
central question of how to model dynamic systems so that they can be simulated
during game play. With respect to how they model state change in time, dynamic
models can be categorized, at the highest abstraction level, as discrete or
continuous. In discrete models, changes are modeled by state transition functions,
which, at a given point in time, are invoked to determine the next state of the
system, taking the previous one as input. An example of a discrete model is a
bank account which, upon receiving a deposit, has its balance immediately
updated. In continuous models, the state of the system changes continuously in
time. At each time instant, the model defines a change rate for each numeric
variable that composes the state of the system. An example of a continuous model
is the level of water in a tank, which changes continuously as a function of its
incoming and outgoing water flows.
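The two examples above can be sketched in code. This is a minimal illustration, not part of the framework; the Euler integration scheme and the parameter values are assumptions:

```python
# Hypothetical minimal sketch contrasting the two model classes.

def deposit(balance, amount):
    """Discrete model: the state jumps instantaneously at the event."""
    return balance + amount

def tank_level(level0, inflow, outflow, t, dt=0.01):
    """Continuous model: the level changes at a rate (inflow - outflow),
    approximated here by simple Euler integration."""
    level = level0
    steps = int(t / dt)
    for _ in range(steps):
        level += (inflow - outflow) * dt
    return level

print(deposit(100.0, 50.0))                         # 150.0, updated immediately
print(round(tank_level(10.0, 2.0, 1.0, t=5.0), 2))  # ~15.0 after 5 time units
```

Note that the continuous model already had to be discretized (by `dt`) just to be executed, which anticipates the discussion below.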
Discrete models can be further categorized into discrete event models and
discrete time models. The difference is that discrete event models operate in a
continuous time base while in discrete time models, time may only assume values
from a discrete set. In discrete event models, every state change is called an event,
which always happens at one particular time instant. The bank account example
fits in this category. Discrete time models follow a stepwise mode of execution,
where the state transition functions are invoked at each time step. Cellular
automata are an example of this kind of model.
Since the state changes in discrete time models are modeled by state
transition functions that happen at specific points in time, discrete time models
can be seen as a specialization of the more general discrete event model class.
Figure 3.1 illustrates the major classes of models.
Figure 3.1 – Continuous, Discrete Event, Discrete Time and Quantized Process Models
Continuous models are usually described as systems of differential
equations. Although these differential equation systems (DES) represent
continuous processes, they may be simplified into discrete models either by
discretization of time or discretization of the variable domains. While the
former leads to a discrete time model, the latter leads to a discrete event
model, called a quantized model (Zeigler et al. 2000), which is also illustrated
in Figure 3.1.
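The idea of value quantization can be sketched as follows. This is a simplified, illustrative sketch under the assumption of a constant, nonzero rate of change; instead of sampling x(t) at fixed time steps, an event is emitted each time x crosses a multiple of the quantum D:

```python
def quantize(x0, rate, quantum, t_end):
    """Yield (time, value) events for x(t) = x0 + rate * t, one event per
    quantum crossing (assumes rate != 0)."""
    events = []
    t, x = 0.0, x0
    while True:
        # time until x reaches the next quantum boundary
        dt = quantum / abs(rate)
        t += dt
        if t > t_end:
            break
        x += quantum if rate > 0 else -quantum
        events.append((t, x))
    return events

# x(t) = 2t with quantum 1.0, until t = 2.5: crossings at t = 0.5, 1.0, 1.5, 2.0, 2.5
print(quantize(0.0, 2.0, 1.0, 2.5))
```

The continuous trajectory is thus turned into a finite stream of discrete events, which a discrete event simulator can process directly.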
Continuous models provide potentially unbounded precision with respect to
time. Ideally, DES simulators should be able to solve their models analytically.
However, the great majority of the available simulators use numerical methods
because of performance and scalability issues. Instead of solving models, these
simulators employ numerical methods to run their models, generating an artificial
history of the system and collecting observations to be analyzed (Banks et al.
2005). All that suggests that, even if a continuous formalism is used for describing
the dynamic models, its underlying simulation machine should be of a discrete
nature.
Although some traditional DES formalisms such as System Dynamics
(Forrester 1972) and Bond Graphs (Paynter 1961) have been widely used in
areas such as physics, business, economics, and social modeling, several problems
remain with these continuous approaches (Michel et al. 2009): (1) Only a global
perspective is possible, which hurts modularity; (2) It is hardly possible to
consider micro-level interactions, as in multi-agent systems; (3) It is not possible
to model individual actions; (4) Integrating non-quantitative aspects is hard.
Even though continuous and discrete models are distinct in their nature, it is
not always necessary to make an exclusive choice between them. The creation of
hybrid models (Cellier 1986; Praehofer 1991; Deshpande et al. 1997; Lee and
Zheng 2005) made it possible to simulate both kinds simultaneously. However, as
Zeigler et al. (2000) point out, this incurs a performance loss. What is
commonly seen in practice is the use of discrete formalisms to model continuous
systems (Banks et al. 2005), easing the modeling and simulation tasks at the cost
of some precision.
Considering the two main classes of discrete models, namely discrete time
models and discrete event models, the discrete time class is clearly more specific
and restricted. On the other hand, it is more intuitive and easier to model with
(Zeigler et al. 2000). However, there are two problems with the discrete time
approach: (1) The granularity of time is fixed, which makes it difficult to integrate
processes modeled with different time granularities (Banks et al. 2005); (2) In
some cases where the state of most simulation elements is changed sparsely in
time, the performance of a discrete time model can be rather poor compared to
a corresponding discrete event model. Zeigler et al. (2000) illustrate this
problem well in the domain of cellular automata.
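This performance contrast can be illustrated with a toy sketch (the numbers are hypothetical): an element that changes state only at t = 0 and t = 1000 forces a discrete time simulator with step 1 to invoke its transition function at every step, while a discrete event simulator jumps directly between the two events:

```python
import heapq

def discrete_time(step, t_end):
    """Stepwise execution: the transition function runs at every time step."""
    invocations = 0
    t = 0
    while t <= t_end:
        invocations += 1
        t += step
    return invocations

def discrete_event(event_times):
    """Event-driven execution: only scheduled events are processed."""
    queue = list(event_times)
    heapq.heapify(queue)
    invocations = 0
    while queue:
        heapq.heappop(queue)  # jump directly to the next event time
        invocations += 1
    return invocations

print(discrete_time(1, 1000))      # 1001 invocations
print(discrete_event([0, 1000]))   # 2 invocations
```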
Given this process modeling background, it is now possible to justify the
first decision behind the framework for dynamic modeling in training games:
Decision 1: The class of discrete event models will provide the basis on
which to build all dynamic elements of the game.
This decision is grounded on the following arguments:
Pure continuous or hybrid models were found to be more
performance-costly.
There are approaches to build platforms on which it is possible to
integrate multiple models defined in any discrete sub-formalism in a
scalable and parallelizable way (Praehofer et al. 1993; Vangheluwe
2000). This is much more difficult to be accomplished with
continuous or hybrid models.
Differential equation models can still be used in pure discrete
simulation through discretization. The infinite precision of
continuous systems may not be so important, since the error can be
controlled by refining the granularity of value discretization.
During the simulation of discrete models, it is easy to make the state
always ready to be rendered for the players. In continuous or hybrid
models, in order to render the state at time t, it is necessary to solve
the equations for t, making rendering less immediate. One option
would be to determine t in advance, but it was shown in Section
2.1.2 that this is not possible for uncoupled game loops. Therefore,
rendering performance will be hurt in that case.
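The rendering argument can be sketched as follows (a hypothetical minimal example; the class and state names are illustrative): after each discrete event the state is complete and can be drawn at any wall-clock instant, with no equations to solve at render time.

```python
class DiscreteElement:
    def __init__(self):
        self.state = "idle"

    def handle_event(self, event):
        # after every event the state is fully defined
        self.state = event

    def render(self):
        # no equation solving: the last committed state is always drawable
        return f"drawing element in state '{self.state}'"

e = DiscreteElement()
e.handle_event("moving")
# the render loop may run at any rate, uncoupled from the simulation:
for _ in range(3):
    print(e.render())
```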
3.2.2 On the Nature of Simulation Elements
The types of elements required for simulation in a training game may vary
in some aspects. The complexity of their behavior may range from a simple
inanimate physical object to sophisticated artificial intelligence (AI) algorithms
capable of simulating human reasoning in some context. Another aspect is the role
of each game element. Simulation elements may play distinct roles such as parts
of the game environment, proactive actors or natural phenomena.
We focus the discussion on two main classes of simulation approaches
based on discrete-event time representation: object-oriented simulation (OOS) and
agent-oriented simulation (AOS). Uhrmacher and Swartout (2003) provide a good
introduction to the main concepts of these approaches.
In OOS, a simulation is typically defined by a network of objects, which
have hidden internal states and interact with each other by sending and receiving
messages. There is no consensus on the term object in the simulation field.
Some specific frameworks give their simulation components different names,
such as models
(Eker et al. 2003) or systems (Zeigler et al. 2000).
Although the notion of object in OOS is distinct from that in object-oriented
programming languages, they share some common principles. In object-oriented
programming languages, objects are software entities with an internal state and a
set of operations, through which they can interact with each other. The idea of
object orientation is to increase modularity relative to plain procedural
programming. Ideally software pieces with similar concerns should be brought
together and organized in a single software entity. OOS follows the same principle
by modeling simulation elements as objects with a definite boundary and a hidden
internal state. Typically, OOS approaches also provide the notions of classes and
inheritance, which are important properties to achieve code reuse.
Almost all of the main OOS formalisms support composition. Objects may
be composed of other objects that are kept internal to them. This is the ground for
multi-level modeling in object-oriented models. The DEVS formalism, as
described in Section 2.2.1, is a good representative of these object-oriented
simulation concepts.
In the other main group of simulation approaches, agents are defined as
autonomous entities which also have a hidden internal state. They are usually
embedded in a multi-agent system which provides an environment that they can
observe through sensors and change through effectors. They also communicate
with each other by exchanging messages. These basic characteristics are present
in most agent-oriented formalisms. More specialized characteristics of agents are
not entirely consistent among researchers. However, a considerable number of
them analyze the behavior of agents in mental terms such as beliefs, goals and
desires. Therefore, the notion of an agent intuitively communicates the idea of
something more complex than an object.
With respect to how they act, agents are classified as deliberative, purely
reactive or both. Purely reactive agents base their present actions only on stimuli
received in the recent past. Agent deliberation is the act of predicting the future
with the objective of planning actions. Acting according to plans based on an
internal model of the world is what distinguishes deliberative from reactive
agents. Since objects typically reproduce reactive rather than deliberative
behavior by means of their internal states and transitions, the notion of
deliberative agents adds more to the discussion on simulation elements for
games. Therefore, in the context of this discussion, the term agent
will denote an entity characterized by cognitive properties such as intentions,
beliefs, desires and plans that are responsible for its goal-oriented rational
behavior.
While OOS is aimed at modularity and reuse, AOS intends to improve
interoperability by focusing on the interaction between agents and with a dynamic
environment. It was not by chance that objects have grown as a standard way to
model knowledge about dynamic systems and agents are usually used for the
investigation of distributed AI phenomena such as cooperation and emergent
behavior. Both objectives are useful for training games. Therefore, a framework
capable of incorporating the main benefits of both approaches would be
appreciated.
The two kinds of dynamic systems are illustrated in Figure 3.2. In both
approaches, entities have a definite boundary, an internal state and interact with
others through message exchanging mechanisms. The difference is that agent-
oriented approaches tend to work with relatively more specialized state and
message sets. This suggests the view of agents as specialized objects (Uhrmacher
1997). Indeed, objects represent individual entities with some degree of autonomy
that exchange messages when events are triggered. As an example, in the DEVS
formalism, introduced in Section 2.2.1, an agent can be modeled as an atomic
DEVS model, perceiving and acting on the environment through its input and
output ports. It can act reactively to external perturbations using its external
transition function and also proactively with its internal transition function.
Figure 3.2 – Object- and Agent-Oriented Simulation
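The view of an agent as an atomic DEVS-like model, described above, can be sketched as follows. This is an illustrative sketch, not the formal DEVS API: the method names and the patrol scenario are assumptions, but the structure mirrors the external (reactive) and internal (proactive) transition functions plus a time advance:

```python
INFINITY = float("inf")

class PatrolAgent:
    """Proactively patrols; reacts when an 'alarm' input arrives."""
    def __init__(self):
        self.state = "patrolling"

    def time_advance(self):
        # schedules the next proactive (internal) event
        return 5.0 if self.state == "patrolling" else INFINITY

    def internal_transition(self):
        # proactive behavior: fires when time_advance elapses
        self.state = "patrolling"

    def external_transition(self, elapsed, input_event):
        # reactive behavior: triggered by an external perturbation
        if input_event == "alarm":
            self.state = "responding"

    def output(self):
        return f"agent is {self.state}"

a = PatrolAgent()
print(a.time_advance())             # 5.0: next proactive action is scheduled
a.external_transition(2.0, "alarm")
print(a.output())                   # agent is responding
```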
Several toolkits have been implemented using the AOS approach on top of
OOS, such as JAMES II (Himmelspach and Uhrmacher 2007), SeSAm (Klügl
2009), RePast (North et al. 2006) and Swarm (Minar et al. 1996). Those toolkits
also benefit from the greater maturity of OOS engines, since OOS is considerably
older than AOS in the simulation field. In particular, pure AOS toolkits tend to use
equidistant time steps for all simulation elements, neglecting the fact that complex
realistic simulations often use different time scales in different sub-elements
(Troitzsch 2009). The toolkits mentioned above overcome this limitation by
inheriting the discrete-event time representation of their underlying OOS
approaches.
By embedding agents into object-oriented simulation systems, it is possible
to overcome some typical restrictions of multi-agent test beds by combining
agents and other types of objects in the same simulation. This flexibility helps
integrate existing dynamic models within multi-agent systems, thereby
producing a more realistic simulation (Uhrmacher 1997). These observations lead
to the following decision:
Decision 2: The proposed framework will adopt the basic characteristics of
object-oriented simulation. Its simulation elements will keep their internal states
hidden and they will be organized as a network. The simulation elements will
interact with adjacent elements by sending events to each other. Additional
libraries will provide support for more specialized elements, such as agents.
The following arguments further justify the decision:
If agents are modeled as objects, then the simulation platform will
impose fewer restrictions on the nature of its elements.
OOS models exhibit greater modularity than AOS models and
thereby facilitate reuse. Additionally, OOS models make it easier to
integrate other dynamic modeling formalisms, such as State Charts,
Petri Nets and Cellular Automata (Himmelspach and Uhrmacher
2007).
It is still possible to provide agent-oriented or other higher level
formalisms by creating libraries on top of the basic OOS layer.
The adoption of these basic characteristics does not necessarily mean that
the proposed framework is a special case of OOS. Indeed, non-OOS
characteristics will still be considered in what follows. Therefore, it is still useful
to further analyze other characteristics of AOS.
Unlike AOS, OOS lacks the notion of a global environment. Each object has
its own environment defined by its input and output couplings. This issue is not
specific to OOS, but common to all kinds of systems aiming at modularity and
code reuse (Uhrmacher 1997). However, the notion of environment is present in
most gaming
frameworks. This happens naturally because one of the main features of games is
precisely the simulation of the interaction of actors that takes place in some
environment.
In OOS, common environments, such as spatial structures, can be modeled
as a specialized object or a composition of objects. Models such as Timed Cell-
DEVS (Wainer and Giambiasi 2001) have been tested in the domain of cellular
spaces. Indeed, if agents are modeled as specialized objects, it seems natural that
their environment is also modeled as specialized objects. This is a good example
of how object-orientation strives for uniformity, treating communication and
interaction with the environment indistinctly as discrete events, as in Figure 3.2.
Although it keeps a high level of modularity, modeling the environment
as a regular object has a potentially substantial drawback. Consider a large set of
agents which need to sense a large volume of environmental data with some
frequency. Since the internal states of all objects are hidden, the agents cannot
read all that data directly. The environment has to copy and transmit all necessary
data to each sensing agent via messages, which may be unacceptable, depending
on the number of agents and the frequency at which they sense the environment. In
fact, this was one of the reasons for the rise of AOS, when OOS was already a
well established modeling paradigm.
In the context of training games, one may have to sacrifice modularity in
favor of better performance, which in turn implies direct access to the game
environment state. This observation leads to the following decision:
Decision 3: The environment is modeled as a simulation element whose
internal state will be directly queried by other simulation elements. In all other
aspects, it will be treated as a regular simulation element. Since the environment
is an exception to encapsulation, it will not be provided in additional libraries,
like specialized agents. Instead it will be part of the framework specification.
The following argument further justifies the decision:
If the environment is treated as a regular object, the performance
overhead will be prohibitive, especially in the case of multi-agent
systems.
This decision does not mean that the environment will be modeled
monolithically as a single data structure. Modularity can still be achieved through
composition of smaller data structures. This form of composition will depend on
the kind of data structure of a particular environment implementation. Therefore,
it is not included in the framework, which is on a more general level of
abstraction.
In OOS, each object is responsible both for keeping its state and for defining its
behavior. Objects interact directly through input-output coupling relations, which are
usually organized in a fixed structure. However, in computer games, as commonly
happens in AOS, interaction between game elements is often determined by
spatial proximity. That interaction may be direct or indirect, through the
environment. If two elements sense and act on the same piece of the
environment, they establish an indirect causal relation among their actions. In
most cases, that environment represents a physical space where these causality
links arise from the proximity of the physical locations of the game elements.
This justifies the high importance given to collision detection in the area of
gaming and, more generally, to spatial algorithms and spatial indexing in AOS.
There are three main reasons why most games use the centralized physical
environment approach. First, as already mentioned in Section 2.1.1, it is important
to group all rendered game elements in a specialized data structure to improve
rendering performance (Sowizral 2000; Metello et al. 2007). Second, it makes it
more natural to model actions that depend on spatial relationships. Lastly, object-
oriented approaches usually offer little support for dynamic changes to the
coupling structure of their objects, which is necessary in the case of moving objects
that interact by proximity, as in computer games.
All these arguments suggest that the environment object should keep not
only the spatial structure of the game but also the physical representation of its
elements. This approach differs from traditional OOS by splitting the physical and
behavioral states of game elements into two different objects. Physical states
should be part of the environment and be deprived of their proactivity. Elements
representing behaviors should be responsible for animating their physical
counterparts. These elements shall be called processes.
Decision 4: There will be two types of simulation elements: environment
and processes. The space and physical state of simulation elements will be
implemented as parts of the environment, which is deprived of any proactive
behavior. Any kind of behavior will be implemented in processes, which are
responsible for providing behavior to all physical elements in the simulation.
The main arguments for this decision are the following:
It allows the grouping of all physical game elements in a specialized
and possibly spatially indexed data structure, which helps improve
the performance of the rendering algorithms as well as of the spatial
algorithms.
Depriving the environment of proactivity does not impose any
restrictions on the supported simulation models. If a particular model,
for some reason, considers an environment that evolves in time, that
environment should be divided into two parts: a physical state and
a process that interacts with the physical state and implements
its behavior.
Both agents and other types of simulation objects can be modeled in
this framework by splitting them into physical and behavioral parts.
The physical part is modeled as part of the environment and the
behavioral part is modeled as a process.
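Decision 4 can be sketched as follows. This is a minimal, illustrative sketch, not the framework implementation; the class names and the movement scenario are assumptions. The environment keeps the physical state and has no proactive behavior, while a process animates its physical counterpart by sending events:

```python
class Environment:
    """Holds the physical state of all elements; no proactive behavior."""
    def __init__(self):
        self.positions = {}

    def handle_event(self, element_id, new_position):
        # state changes only in response to events sent by processes
        self.positions[element_id] = new_position

class MoveProcess:
    """Behavioral element: animates the physical element it is bound to."""
    def __init__(self, element_id, velocity):
        self.element_id = element_id
        self.velocity = velocity

    def step(self, env, dt):
        pos = env.positions[self.element_id]        # read access
        # the process acts on the environment by sending it an event
        env.handle_event(self.element_id, pos + self.velocity * dt)

env = Environment()
env.positions["truck"] = 0.0
mover = MoveProcess("truck", velocity=10.0)
mover.step(env, dt=2.0)
print(env.positions["truck"])   # 20.0
```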
3.2.3 On the Interaction between Elements
Decision 4 stated that a simulation is composed of an environment and a set
of processes. Therefore, the possible types of interaction between elements are
inter-process interaction and interaction between a process and the environment.
Considering the case of inter-process interaction, decision 2 defined that the
simulation elements interact with each other as in OOS, by sending events to each
other. This form of interaction allows the modular design of complex processes as
a composition of interacting sub-processes as, for example, in the coupled DEVS
formalism, described in Section 2.2.1. The same form of interaction could also be
used to implement the communication between cognitive agents, where the
processes that model the behavior of the agents need to exchange messages.
The case of process-environment interaction needs further discussion
because processes are allowed to access the internal state of the environment, as
stated in decision 3.
Since the framework provides the notion of a global environment, the
traditional modularity of OOS approaches is weakened. In OOS,
the global state is typically distributed across the objects, which have dependency
relations only with their immediate neighbors in the coupling structure. Therefore,
some special care is necessary to design the interaction between the environment
and other elements in order to keep a reasonable degree of modularity and reuse.
In order to reuse a process in different simulations with different
environments, it is necessary that this process perceives those different
environments in the same way, with the same set of possible states. One simple
approach to accomplish this is the notion of environment views, which is similar
to the concept of interface in object-oriented programming languages. An
interface basically defines a type of object with a definite set of possible states. An
object that implements that interface can be perceived as an object of that type.
The same notion applies to environment views. Different environments providing
the same view can be perceived in the same way. This helps reduce dependency
relations, improving the modularity and reuse of simulation models.
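The analogy with object-oriented interfaces can be sketched as follows (the view and environment names are illustrative assumptions): two different environments expose the same view, so the same process can be reused with either.

```python
from abc import ABC, abstractmethod

class TerrainView(ABC):
    """A view: a definite set of perceivable states."""
    @abstractmethod
    def elevation_at(self, x, y): ...

class GridEnvironment(TerrainView):
    def __init__(self, grid):
        self.grid = grid
    def elevation_at(self, x, y):
        return self.grid[y][x]

class FlatEnvironment(TerrainView):
    def elevation_at(self, x, y):
        return 0.0

def climb_cost(view: TerrainView, x, y):
    # this process depends only on the view, not on a concrete environment
    return 1.0 + view.elevation_at(x, y) * 0.1

print(climb_cost(GridEnvironment([[5.0]]), 0, 0))  # 1.5
print(climb_cost(FlatEnvironment(), 0, 0))         # 1.0
```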
Up to this point, the discussion covered the topic of environment perception,
concluding that external elements should have read access to the internal state of
the environment. This raises the question of whether external elements should also
have write access, which in turn inevitably leads to the problem of concurrency.
This problem also arises in AOS and, more generally, in the broader field of
multi-agent systems (Michel et al. 2009). Practically all OOS formalisms provide
some mechanism to deal with it. It has even led to the creation of new formalisms,
such as Parallel DEVS (Zeigler et al. 2000).
Since our approach adopts the main characteristics of OOS, the
straightforward solution is to adopt a well-established concurrency mechanism
from some OOS formalism. In this case, any simulation element that intends to
change the environment state has to do it by sending events to it and not by direct
writing. Modeling actions that alter the state of the environment as regular OOS
events certainly makes the simulation framework more uniform, in the sense that
state transitions are always propagated in the same way throughout the simulation
elements, including the environment.
One might argue that this approach could lead to performance loss due to
the fact that a process is forced to make a copy of the information it sends to the
environment, in order to send it as an event. This is the same case as the
performance problem discussed in decision 3, only in the opposite direction.
However, we believe that this will not be performance-costly in most cases
because of the semantics of what it means to send information to the environment.
A process sends an event to the environment when it acts on it, causing changes. It
is reasonable to assume that a process will know exactly what it wants to change
when it decides to act on the environment. Therefore, it can send only the
necessary information to perform the change. The problem that led to
decision 3 is that, when information flows in the opposite direction, the
environment hardly knows precisely what information the process will really
need. Therefore, in that case, it is better to let the process query the environment
state.
Decision 5: Processes interact with each other by exchanging events, as in
pure OOS. Processes also affect the environment by sending events to it.
However, to observe the environment state, processes will directly access an
environment view. The environment will provide a set of views, each one defined
by a set of perceivable states.
The following arguments further support this decision:
The propagation of state transitions in the network of simulation
elements is uniformly done by events, as in pure OOS. Hence, many
formal properties of OOS are incorporated, such as concurrency
control.
Accessing the internal state of the environment indirectly via
environment views increases modularity. The same set of processes
can be reused in different simulations with different environments,
as long as the environments provide the necessary views.
3.2.4 The Process-Oriented Simulation Paradigm
The decisions taken in sections 3.2.2 and 3.2.3 lead to a process-oriented
simulation (POS) paradigm, which is a hybrid paradigm with notions adopted
both from OOS and AOS.
Figure 3.3 – Process-Oriented Simulation
If we consider only the operational characteristics of the simulation
elements, the POS paradigm is very close to OOS, with the exception of the
environment read access method. This exception can even be abstracted as if
each read access query were composed of two regular events, one for the query
and one for the answer. However, it should not be implemented that way, for the
performance reasons discussed in Section 3.2.2. This abstraction gives POS the
possibility of inheriting many interesting formal properties from OOS, such as
universality and closure under coupling/composition (Zeigler et al. 2000).
If we consider the semantics of the simulation elements, the POS paradigm
inherits the notion of environment from AOS. However, there is a conceptual
difference between the two. In AOS, the boundaries of the simulation elements are
usually defined by the boundaries of entities in the real system they attempt to
simulate. For example, in the popular case of the simulation of an insect colony,
there is usually an agent for each insect. Likewise, in social simulations, there are
agents representing people or institutions. In POS, the simulation elements are
defined first by their nature – physical or behavioral – and then further divided
according to their complexity in order to achieve modularity. For example, in an
insect colony, there may be one single process responsible for implementing the
behavior of all insects in the simulation. Likewise, in a social simulation, the
behavior of a person could be divided into different parts such as production,
consumption and leisure, each one implemented by a different process.
These characteristics of POS clearly aim to isolate the physical
representation of whatever is being simulated, while keeping traditional
simulation properties. This makes POS, just like AOS, suitable for simulations
with highly specialized forms of environment representation, such as scene graphs
and GIS-based spatial structures and databases (Gimblett 2002; Gonçalves et al.
2004).
3.2.5 Process Creation and Destruction
Predicting in advance what is going to happen in a simulation is usually
hard. In fact, that is one of the reasons simulations exist. On top of that,
simulations may involve sources of non-determinism such as coin flips or human
interactions. That raises the relevant question of whether processes should be
allowed to be created and destroyed during the execution of a simulation. If all
simulation activity could be easily predicted in advance, all necessary processes
could be instantiated at the beginning of the simulation and it would not be
necessary to create or destroy them at execution time.
If we assume that no process can be created at execution time, a
reasonably complex simulation with many different possible outcomes could
potentially produce one (or both) of the following situations:
The appearance of complex processes, which can act in a number of
different ways, according to the evolution path of the simulation.
This would hurt modularity.
A large number of smaller processes instantiated at the beginning of
the simulation to handle every possible scenario that arises during
execution. This potentially leads to huge inefficiencies, because it is
not known in advance which processes will actually play some role
in the simulation.
Allowing processes to be created at execution time makes it possible to
avoid these situations. In fact, AOS-based toolkits typically allow the creation and
destruction of agents at execution time. In contrast, many traditional OOS
approaches, such as basic DEVS, do not consider this kind of structural
change during simulation execution. This limitation has been felt in a number of
research works and has led to extensions of some OOS formalisms to support
variable object structures (Uhrmacher 2001).
In the context of discrete-event simulations, the creation of a process may be
considered as an event. In fact, it can be abstracted as an instantaneous state
transition, where the process leaves the state of non-existence and assumes the
initial state of its lifetime. Likewise, its destruction may also happen
instantaneously as another event. This observation suggests a simple mechanism
for process creation and destruction. Instead of sending events to other processes
or to the environment, a process can output a special type of event causing the
creation or destruction of another process. By analogy with execution threads, one
can say that a process can fork other processes. In fact, processes can be seen as
threads in a multithreaded programming environment. If we continue the analogy,
a process that has forked another process is referred to as its parent process.
Likewise, the forked process is called the child process.
The hierarchical structure of processes induced by the process forking
model provides the additional benefit of allowing abstraction levels when
reasoning about processes. For example, a workflow may be represented as a
single parent process which forks a child process for each action that is executed
in the workflow. Hence, the whole workflow may be seen as one single process or
as a set of actions according to the desired abstraction level.
Decision 6: Processes may fork and destroy other processes by outputting a
special type of event, which is part of the framework definition. The creation and
destruction of processes happen instantaneously with respect to simulation time,
just like regular events.
The following arguments further support this decision:
Allowing a dynamic simulation structure helps keep the
simulation models modular and simple.
Parent process hierarchies allow reasoning about and designing
processes at multiple abstraction levels.
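The forking mechanism of Decision 6 can be sketched as follows, using the workflow example above. This is a hedged, illustrative sketch: the `Fork` event type, the scheduler function and the action names are assumptions, not the framework specification. A parent process emits a special creation event per child, which the simulator applies instantaneously with respect to simulation time:

```python
class Fork:
    """Special event type: requests the creation of a child process."""
    def __init__(self, child):
        self.child = child

class ActionProcess:
    def __init__(self, name):
        self.name = name
    def output(self):
        return []   # leaf process: emits no structural events

class WorkflowProcess:
    """Parent process: forks one child per workflow action."""
    def __init__(self, actions):
        self.actions = actions
    def output(self):
        return [Fork(ActionProcess(a)) for a in self.actions]

def apply_structural_events(processes, events):
    # creation happens instantaneously with respect to simulation time
    for ev in events:
        if isinstance(ev, Fork):
            processes.append(ev.child)
    return processes

procs = [WorkflowProcess(["load", "transport", "unload"])]
procs = apply_structural_events(procs, procs[0].output())
print([p.name for p in procs[1:]])  # ['load', 'transport', 'unload']
```

Depending on the abstraction level, the simulation can be seen as one workflow process or as its three child actions.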
3.3 The Process-DEVS Formalism for Process Modeling
The result of the discussion in Section 3.2 was an abstract framework to
model the dynamics of training games. This section describes an instantiation of
the framework, called Process-DEVS, which extends the DEVS formalism
(Zeigler 1972) to work with the process-oriented simulation paradigm. DEVS was
chosen as the basis for our framework because of its formal properties, such as
universality and closure under composition (Zeigler et al. 2000).
3.3.1 Formal Model
We start the description of the formal model with the definition of an
abstract simulation element or, simply, an element. Then, we define two classes of
elements, process and environment. Finally, we introduce two specializations of
processes, input processes and output processes, designed for interacting with
external asynchronous entities.
An operational semantics of these concepts is described in Section 3.3.2.
Figure 3.4 – The Hierarchy of Simulation Elements in Process-DEVS
Processes and environments have very similar behaviors. Therefore,
defining how they are simulated in terms of the abstract notion of simulation
elements leads to a more concise way of describing the simulation mechanisms.
Simulation Element
A simulation element is defined by a tuple of the form (an intuitive
explanation of the components follows the formal definition):
⟨S, V, X, Y, E, P, δint, δext, λ, ρ, ta⟩
where
S is the set of possible internal states
V = {(Vi, πi) | i = 1, …, n} is a set of views that provide external read
access to the internal state of this element, where
Vi is the set of view states of the ith view
πi: S → Vi is the view mapping function of the ith view
X is the set of acceptable input events
Y is the set of possible output events
E is a set of environment view states
P is the set of elements that this element can create and destroy
δint: S × E → S ∪ {finished} is the internal transition function
δext: Q × E × (X ∪ {finish}) → S ∪ {finished}
is the external transition function, where
Q = {(s, e) | s ∈ S, 0 ≤ e ≤ ta(s)} is the total state set
e is the time elapsed since the last transition
λ: S → 2^Y is the output function
ρ: S → 2^P × 2^P is the process structure transition function
ta: S → [0, ∞] is the time advance function
The terms S, X, Y, δint, δext, λ and ta have basically the same meaning as in
the basic DEVS formalism, introduced in Section 2.2.1. The set S defines all
possible internal states an element may assume. The sets X and Y define all
possible input and output events of the element, at any time in the simulation.
These three sets have exactly the same meaning as in DEVS.
The functions δint and δext define all state transitions of the element. They are
essentially the same functions defined in the basic DEVS formalism, with some
minor differences.
The function δint is the internal transition function. It is responsible for
defining the proactive behavior of the element. This function is invoked by the
simulator after ta(s) units of time have passed since the last state transition, where
s is the internal state resulting from that last transition. The output of this function
defines the next state of the element. The special output value finished means that
this will be the last state transition of this element, which will cease to exist in the
simulation from that time instant on. In order to compute the state transition, δint is
allowed to read the current state of the element, as well as the environment state,
according to how this element perceives the environment.
The function δext is the external transition function. It is responsible for
defining the reactive behavior of the element. This function is invoked by the
simulator whenever the element receives an event from another element. Its
output is handled in exactly the same way as that of δint, but its input is quite
different: it is allowed to read the event that the element is receiving. If the
element receives the special event finish, this will be its last state transition and it
will cease to exist in the simulation. This last call to the transition function allows
the process to finish gracefully, releasing resources and sending events to inform
other processes. The external transition function is also allowed to access the
environment state, the current element state and the time elapsed since the last
transition; the elapsed time is not needed in δint because there it always equals
ta(s).
The terms λ and ta have exactly the same meaning as in DEVS. λ(s) defines
which events are output by the element after any state transition is performed.
ta(s) defines the time delay the simulator will wait before calling δint again, if no
events are received until then.
The terms V and E define the way an element may access the internal state
of another element. The set V defines the views that this element provides,
through which other elements can access a particular view of its internal state.
Note that the internal state is not accessed directly. Instead, external elements are
only allowed to access view states of the views defined by V. Those view states
are determined by the view mapping functions applied to the current internal state.
The set E defines the environment view states in which the element can perceive
its environment. The formalism only allows an element to access the state of
exactly one other element. This follows because there will be only one
environment in a simulation, and this is the only element that will have its internal
state accessed through its views.
Finally, the terms P and ρ define the mechanisms for dynamic creation and
destruction of elements. The set P contains all elements that this element can
create and destroy. The function ρ outputs two sets of elements, one for the
elements that are created and one for those that are destroyed, whenever a state
transition has been performed.
As defined here, simulation elements do not have input and output ports, as
in the DEVS with ports formalism (Zeigler et al. 2000), described in Section
2.2.1. This is merely for notational simplicity, since this decision is unimportant at
a conceptual level. It is understood here that the main benefit of ports for games is
to improve performance by allowing a more efficient event routing method. Since
performance is extremely important for games, the model should be easily
extendable to embrace port support, even though it is not relevant to the
discussion on an abstract level. Ports could be easily added to the framework with
minor changes in this notation: simply by representing inputs and outputs by pairs
(port, event), exactly as in DEVS with ports.
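To make the tuple concrete, a simulation element can be sketched as a Python interface. This is an illustrative sketch only: the class name `SimulationElement`, the method names (`delta_int`, `delta_ext`, `output`, `rho`, `ta`) standing in for δint, δext, λ, ρ and ta, and the `Timer` example are all assumptions, not part of the formal definition.

```python
import math

class SimulationElement:
    """Interface mirroring (S, V, X, Y, E, P, delta_int, delta_ext,
    lambda, rho, ta); method names are illustrative stand-ins."""

    def delta_int(self, s, env_view):
        """Internal transition: returns the next state, or 'finished'."""
        raise NotImplementedError

    def delta_ext(self, s, elapsed, env_view, x):
        """External transition on incoming event x (or 'finish')."""
        raise NotImplementedError

    def output(self, s):
        """lambda(s): the set of events emitted after a transition."""
        return set()

    def rho(self, s):
        """Process structure transition: (created, destroyed) sets."""
        return (set(), set())

    def ta(self, s):
        """Time advance; math.inf means 'wait for external events'."""
        return math.inf

class Timer(SimulationElement):
    """A process that outputs 'tick' once after a delay, then finishes."""
    def __init__(self, delay):
        self.delay = delay
    def delta_int(self, s, env_view):
        return "finished"          # one shot: fire and cease to exist
    def output(self, s):
        return {"tick"}
    def ta(self, s):
        return self.delay
```

The `Timer` subclass shows the intended division of labor: ta schedules the proactive transition, δint performs it, and λ reports its observable effect.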
Environment
An environment is an element ⟨S, V, X, Y, E, P, δint, δext, λ, ρ, ta⟩ that
satisfies the following constraints:
(1) E = {nil}
(2) P = ∅ ∧ (∀s ∈ S)(ρ(s) = (∅, ∅))
(3) (∀s ∈ S)(∀e1 ∈ ℝ+)(∀e2 ∈ ℝ+)(∀x ∈ (X ∪ {finish}))
(δext((s, e1), nil, x) = δext((s, e2), nil, x))
(4) (∀s ∈ S)(∀e ∈ ℝ+)(∀x ∈ (X ∪ {finish}))
(δint(s, nil) ≠ finished ∧ δext((s, e), nil, x) ≠ finished)
(5) (∀s ∈ S)(ta(s) ∈ {0, ∞})
Constraint (1) just states that the environment does not directly access the
internal state of any other element. Constraint (2) states that the environment does
not alter the simulation structure. Constraint (3) states that state transitions do not
depend on the elapsed time since the last transition. Constraint (4) guarantees that
the environment never finishes. Constraint (5) deprives the environment of
proactive behavior, which means that its state does not change with the flow of
time, if no events are received. The ta function is allowed to output the value 0 in
order to allow transient states (Zeigler et al. 2000). Transient states are states that
do not have duration: when reached, they are immediately left again. They
are commonly used as intermediate states in a state transition to produce different
outputs according to the previous state of a given system. Function ta is also
allowed to output the special value ∞, meaning that the internal transition function
will not be invoked at least until the next event is received, at which point the
external transition function is invoked and ta is evaluated again.
Even though the environment does not act proactively, it is still allowed to
output events in response to state changes, which are always caused by the arrival
of another event. The purpose of these output events is to alert processes about
state changes in the environment, which is analogous to the observer pattern in
object-oriented design patterns (Gamma et al. 1995). If we deprived the
environment of the ability to send events to processes, any process that needs to
respond to changes in the environment would have to check it with a minimum
frequency, which would lead to inefficiencies.
For ease of notation, the environment is defined by the simplified
structure ε = ⟨S, V, X, Y, δint, δext, λ, ta⟩, subject to constraints (3), (4) and (5),
representing the element ⟨S, V, X, Y, {nil}, ∅, δint, δext, λ, ρ, ta⟩, where
(∀s ∈ S)(ρ(s) = (∅, ∅)).
Processes
A process is a simulation element ⟨S, V, X, Y, E, P, δint, δext, λ, ρ, ta⟩ such
that V = ∅. This means that the internal state of a process is not directly accessible
by any other element. For ease of notation, a process is defined by the
structure ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩, representing the simulation element
⟨S, ∅, X, Y, E, P, δint, δext, λ, ρ, ta⟩.
Intuitively, a process represents an activity carried out in a simulation.
Processes may be created and destroyed dynamically during a simulation. In order
to reason about processes in time, it is possible to determine their start times and
finish times. Process creation and destruction are usually defined by the function
ρ, which is invoked right after every state transition, and before the simulation
time is advanced any further. Therefore, the start and finish times of processes are
determined by the time instants of the state transitions that triggered their creation
and destruction. There is still another way to destroy a process, which is by
suicide: whenever a state transition leads to the special state finished, the finish
time is naturally defined by the time instant of that transition. Having well-defined
start and finish times enables some interesting analytical properties such as, for
example, the representation of process execution histories in a way very similar
to Sowa’s discrete event process model (Sowa 2000).
Whenever a process creates another process, it is said that the parent
process, which is the creator process, has forked a child process, which is the
created process. Process forking, besides providing simulations with structural
dynamism, also brings abstraction levels to process modeling. Forking is a form
of abstraction and modularity in the sense that a process may delegate some of its
sub-tasks to its children. Hence, modularity and abstraction are achieved in a
different way than in the coupled DEVS formalism (Zeigler et al. 2000), described
in Section 2.2.1.
I/O Processes
Input processes and output processes are processes dedicated to managing
the communication with asynchronous entities external to the simulation. This
communication is modeled as exchange of events between a process and an
external entity. Hence, any process can communicate with some external entity in
the same way it communicates with other processes. The input and output
processes act as one-way channels. They receive events from the sender side and
store those events in their internal state until the receiving side requests the events
to be flushed. Therefore, I/O processes act as streams of events.
In order to represent event streams, it is necessary to use lists, instead of
sets. The following notation is used to represent lists: S* is the set of all possible
lists formed with elements of S; [e1, e2, … , en] represents the list formed by the
elements e1, e2, … , en, in that order; [] represents the empty list; [head | tail]
represents a list that has the element head as its first element, followed by all the
elements in the list tail, in the same order. For example,
[e1 | [e2, e3]] represents the list [e1, e2, e3].
An input process receives events from external entities, where the events are
taken from a set I, and sends them to other processes. An input process over I is
formally defined as pin = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩, where
S = I* × (I ∪ {nil})
X = I
Y = I
E = {nil}
P = ∅
δint((list, out), nil) = (list, nil), if list = []
                      = ([e1, … , en−1], en), if list = [e1, … , en] ≠ []
δext(((list, out), e), nil, x) = ([x | list], nil), if x ≠ finish
                              = (list, nil), if x = finish
λ((list, out)) = {out}, if out ≠ nil
               = ∅, if out = nil
ρ(s) = (∅, ∅)
ta((list, out)) = ∞, if list = []
                = 0, if list ≠ []
When an event is received by the input process, it is stored in the internal
state. As soon as possible, the input process outputs the events stored in its
internal state to other processes in the same way as any other simulation element.
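The input process definition above can be sketched in Python. This is an assumption-laden illustration: the class name and the methods `delta_int`, `delta_ext`, `output` and `ta` are stand-ins for δint, δext, λ and ta, events are plain strings, and the environment argument (always nil) is omitted.

```python
import math

class InputProcess:
    """Sketch of the input process over a set I; the state is a pair
    (events, out) where the list head holds the newest event."""

    def delta_ext(self, state, x):
        events, _out = state
        if x == "finish":
            return (events, None)
        return ([x] + events, None)       # prepend: [x | list]

    def delta_int(self, state):
        events, _out = state
        if not events:
            return (events, None)
        return (events[:-1], events[-1])  # pop the oldest event

    def output(self, state):
        _events, out = state
        return {out} if out is not None else set()

    def ta(self, state):
        events, _out = state
        return math.inf if not events else 0

p = InputProcess()
s = ([], None)
s = p.delta_ext(s, "a")   # event "a" arrives from the external entity
s = p.delta_ext(s, "b")   # event "b" arrives next
s = p.delta_int(s)        # oldest event ("a") is popped for output
```

Because new events are prepended and δint pops from the tail, the stored events are forwarded in first-in, first-out order, as expected of a stream.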
An output process receives events from other processes, taken from a set O
of events, and sends them to external entities. An output process over O is
formally defined as pout = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩, where
S = O*
X = O
Y = ∅
E = {nil}
P = ∅
δint(list, nil) = list
δext((list, e), nil, x) = [x | list], if x ≠ finish
                        = list, if x = finish
λ(s) = ∅
ρ(s) = (∅, ∅)
ta(s) = ∞
The output processes store the events they receive in a list. This list is used
to generate a stream of events for entities which are external to the simulation.
This will be formally defined in Section 3.3.2.
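The output process can be sketched analogously. The `flush` helper below is not part of the formal tuple; it is a hypothetical shorthand for the read-and-clear behavior that Section 3.3.2 defines through read_output and clear_output_processes.

```python
import math

class OutputProcess:
    """Sketch of the output process over a set O: a passive buffer
    that accumulates events until the simulator flushes it."""

    def delta_ext(self, events, x):
        if x == "finish":
            return events
        return [x] + events        # [x | list]

    def delta_int(self, events):
        return events              # no proactive behavior

    def ta(self, events):
        return math.inf            # purely reactive: wait for events

    def flush(self, events):
        """Illustrative read-and-clear of the buffered event stream."""
        return events, []

p = OutputProcess()
buf = []
buf = p.delta_ext(buf, "e1")       # event from some simulation process
buf = p.delta_ext(buf, "e2")
stream, buf = p.flush(buf)         # external entity reads the stream
```

With ta always ∞ and λ empty, the output process never acts inside the simulation; it only accumulates state for the external side to read.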
Simulation
Environments and processes are parts of the broader notion of simulation. A
simulation is basically a container of simulation elements with some additional
information. Besides the environment and a set of processes, it also defines the
event coupling structure between these elements and a view-process coupling
map. The event coupling structure defines the recipients of events generated by
any element, while the view-process coupling map defines which environment
view is accessible to each process.
In the formal definition of a simulation, the operator “∙” (dot) is used to
access a property of a given structure. Therefore, “S∙p” should be interpreted as
“property p of structure S”.
A simulation is formally defined as
SIM = ⟨SE, s0, P0, cs, vmap, τ⟩
where
SE is the set of all simulation elements, which must include a single
environment element. We define the following subsets of SE and single out
the environment in SE:
Pin is the set of input processes in SE
Pout is the set of output processes in SE
ε = ⟨S, V, X, Y, δint, δext, λ, ta⟩ is the (only) environment in SE
s0: SE → ∪e∈SE e∙S, where s0(elem) ∈ elem∙S, is the initial state map
P0 is the initial set of running processes, which must be a subset of the set of
processes in SE
cs: SE × SE → {true, false} is the event coupling structure
vmap: SE – {ε} → ε∙V is the view-process coupling map
τ: 2^FC – {∅} → FC is the tiebreak function, where FC = {itf_call(e) | e ∈ SE}
∪ {etf_call(e, evt) | e ∈ SE ∧ evt ∈ (e∙X ∪ {finish})} is the set of all
possible transition function calls, which is explained below
During the lifetime of a simulation run, a number of elements are
simultaneously simulated. An element can be either a process or the environment.
The special predicate isEnvironment(e) will be used when it is necessary to
differentiate between both types. For any given element e, isEnvironment(e)
implies that the constraints of the environment definition apply to e. Likewise,
¬isEnvironment(e) implies that the constraints of the process definition apply to e.
The set SE contains, besides the environment, all processes that can be
executed in a simulation run. It is possible that some of the processes in SE are
never started, depending on the course of the simulation run. Function s0 maps
each element into its initial internal state. The set P0 defines the processes that are
started exactly at the simulation start time. Each simulation is allowed to have
only one environment ε.
When an element outputs an event, the coupling structure cs determines
which elements receive it. cs(Esend, Ereceive) = true means that the element
Ereceive should receive events from the element Esend. The view-process coupling
map serves a similar purpose with respect to the capabilities of processes to query
the internal state of the environment. vmap(p) returns the environment view that the
process p is allowed to access.
The tiebreak function τ defines the order in which concurrent events are
processed. State changes in simulation elements are caused either by the receipt
of an event from another element or by the expiry of the time returned by the time
advance function (ta) of the element in its last state transition. In the first case, the
external transition function (δext) of the element is called, while in the second case
the internal transition function (δint) is called. In both cases, the call to the
transition function returns the next state of the element. The set FC in the
definition of the tiebreak function τ contains all possible calls to any transition
function of any simulation element, where itf_call(e) and etf_call(e, evt) represent,
respectively, calls to the internal and external transition functions of e with their
respective parameters. Given any set of transition function calls, the tiebreak
function defines a total ordering over it.
Let SIM = ⟨SE, s0, P0, cs, vmap, τ⟩ be a simulation. Recall that
Pin is the set of input processes in SE
Pout is the set of output processes in SE
ε = ⟨S, V, X, Y, δint, δext, λ, ta⟩ is the (only) environment in SE
The simulation SIM must obey the following constraints:
(1) P0 ⊆ SE
(2) (∀e ∈ SE)(e∙P ⊆ (SE – {e, ε}))
(3) isEnvironment(ε)
(4) (∀e ∈ SE)(isEnvironment(e) → e = ε)
(5) (∀efrom ∈ SE)(∀eto ∈ SE)(cs(efrom, eto) = true → efrom∙Y ⊆ eto∙X)
(6) (∀pin ∈ Pin)(∀p ∈ SE)(cs(p, pin) = false)
(7) (∀pout ∈ Pout)(∀p ∈ SE)(cs(pout, p) = false)
(8) (∀p ∈ SE)(vmap(p) = (Vi, πi) → Vi ⊆ p∙E)
(9) (∀S ∈ 2^FC – {∅})(∃c ∈ FC)(τ(S) = c ∧ c ∈ S ∧
(∀S′ ∈ 2^FC)((S′ ≠ ∅ ∧ S′ ⊆ S ∧ c ∈ S′) → τ(S′) = c))
Constraint (1) assures that the initial processes are all simulation elements of
SE (the I/O processes and the environment are simulation elements in SE, by
definition). Constraint (2) assures that all dynamically-created elements are
processes of this simulation. Constraints (3) and (4) determine that there must be
only one environment in a simulation. Constraint (5) assures that any element that
receives an event from another element will know how to handle it. Constraints
(6) and (7) state that no process can send events to input processes or receive
events from output processes. The I/O processes are one-way event streams.
Constraint (8) assures that all processes will receive an understandable state when
they query their environment view. Constraint (9) assures that the tiebreak
function τ represents, in fact, a total ordering over the set of all possible transition
function calls.
The basic working model of a simulation is illustrated in Figure 3.5. Process
P1 has two child processes, P11 and P12. All of them, including the parent P1, can
send events to the environment and to other processes. Processes Pout, Pin1 and Pin2
are I/O processes. They are responsible for the communication between human
players and the rest of the simulation. Through environment views V1 and V2, the
processes P11 and P12 observe the state of the environment.
Figure 3.5. The Simulation Model
The two environment views act as interfaces, providing a mechanism for
processes to get information about the internal state of the environment at any
time. The idea is to allow processes to access the environment through simplified
views. Hence, modularity can be increased because processes need not understand
the full environment state. The same process can work on any environment that
provides the view used by that process.
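As a small illustration of this modularity, views can be thought of as projection functions over the environment state. The domain below (a fire in a hypothetical emergency-training environment) and all names are invented for the example; the formalism only requires that each view expose a mapped portion of the internal state.

```python
# Sketch of environment views as projection functions; the environment
# state and the view names are hypothetical.

def fire_view(env_state):
    """pi_1: expose only what a fire-fighting process needs."""
    return {"burning_cells": env_state["burning_cells"]}

def weather_view(env_state):
    """pi_2: expose only the weather."""
    return {"wind": env_state["wind"]}

env_state = {"burning_cells": {(3, 4)}, "wind": "NE", "terrain": "forest"}

# A process coupled to fire_view never sees 'wind' or 'terrain', so it
# works on any environment that provides this view.
view = fire_view(env_state)
```

The process depends only on the view's shape, not on the full environment state, which is exactly why the same process can be reused across environments.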
3.3.2 Operational Semantics
This section presents the operational semantics of simulations in Process-
DEVS. Before the introduction of the model, it is necessary to define some basic
notation that will be used throughout the rest of this thesis.
As in the previous section, the operator “∙” (dot) is used to represent a
property of a given object. Therefore, “O∙p” should be interpreted as “property p
of object O”. Lists are represented in the form [e1, e2, … , en], where [] is the
empty list.
The operator ⊕ is used in expressions of the form f = g ⊕ (d, r), where f
and g are functions with the same domain and range sets, and f(x) is equal to g(x)
for all values of x except d, for which f(d) = r.
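With functions represented as dictionaries, this override operator can be sketched in a few lines (the name `override` is illustrative):

```python
def override(g, d, r):
    """f = g (+) (d, r): f agrees with g everywhere except f(d) = r.
    The original mapping g is left untouched."""
    f = dict(g)   # copy, so g is not mutated
    f[d] = r
    return f

g = {"p1": "idle", "p2": "running"}
f = override(g, "p1", "finished")
```

This is the operation used later to update Estate and Elast_t after each state transition.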
The abstract notion of simulation element will help simplify the definition of
the operational semantics because most of the time processes and the environment
are treated in the same way. Whenever it is necessary to distinguish them, the
special predicate isEnvironment(e) will be used, with the same semantics as in the
definition of simulation, in Section 3.3.1.
Each element has a definite start time and a definite finish time. No element
can start before the simulation run starts and no element continues to execute after
it has finished. The environment is always in execution during the simulation run
and it does not make sense for it to have start and finish times different from the
simulation start and finish times.
Let SIM = ⟨SE, s0, P0, cs, vmap, τ⟩ be a simulation.
The execution of SIM is determined by a sequence of simulation states in
time. The foundations of discrete-event based simulation require that all changes
to the simulation state be instantaneous. A simulation state change is caused either
by the creation, destruction, or state transition of any of its simulation elements.
A simulation execution state of SIM is defined as
SS = ⟨t, Eactive, Estate, Elast_t, EQ⟩
where
t is the current simulation time
Eactive ⊆ SE is the set of active simulation elements
Estate: SIM∙SE → ∪e∈SIM∙SE (e∙S ∪ {finished}),
where Estate(e) ∈ (e∙S ∪ {finished}), is the function that maps each
element e ∈ SIM∙SE into the current internal state of that element
Elast_t: SIM∙SE → ℝ+ is the last transition time map
EQ is the event queue, which stores the next scheduled events
The initial state of a simulation SIM is
⟨0, P0, s0, lt0, eq0⟩
where
(∀e ∈ SE)(lt0(e) = 0)
eq0 = {(0, itf_call(e)) | e ∈ P0}
The initial simulation time is 0. It is increased as the simulation advances.
Each simulation state ss stores the current simulation time t. Naturally, a
simulation run does not produce a different simulation state for each possible time
instant. Instead, the state ss may jump directly to another state ss′, with current
time t+Δt, provided that no event is scheduled to happen in that time interval. The
simulation state also includes the current execution state of all simulation
elements, given by Eactive, Estate and Elast_t. The set Eactive contains all elements that
are currently active. The functions Estate and Elast_t give the internal state and the
timestamp of the last state transition of each element. Finally, the simulation state
keeps an event queue, which stores all events currently scheduled to happen.
The event queue is the main component of most discrete-event simulators.
In our case, it contains scheduled calls to the transition functions of simulation
elements. Each element of the queue assumes one of the two forms
(ts, itf_call(e)), for internal transition function calls, or (ts, etf_call(e, evt)), for
external transition function calls, where ts is the time instant of the scheduled call,
e is the simulation element and evt is the event that is passed as parameter in the
case of external transition function calls.
All transition function calls are serialized with respect to time. If two calls
are scheduled to happen at the same time, the tiebreak function of the simulation
defines the order in which they are called. Given an event queue EQ, the next
transition function to be called is given by the next_call operation:
(1) next_call(EQ) = (∞, nil), if EQ = ∅
                  = (t, c), if EQ ≠ ∅
where (t, c) ∈ EQ ∧ (∀(t′, c′) ∈ EQ)(t′ ≥ t) ∧ c = τ({c″ | (t, c″) ∈ EQ})
This assures that the next call always has the least timestamp in EQ. If there
is more than one call with that timestamp, the tiebreak function defines
which one is the next call. If EQ is empty, it returns a nil call with an
infinite timestamp.
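The next_call operation can be sketched directly in Python. The event queue is a set of (timestamp, call) pairs, calls are represented as strings, and the tiebreak function shown (a fixed lexicographic order) is an illustrative assumption; any total ordering over calls would do.

```python
import math

def next_call(eq, tiebreak):
    """Sketch of next_call: the earliest timestamp wins; ties among
    simultaneous calls are resolved by the tiebreak function."""
    if not eq:
        return (math.inf, None)          # nil call, infinite timestamp
    t = min(ts for ts, _call in eq)
    simultaneous = {call for ts, call in eq if ts == t}
    return (t, tiebreak(simultaneous))

# Illustrative tiebreak: a total order given by the calls' names.
tiebreak = lambda calls: min(calls)

eq = {(5.0, "itf_call(p2)"), (3.0, "etf_call(p1,e)"), (3.0, "itf_call(p1)")}
t, c = next_call(eq, tiebreak)
```

Here both calls at time 3.0 are simultaneous, so the tiebreak function, not the queue, decides which executes first, which is what makes simulation runs deterministic.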
The next operators are defined in order to provide means of manipulating
the event queue. Each operator returns another event queue, which is the result of
the operation.
(2) remove_calls(EQ, e) = {(ts, c) ∈ EQ | c∙e ≠ e}
This operator removes all transition function calls of simulation element e.
(3) schedule_itf_call(EQ, e, ts) =
(EQ – {(t, c) ∈ EQ | c = itf_call(e)}) ∪ {(ts, itf_call(e))}
This operator schedules an internal transition function call for simulation
element e at ts, replacing all other calls to that function in EQ.
(4) send_events(EQ, Events, Eto, ts) =
EQ ∪ {(ts, etf_call(eto, evt)) | eto ∈ Eto ∧ evt ∈ Events}
This operator schedules calls to external transition functions generated by
the act of sending a set of events to a set of simulation elements. That means
scheduling calls to all receiving elements, one for each event.
(5) destroy(EQ, e, ts) =
send_events(remove_calls(EQ, e), {finish}, {e}, ts)
This operator performs the changes in EQ when a simulation element is to
be destroyed. It removes all calls to e and sends a finish event to it. This is
done so that, when receiving the finish event, the process has a chance of
releasing resources and informing others of its destruction.
(6) create_destroy_elements(EQ, Ecreate, Edestroy, ts) =
schedule_itf_call( … schedule_itf_call(DTQ, ec1, ts) … , ecn, ts)
where
DTQ = destroy( … destroy(destroy(EQ, ed1, ts), ed2, ts) … , edm, ts)
Ecreate = {ec1, … , ecn}
Edestroy = {ed1, … , edm}
This operator performs the changes in EQ relative to the creation of the
elements in Ecreate and the destruction of those in Edestroy. When creating an
element, it is only necessary to schedule an initial internal transition
function call at the time the element is created.
(7) schedule_transition_events(EQ, e, snext, ts, Events, Eto, Ecreate, Edestroy) =
schedule_itf_call(send_events(create_destroy_elements(EQ, Ecreate, Edestroy,
ts), Events, Eto, ts), e, ts + e∙ta(snext))
This operator performs all changes in EQ generated by a state transition of a
simulation element. First, it creates and destroys the elements defined by
Ecreate and Edestroy. Then, it propagates the events in the set Events to the
processes in Eto. Finally, it schedules the next internal transition function of
e.
Now that the operations on the event queue are defined, we can define the
operators to manipulate the simulation state. The element_state_transition
operator defines how the simulation state is changed in the case of a state
transition of an element.
(8) If snext ≠ finished:
element_state_transition(SS, e, snext, ts) =
⟨ts, SS∙Eactive ∪ Ecreate, SS∙Estate ⊕ (e, snext), SS∙Elast_t ⊕ (e, ts),
schedule_transition_events(SS∙EQ, e, snext, ts, Events,
{pto ∈ SIM∙SE | SIM∙cs(e, pto) = true} ∩ SS∙Eactive, Ecreate, Edestroy)⟩
where
Events = e∙λ(snext)
(Ecreate, Edestroy) = e∙ρ(snext)
This operator produces a new simulation state, after a state transition of
element e to the state snext, at the time instant ts. The set of elements created
by the transition is added to the set of currently active elements. However,
the set of destroyed elements is not subtracted yet, since they still need to
receive and treat the special event finish, before they are deactivated. The
new state snext is assigned as the new internal state of e and ts becomes the
timestamp of its last transition. Finally, it is only necessary to update the
event queue with the effects of this state change by invoking the proper
operator.
(9) If snext = finished:
element_state_transition(SS, e, snext, ts) =
⟨ts, SS∙Eactive – {e}, SS∙Estate ⊕ (e, finished), SS∙Elast_t ⊕ (e, ts),
remove_calls(SS∙EQ, e)⟩
This operator computes the new simulation state when an element performs
a transition to the special state finished. This operation is relatively simple
and consists basically of removing all execution information about the
element e.
The following operators perform transition function calls on elements of the
simulation:
(10) remove_call(SS, call) = ⟨SS∙t, SS∙Eactive, SS∙Estate, SS∙Elast_t, SS∙EQ – {call}⟩
This simply removes a transition function call from the event queue EQ.
(11) process_call(SS, ts, itf_call(e)) =
element_state_transition(remove_call(SS, (ts, itf_call(e))), e, snext, ts)
where
snext = e∙δint(SS∙Estate(e), view)
view = πe(SS∙Estate(SIM∙ε)) and (Ve, πe) = SIM∙vmap(e)
This operator executes an internal transition function call on element e, at
time ts. It first removes the scheduled call from the event queue. Then, it
computes the state change caused by the function call.
(12) If evt ≠ finish:
process_call(SS, ts, etf_call(e, evt)) =
element_state_transition(remove_call(SS, (ts, etf_call(e, evt))), e, snext, ts)
where
snext = e∙δext((SS∙Estate(e), ts – SS∙Elast_t(e)), view, evt)
view = πe(SS∙Estate(SIM∙ε)) and (Ve, πe) = SIM∙vmap(e)
This operator executes an external transition function call on element e, at
time ts.
(13) If evt = finish:
process_call(SS, ts, etf_call(e, evt)) =
element_state_transition(element_state_transition(remove_call(SS,
(ts, etf_call(e, finish))), e, snext, ts), e, finished, ts)
where
snext = e∙δext((SS∙Estate(e), ts – SS∙Elast_t(e)), view, finish)
view = πe(SS∙Estate(SIM∙ε)) and (Ve, πe) = SIM∙vmap(e)
This operator executes an external transition function call on element e, at
time ts, when the special event finish is received by e.
The basic procedure for computing how the simulation state changes with
time consists of retrieving transition function calls from the event queue
and executing them in the right order. To determine the simulation state at
simulation time t, it is necessary to execute all state transitions scheduled to
happen between the current time and t. The advance function provides a recursive
procedure for advancing the simulation state.
(14) advance(SS, Δt) =
⟨SS∙t + Δt, SS∙Eactive, SS∙Estate, SS∙Elast_t, SS∙EQ⟩, if nc∙ts > SS∙t + Δt
advance(process_call(SS, nc), Δt – (nc∙ts – SS∙t)), if nc∙ts ≤ SS∙t + Δt
where
nc = next_call(SS∙EQ)
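The recursion in (14) can be rendered iteratively as a sketch. This is an illustration only: the simulation state is reduced to a time and an event queue, `process_call` is passed in as a stub to expose just the control flow, and tuple comparison stands in for next_call with its tiebreak function.

```python
import math

def advance(ss, dt, process_call):
    """Execute all calls scheduled up to ss['t'] + dt, then jump there.
    ss is a dict with keys 't' (time) and 'eq' (set of (ts, call))."""
    target = ss["t"] + dt
    while True:
        pending = {(ts, c) for ts, c in ss["eq"] if ts <= target}
        if not pending:
            ss["t"] = target             # no more calls in the interval
            return ss
        ts, call = min(pending)          # stand-in for next_call + tiebreak
        ss["eq"].discard((ts, call))
        ss["t"] = ts                     # time jumps to the call instant
        process_call(ss, ts, call)       # may schedule further calls

log = []
ss = {"t": 0.0, "eq": {(2.0, "itf_call(p1)"), (7.0, "itf_call(p2)")}}
advance(ss, 5.0, lambda ss, ts, call: log.append((ts, call)))
```

Note how time never advances past the next scheduled call: the call at 2.0 executes, the one at 7.0 stays queued, and the state jumps directly to 5.0.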
The simulation, as defined so far, does not interact with any external
entities. The advance operator is responsible for updating the simulation execution
state considering solely the internal dynamics of the simulation. In order to make
the simulation interactive, it is also necessary to describe how the simulation state
changes when receiving input, as well as when generating output to some external
entity.
All input and output are handled by the input processes and output
processes. They are part of the simulation definition. Each input process receives
events from an external entity and stores them in its internal state. As soon as
possible, it transmits those events to their recipients. The output processes work in
the opposite direction. They receive events during the simulation advance and
store them in their internal states. When the simulator decides to flush the output
events, the internal states of the output processes are read and cleared. The
simulation inputs and outputs are represented as lists of the form
[(e1, p1), (e2, p2), ... , (en, pn)]
where ei is an event and pi is its corresponding I/O process, as defined in Section
3.3.1. In the case of processing an input, the simulation state is changed as defined
by the flush_input operator:
(15) flush_input(SS, Input) =
process_call(…process_call(SS, SS∙t, etf_call(p1,e1))…, SS∙t, etf_call(pn,en))
where
Input = [(e1, p1), ... , (en, pn)]
{p1, ... , pn} ⊆ (SIM∙Pin ∩ SS∙Eactive)
This operator generates one external transition function call on the
corresponding input process for each received event.
In the case of processing the output generated by the simulation, it is
necessary to read the information stored in the output processes and clear them
afterwards, so that the same information is not read again in the next output:
(16) read_output(SS) = CONCAT(SS∙Estate(p1), SS∙Estate(p2), … , SS∙Estate(pn))
where
CONCAT(l1, l2, … , ln) is the concatenation of lists l1, l2, … , ln
{p1, p2, ... , pn} = SIM∙Pout ∩ SS∙Eactive
This operator reads all information stored in the output processes. Note that
the state of an output process is a list of events. Therefore, the elements
from SS∙Estate(p) can be concatenated directly.
(17) clear_output_processes(SS) =
element_state_transition( … element_state_transition(SS, p1,
[], SS∙t) … , pn, [], SS∙t)
where
{p1, p2, ... , pn} = SIM∙Pout ∩ SS∙Eactive
This operator clears all information stored in the output processes by forcing
a transition to the empty state [].
The flush_io function consolidates all the input and output operations. It
receives the current simulation state and an input set, and outputs the next
simulation state and the output set.
(18) flush_io(SS, Input) =
(flush_input(clear_output_processes(SS), Input), read_output(SS))
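Operators (15)-(18) can be sketched together. This is a hedged illustration, not the thesis implementation: element states are assumed to live in a plain dict mapping process names to event lists, and the operators are written imperatively rather than as nested functional applications.

```python
# Hypothetical sketch of operators (15)-(18): input/output processes are
# named elements whose state is a list of events kept in `estate`.
def flush_input(estate, inputs):
    """(15): deliver each (event, process) pair to its input process."""
    for event, p in inputs:
        estate[p] = estate.get(p, []) + [event]
    return estate

def read_output(estate, out_procs):
    """(16): concatenate the event lists stored in all output processes."""
    return [ev for p in out_procs for ev in estate.get(p, [])]

def clear_output_processes(estate, out_procs):
    """(17): force every output process back to the empty state []."""
    for p in out_procs:
        estate[p] = []
    return estate

def flush_io(estate, inputs, out_procs):
    """(18): read the pending output, then clear the output processes and
    deliver the new input, mirroring the order in the formal definition."""
    output = read_output(estate, out_procs)
    estate = flush_input(clear_output_processes(estate, out_procs), inputs)
    return estate, output
```

A single flush thus returns the accumulated output while leaving the output processes empty, so the same events are never read twice.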
The advance function describes how the simulation state is changed in time
considering only the internal simulation mechanisms. The flush_io function
describes how it changes when the communication with external entities is
synchronized, without changing the simulation time. A full simulation run with
external communication is described by a sequence of interleaved calls to these
two functions, depending on when the external messages were exchanged. There
are several ways to define how to interleave simulation time advance with
external communication synchronization. This definition of the Process-DEVS
operational semantics does not restrict implementations in that sense. Some
examples of how to implement different interleaving mechanisms are discussed in
Section 5.5.
3.4 Summary
In order to design a framework for modeling the dynamics of training
games, Section 3.2 discussed the identified requirements. That discussion led to
the conception of the process-oriented simulation (POS) paradigm. Even though
the requirements originated from the domain of training games, the decisions do
not contain any specific semantics of this domain. Therefore, it is quite possible
that the decisions that resulted from the discussion also apply to other simulation
domains, especially those that require highly specialized data structures for the
environment and those that involve processes of different nature interfering with
each other.
The abstract framework was instantiated as the Process-DEVS dynamic
modeling formalism, formally presented in Section 3.3.1 as an extension of the
original DEVS [Zeigler 2000]. Finally, Section 3.3.2 formally defined the
operational semantics of Process-DEVS.
4 Integrating Existing Formalisms
The previous chapter introduced a framework for modeling interactive
simulations based on a discrete-event approach. This chapter shows how to
implement some of the common dynamic modeling formalisms on top of this
framework. The formalisms were chosen according to the requirements
enumerated in Section 1.3.
Section 4.1 describes how to define a simulation process from a workflow
description. Section 4.2 provides a modular way for modeling processes that act
on cell spaces. Section 4.3 explores how to model multi-agent systems. Finally,
Section 4.4 extracts some knowledge from these three sections about interesting
patterns in which to structure processes in a simulation for greater modularity.
Section 4.5 summarizes the chapter.
4.1 Workflows
As it was mentioned in Section 2.4, action plans are usually represented as
workflows in AI planning, which is increasingly being adopted in game AI, not
only for serious games but also for entertainment games (Nareyek 2004).
Therefore, it would be useful to be able to simulate workflows in the
Process-DEVS formalism, thereby allowing the use of automatic planners within
the simulation logic.
Additionally, most of the so-called Business Process Management (BPM)
systems represent business processes as workflows (Weske 2007). In fact,
business process simulation (BPS) provides a more powerful way of analyzing the
performance of business processes than static analysis tools and methods (Tumay
1996; Modarres 2006). Therefore, being able to simulate workflows would allow
training games based on Process-DEVS to participate in the optimization and
reengineering of business processes.
Since this work is focused on requirements of training games that are not so
common in entertainment games, the ability to simulate business processes will
be used as the main motivation.
4.1.1 Motivation: Business Process Modeling
In business administration, almost all kinds of organizations have developed
the need for formally expressing their activities. In fact, standardization of
business processes becomes crucial for organizations to keep control of what is
happening as they become larger and more complex. In order to meet that goal,
the adoption of the so-called Business Process Management (BPM) systems has
grown significantly over the years (Weske 2007).
Testing the quality of business processes and the performance of the teams
responsible for executing them is important to ensure efficiency. Besides other
initiatives, such as field exercises, the use of computational simulation can be a
cost-effective and efficient mechanism to help testing, validating, improving and
reengineering business processes. Simulation can be used for different purposes
such as estimating, in advance, if there will be enough resources and time for
executing a specific action or supporting complex decisions when there are too
many possibilities. One additional benefit of simulation is the possibility of
simulating how the environment and other entities will respond to the execution of
the business process. As an example, it is extremely important to anticipate the
behavior of physical phenomena, such as the dispersion of leaked chemical
products in the environment, considering a business process to handle this kind of
emergency scenario.
In the context of computer training games, it is also important to evaluate
whether a given player has made the proper decisions during play. Therefore, for
this kind of player performance evaluation, it seems natural to model player
activity as workflows. This would help compare the player's actions with
predefined business processes or action plans considered the right way to handle
the situation.
One other direct benefit of integrating workflows with simulation is the
possibility of detecting flaws in established business processes and help
improve them. In fact, simulation could be integrated in a cyclic way with the
business planning process as illustrated in Figure 4.1.
Figure 4.1 – The role of simulation in the planning process
In the context of this work, integrating workflows with other simulation
formalisms means modeling workflows on top of Process-DEVS, introduced in
Chapter 3. Since previous research work has suggested that discrete-event
simulation is the most adequate tool for simulating business processes (Tumay
1996), Process-DEVS should be adequate for the task.
Since there are numerous formalisms for workflow representation (van der
Aalst 2003; Weske 2007), it is necessary to select one first.
4.1.2 A Discussion on Workflow Representation
Since there are many different languages and representations for workflows,
this section starts by describing the workflow representation used here.
A workflow is essentially defined by a set of actions and a control structure.
The actions define what should be done and the control structure defines in which
order the actions should be executed in a given situation (van der Aalst 2003). The
control structure is usually defined in the form of a graph. Although some
representations restrict this form to a tree structure, this is clearly a specific case
of the graph structure. Therefore, for the sake of generality, we shall represent
workflows as graphs. The nodes of the graph represent either an action or a
control flow pattern. The most common types of patterns are splits and joins, in
various flavors (van der Aalst 2003). The edges of the graph are connections that
inform, for any given node, which nodes should be triggered next when its
execution finishes. There are many different workflow representations which
differ from each other in some of the patterns they allow. For the sake of
simplicity, we shall consider only the five basic patterns defined in (van der Aalst
et al. 2003): sequence, parallel split, synchronization, exclusive choice and simple
merge. Figure 4.2 shows a very simplified version of a contingency plan for a
situation where some oil has leaked into the sea. The example is composed of
these five basic patterns.
Figure 4.2 – Workflow for an oil-leak situation with the five basic patterns
At the start of the plan, there are two parallel split nodes (PS), meaning that
the actions of stopping the leak, installing containment barriers and detecting the
oil type should be started in parallel. After the leak has been stopped, the sequence
pattern states that the action of finding the causes should be started. After the
actions of installing the barrier and detecting the oil type have finished, the
workflow reaches a synchronization point (Syn). That means that both actions
must finish before the workflow execution can continue through that path. After
both actions have finished, the exclusive choice (EC) pattern queries the
environment state to find out whether the barrier actually prevented the oil from
reaching the coast. If the oil has not reached the coast, the recovery (of the oil from
the sea) procedure is started. Otherwise, a cleaning (of the oil from the) coast
procedure should be executed. Either the recovery procedure or the coast cleaning
action will be executed, after which the simple merge (SM) pattern allows the
workflow to continue through its outgoing path. After both the proper procedure
has been executed and the causes of the leak have been found, a final report is
produced. In order to guarantee that these two preconditions are met, there is a
synchronization point right before the final report production action.
Both actions and each of the basic patterns, except for the sequence pattern,
are nodes in the workflow graph. Each node connects some incoming nodes to
some outgoing nodes. When a node finishes its execution, it may trigger some of
its following nodes. Likewise, a node is triggered only when some incoming node
has finished its execution. Table 4.1 lists some of the characteristics of each basic
pattern.
pattern            incoming nodes   outgoing nodes   trigger condition                  following activity
sequence           1                1                completion of incoming node        trigger outgoing node
parallel split     1                n                completion of incoming node        trigger all outgoing nodes
synchronization    n                1                completion of all incoming nodes   trigger outgoing node
exclusive choice   1                2 (or n)         completion of incoming node        trigger one of the outgoing nodes
simple merge       n                1                completion of any incoming node    trigger outgoing node

Table 4.1 – Characteristics of basic workflow patterns
The number of incoming and outgoing nodes indicates the number of
incoming and outgoing connections each pattern may have. The trigger condition
indicates the condition upon which the pattern is triggered. Finally, the following
activity describes which of the outgoing nodes should be triggered after the
pattern is triggered.
Optionally, the parameters or inputs of the individual actions are also
represented. Likewise, an action may also produce some data as output. That data
could be consumed either by another action executed after it or by some
conditional split operator in the control flow. Therefore, in order to represent
action input and output, a data flow may also be represented. Note that the data
flow does not follow the same paths as the control flow. However, it should
obviously obey the ordering restrictions imposed by the control flow, since an
action cannot consume output data from another action that has not yet been
executed.
The simplest way to model the data flow is to define an environment state
which is accessible from the workflow process and its actions. Each time an
action or a control operator needs some input data, it can get it from this
environment state. Likewise, when an action produces some data, it should store it
in the environment state so that later actions can read it. Hence, any information
stored in the environment state can be used by the exclusive choice operators to
evaluate their conditions when they are triggered. Environment states are most
commonly defined as a set of variables. Figure 4.3 depicts a workflow with an
environment state of this kind. It shows actions writing and reading from variables
and one exclusive choice operator reading from them.
Figure 4.3 – Workflow with an environment state defined by the variables v1, v2 and v3
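The data-flow mechanism just described can be sketched concretely. The sketch below is purely illustrative: the variable names v1-v3 follow Figure 4.3, but the specific actions, the values they write, and the choice condition are hypothetical.

```python
# Hypothetical environment state as a variable dictionary (Figure 4.3 style):
# actions write their outputs into it, and an exclusive-choice operator reads
# it to pick a branch.
env = {"v1": None, "v2": None, "v3": None}

def detect_oil_type(env):
    env["v1"] = "light_crude"     # action output stored for later readers

def install_barriers(env):
    env["v2"] = True              # records that the barrier was installed

def oil_contained(env):
    # Exclusive-choice condition: reads variables written by earlier actions.
    return env["v2"] is True and env["v1"] is not None

detect_oil_type(env)
install_barriers(env)
branch = "recovery" if oil_contained(env) else "clean_coast"
```

Note that the choice condition never communicates with the actions directly; the ordering restrictions of the control flow guarantee that the variables it reads have already been written.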
Most workflow representations do not define the time at which the actions
should be executed, and how long they will take to finish. In fact, many
representations assume that actions are atomic.
Workflows do not necessarily define how actions affect the environment.
One approach is to define actions through their pre- and post-conditions,
following the tradition of AI planning systems (Fikes and Nilsson 1971). But note
that this representation still assumes actions to be atomic. However, the
assumption that actions are atomic may be too restrictive. For example, consider
the action of walking. It may not be realistic to change the position of the
character from the origin to its destination in one single instantaneous step.
Instead, it is more realistic to simulate the trajectory of the character to the
destination point through multiple state changes, so that his trajectory may be
observed. If the environment model requires that actions have duration and make
changes to the environment during their executions, it is necessary to model
actions as processes in time. For this purpose, the definition of a process in
Section 3.3 comes in handy. In fact, process-oriented simulation, on which
Process-DEVS is based, provides an excellent basis for simulating workflows.
Some previous work based on object-oriented simulation had to extend the basic
formalism to accommodate workflows (Wagner et al. 2009).
Modeling workflow actions as processes makes the workflow itself a form
of process composition. Going one step further, the whole workflow may also
itself be represented as a process. Since the Process-DEVS framework allows
processes to fork children processes during their execution, the workflow process
can be modeled as a process which orchestrates the execution of its children
processes, namely the action processes.
Figure 4.4 – Workflow and action processes
The workflow process needs to be informed when an action process is
finished executing its action, so that it can continue with the workflow execution.
In order to implement that, all action processes must output an action_finished
event, informing the workflow process when they are finished.
Since Process-DEVS also provides the notion of environment, the
environment is naturally used to represent the environment state of the workflow
processes. A specific environment view is provided by the environment to serve
as the environment state as the workflow process perceives it. The action
processes may, but are not required to, perceive the environment through the
same view. As any other kind of process, the action processes act on the
environment by sending events to it.
This way of representing workflows has two main advantages:
- By separating the workflow control logic from the execution of actions, a
higher level of modularity is achieved.
- Representing both workflows and actions as processes allows hierarchical
workflow composition. A workflow action may be a sub-workflow.
4.1.3 A Formal Workflow Model
This section introduces a formal workflow model in three stages. First, it
introduces the simpler notion of workflow. Then, it defines the notion of
execution states. When a workflow is executed, it produces a sequence of
execution states as its nodes are triggered. Finally, it defines the notion of
workflow process as a process in the sense of Section 3.3.1.
Workflow Definition
A workflow is defined as a tuple WF = ⟨A, ES, N, E, entry, exit, type⟩, where
(1) A is the set of action processes that WF can execute
(2) ES is the set of environment states that WF can perceive
(3) N is the set of nodes of the workflow graph of WF
(4) E ⊆ N × N is the set of edges of the workflow graph of WF
(5) entry ∈ N is the entry point of WF
(6) exit ∈ N is the exit point of WF
(7) type: N → T is a function that assigns to each node n in N a node type in T
The set A defines the actions WF can execute. These actions are actually
action processes that are forked by the workflow process when an action is
executed.
The situation in which the workflow process is embedded is the simulation
environment, as defined in Section 3.3.1. The set ES defines the states in which
the workflow process can perceive the environment. The environment state is
queried, for example, in the exclusive choice pattern (van der Aalst 2003), to
determine which actions are executed next.
The workflow control structure is defined by a graph, whose vertices are
nodes that belong to the set N and whose edges belong to the set E. The entry
point of the workflow is the node in which the execution starts. Likewise, the exit
point is the node where it finishes.
Two operators are defined to access the incoming and outgoing nodes of
any given node in the workflow graph:
(8) outgoing_nodes(n) = { m ∈ N | (n, m) ∈ E }
(9) incoming_nodes(n) = { m ∈ N | (m, n) ∈ E }
Given a node n ∈ N, type(n) assumes one of the following values:
(10) action(ap), where ap ∈ A, associates action process ap with node n. In this
case, n is called an action node of WF. There must not be two different
action nodes with the same action process:
(∀ n1, n2 ∈ N) (type(n1) = type(n2) = action(ap) → n1 = n2)
(11) parallel_split, which indicates that n is a parallel split node of WF.
(12) synchronization, which indicates that n is a synchronization join node of WF.
(13) exclusive_choice(ϕ), which indicates that n is an exclusive choice node of
WF, with choice function ϕ: ES → outgoing_nodes(n).
As defined above, the workflow graph may have an arbitrary structure,
which poses considerable difficulties when it comes to defining the notion of
workflow process and its operational semantics. We therefore introduce the
concept of well-formed workflows and, at the same time, a convenient notation to
express them.
Let A be a set of action processes. The set of well-formed workflow
programs over A and the set of well-formed workflows over A are inductively
defined as follows:
(15) An action process a in A is a well-formed workflow program over A that
defines the well-formed workflow
WF = ⟨{act}, ES, {act}, ∅, act, act, t⟩
where t(act) = action(act)
In the next definitions, let wf1 and wf2 be two well-formed workflow
programs and let WF1 = ⟨A1, ES1, N1, E1, entry1, exit1, type1⟩ and
WF2 = ⟨A2, ES2, N2, E2, entry2, exit2, type2⟩ be the well-formed workflows
they define. Assume that ES1=ES2, that is, WF1 and WF2 have the same set
of environment states. Define ES=ES1=ES2. Then:
(16) wf1 ; wf2 is a well-formed workflow program that defines the well-formed
workflow
WF = ⟨A1 ∪ A2, ES, N1 ∪ N2, E1 ∪ E2 ∪ {(exit1, entry2)}, entry1, exit2, t⟩
where t(n) = type1(n) if n ∈ N1
= type2(n) if n ∈ N2
(17) wf1 // wf2 is a well-formed workflow program that defines the well-formed
workflow
WF = ⟨A1 ∪ A2, ES, N1 ∪ N2 ∪ {sp, syn}, E1 ∪ E2 ∪ {(sp, entry1), (sp,
entry2), (exit1, syn), (exit2, syn)}, sp, syn, t⟩
where
t(sp) = parallel_split
t(syn) = synchronization
t(n) = type1(n) if n ∈ N1
= type2(n) if n ∈ N2
(18) Let Φ: ES → {true, false} be a choice function on ES. Then, Φ ? wf1 : wf2 is
a well-formed workflow program that defines the well-formed workflow
WF = ⟨A1 ∪ A2, ES, N1 ∪ N2 ∪ {ec, sm}, E1 ∪ E2 ∪ {(ec, entry1), (ec,
entry2), (exit1, sm), (exit2, sm)}, ec, sm, t⟩
where
t(ec) = exclusive_choice(ϕ), where ϕ: ES → {entry1, entry2} and
ϕ(es) = entry1 if Φ(es) = true
= entry2 if Φ(es) = false
t(sm) = simple_merge
t(n) = type1(n) if n ∈ N1
= type2(n) if n ∈ N2
The well-formed workflows are informally depicted in Figure 4.5. Their
entry and exit nodes are indicated by entering and leaving arrows respectively.
Figure 4.5 – Workflow definition operators and their graphical representation
As an example, the well-formed workflow illustrated in Figure 4.2 is
defined by the expression:
(("Stop Oil Leak" ; "Find Leak Causes") // (("Install Barriers" //
"Detect Oil Type") ; ("If Oil is Contained" ? "Recovery Procedure"
: "Clean Coast Procedure"))) ; "Evaluate Damage ..."
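The inductive definitions (15)-(18) can be sketched as graph-building combinators. This is an illustrative sketch only: the dict-based graph encoding, the function names seq/par/choice, and the numeric node identifiers are all assumptions, not the thesis notation.

```python
import itertools

_ids = itertools.count()

def _node(kind, label=None):
    # Fresh node identity so structurally equal nodes stay distinct.
    return (next(_ids), kind, label)

def action(name):
    """Definition (15): a single action is a well-formed workflow."""
    n = _node("action", name)
    return {"nodes": {n}, "edges": set(), "entry": n, "exit": n}

def seq(wf1, wf2):
    """Definition (16): sequence, connecting wf1's exit to wf2's entry."""
    return {"nodes": wf1["nodes"] | wf2["nodes"],
            "edges": wf1["edges"] | wf2["edges"] | {(wf1["exit"], wf2["entry"])},
            "entry": wf1["entry"], "exit": wf2["exit"]}

def par(wf1, wf2):
    """Definition (17): parallel split + synchronization around both branches."""
    sp, syn = _node("parallel_split"), _node("synchronization")
    return {"nodes": wf1["nodes"] | wf2["nodes"] | {sp, syn},
            "edges": wf1["edges"] | wf2["edges"]
                     | {(sp, wf1["entry"]), (sp, wf2["entry"]),
                        (wf1["exit"], syn), (wf2["exit"], syn)},
            "entry": sp, "exit": syn}

def choice(phi, wf1, wf2):
    """Definition (18): exclusive choice + simple merge around both branches."""
    ec, sm = _node("exclusive_choice", phi), _node("simple_merge")
    return {"nodes": wf1["nodes"] | wf2["nodes"] | {ec, sm},
            "edges": wf1["edges"] | wf2["edges"]
                     | {(ec, wf1["entry"]), (ec, wf2["entry"]),
                        (wf1["exit"], sm), (wf2["exit"], sm)},
            "entry": ec, "exit": sm}

# The oil-leak plan of Figure 4.2, built from the combinators:
plan = seq(
    par(seq(action("Stop Oil Leak"), action("Find Leak Causes")),
        seq(par(action("Install Barriers"), action("Detect Oil Type")),
            choice(lambda es: es["oil_contained"],
                   action("Recovery Procedure"),
                   action("Clean Coast Procedure")))),
    action("Evaluate Damage ..."))
```

Because every parallel split and exclusive choice is introduced together with its matching join, any workflow built this way is well-formed by construction.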
A well-formed workflow has the property informally stated as follows: no
node in the workflow is reached by more than one execution thread, with the
exception, of course, of synchronization nodes. This is easily verifiable because
the only kind of node that forks execution threads is the parallel split, which is
only produced by the parallel sub-graph composition operator, which always puts a
synchronization node where the two threads meet. This is necessary because it is
an assumption of the simple merge pattern that none of its incoming branches is
ever executed in parallel. Also, most workflow systems do not allow multiple
concurrent execution instances of the same action (van der Aalst 2003).
From now on, we assume that all workflows are well-formed.
Workflow Execution States
Before defining the workflow process in the format of the Process-DEVS
formalism, we shall first define the notion of workflow execution states, which
will serve as a basis for the definition of the workflow process.
The execution of a workflow is formally defined as a sequence of execution
states and may yield different results according to the execution environment,
which we shall refer to simply as the environment. During workflow execution,
the environment may be altered by an external process. Therefore, the current
environment state is also part of the workflow execution state.
Let WF = ⟨A, ES, N, E, entry, exit, type⟩ be a workflow. An execution state
of WF is a triple WS = ⟨Aexec, Synstate, ω⟩, where
Aexec ⊆ A
is a set that contains all action nodes that are in execution in WS.
Synstate: Syncs → 2^N
is a function that defines the internal states of all synchronization
nodes, where
Syncs = { s ∈ N | type(s) = synchronization } is such that
(∀ s ∈ Syncs) (Synstate(s) ⊆ incoming_nodes(s))
ω ∈ ES ∪ {not_started}
is the environment state of WS.
The set Aexec keeps all action nodes whose corresponding processes have
been forked but have not finished yet. Synstate is responsible for informing the
internal state of all synchronization nodes. These nodes need to keep an internal
state because they only trigger their outgoing node when all of their incoming
nodes have finished executing. Therefore, their internal state consists of a subset
of their incoming nodes, informing which ones have already finished executing.
This makes it possible to know precisely at which execution states the outgoing
node of a synchronization node is triggered. Finally, ω is the current environment
state, which is used to determine the right outgoing path of an exclusive choice
node when it is triggered. The special value not_started is used in the initial
execution state, when the workflow process has not yet started.
The initial execution state of WF is a state of the form WS0 = ⟨∅, Syn0,
not_started⟩, where Syn0(s) = ∅, for any synchronization node s of WF.
The execution state of a workflow changes when one of its nodes is
triggered. When a node in a workflow is triggered, it may cause other subsequent
nodes to be triggered in cascade. As an example, triggering a parallel split node
causes its outgoing nodes to be triggered. Once the cascade of node triggering has
finished, the workflow reaches a new execution state. The whole cascade of
triggers fired by the initial node trigger is considered atomic and characterizes one
single execution state transition.
In what follows, let WF = ⟨A, ES, N, E, entry, exit, type⟩ be a workflow and
WS = ⟨Aexec, Synstate, ω⟩ be an execution state of WF.
The function trigger: WS × N × (N ∪ {nil}) → WS, where WS is the set of
all states of WF, formalizes the effects of triggering a node. The function is
recursive to represent the triggering cascade. Intuitively, if trigger(WS, n, nprev) =
WSnext, then n represents the node that is being triggered, nprev is the incoming
node that caused the trigger and WSnext is the next execution state. Note that nprev
may assume the special value nil. That happens when n is the entry point of the
workflow and therefore has no incoming nodes.
The function trigger is defined according to the type of n (note that equations
(21) and (23) assume a specific cardinality of outgoing_nodes(n), which is
guaranteed by the constraints imposed on the workflow graph structure):
(19) If type(n) = action(ap), then
trigger(WS, n, nprev) = ⟨Aexec ∪ {ap}, Synstate, ω⟩
(20) If type(n) = parallel_split, then
trigger(WS, n, nprev) = trigger(…trigger(trigger(WS, n1, n), n2, n)…, nn, n)
where outgoing_nodes(n)={n1, n2, … , nn}
(21) If type(n) = synchronization, then
trigger(WS, n, nprev) = WS if n = exit
= chg_sync_state(WS, n, ψ) if n ≠ exit ∧ ψ ≠ incoming_nodes(n)
= trigger(chg_sync_state(WS, n, ∅), nnext, n) otherwise
where
chg_sync_state(WS, n, s) = ⟨Aexec, Synstate ⊕ (n, s), ω⟩
ψ = Synstate(n) ∪ {nprev}
outgoing_nodes(n) = {nnext}
(22) If type(n) = exclusive_choice(ϕ), then
trigger(WS, n, nprev) = trigger(WS, ϕ(ω), n)
(23) If type(n) = simple_merge, then
trigger(WS, n, nprev) = WS if n = exit
= trigger(WS, nnext, n) if n ≠ exit
where outgoing_nodes(n) = {nnext}
Now that the semantics of node triggering is defined, it is possible to specify
how a workflow is executed in the discrete-event simulation environment
described in Section 3.3.
Workflow Process
The workflow process keeps track of the workflow’s execution state and
forks action processes in order to simulate the actions described in the workflow.
Therefore, the responsibility of the workflow process is to determine the time each
action should be executed, according to the workflow control logic. The actual
execution of the actions is delegated to the action processes, which, in turn, have
the responsibility of notifying the workflow process of the exact time they finish
their execution by sending an action_finished event to it.
When the workflow process is notified about the completion of an action, it
computes the next execution state of the workflow and forks the corresponding
action processes if some action was started by this execution state transition.
These execution state transitions occur instantaneously with respect to simulation
time. Therefore, it is assumed that an action starts at the same time instant its
previous action finished. Just before computing a transition, the workflow process
needs to update its internal perception of the environment state, so that it always
considers the right environment state when computing a transition. When a
transition produces an execution state WS with Aexec = ∅, the workflow process is
finished. In this situation, there are no more executing actions and, therefore, no
more actions will be started because the workflow process will not receive any
more action_finished events that could possibly trigger them.
Let WF = ⟨A, ES, N, E, entry, exit, type⟩ be a workflow. Using the formalism
introduced in Section 3.3.1, the workflow process for WF is the tuple
WP = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩
where
(24) S = WS × 2^A, where WS is the set of all possible execution states of WF
The internal state of the workflow process has the form (ws, new_acts),
where ws is the current execution state and new_acts is the set of actions started in
the last execution state transition.
(25) X = {action_finished(ap) | ap ∈ A} is the set of input events
(26) Y = {action_finished(WP)} is the (unitary) set of output events
(27) E = ES
(28) P = A
The definitions of these components are straightforward. The workflow
process can receive events of the form action_finished(ap) from its children
processes when they finish executing their actions. Likewise, considering that this
workflow process can be a child of another workflow process, it should send an
event of the same type when the workflow has finished its execution. The
environment view of the workflow process is defined by the set of environment
states that the workflow can perceive. Finally, the set of possible children
processes is defined by the set of workflow actions.
The internal transition function δint is defined as follows:
(29) For each state WS = ⟨Aexec, Synstate, ω⟩ of WF,
δint((WS, out), env) = finished if ω ≠ not_started
= (WS′, A′exec) if ω = not_started
where WS′ = ⟨A′exec, Syn′state, ω′⟩ is the execution state of WF such that
WS′ = trigger(⟨Aexec, Synstate, env⟩, entry, nil)
Recalling the operational semantics defined in Section 3.3.2, the internal
transition function δint is called at the time a process is started. This function
defines the first execution state transition of WP. It triggers the entry node of the
workflow. When δint is called for the second time, which is characterized by
ω ≠ not_started, it finishes the process. During all workflow execution, only the
external transition function δext is used.
The external transition function δext is defined as follows:
(30) For each state WS = ⟨Aexec, Synstate, ω⟩ of WF,
δext(((WS, out), e), env, action_finished(ap)) = (WS′, A′exec – Aexec)
where
WS′ = ⟨Aexec – {ap}, Synstate, env⟩ if ap = exit
= trigger(⟨Aexec – {ap}, Synstate, env⟩, nnext, ap) if ap ≠ exit
where {nnext} = outgoing_nodes(ap)
This function is called when WP receives an action_finished(ap) event.
When that happens, the action that has just finished is removed from the set of
executing actions in WS and the environment state is updated to the current value
env. If ap is not the exit point of the workflow, its outgoing node is triggered.
Besides changing the execution state of the workflow, this node triggering may
also cause one or more actions to start execution. The set of started actions is
obtained by subtracting the set of executing actions of WS from that of the next
execution state WS′, i.e., A′exec – Aexec.
(31) For each state WS = ⟨Aexec, Synstate, ω⟩ of WF,
λ((WS, new_acts)) = {action_finished(WP)} if Aexec = ∅
= ∅ if Aexec ≠ ∅
This function makes the workflow process send the action_finished(WP)
event when the set of currently executing actions becomes empty, which is the
condition for finishing the workflow process.
(32) For each state WS = ⟨Aexec, Synstate, ω⟩ of WF,
ρ((WS, new_acts)) = (new_acts, ∅)
The action processes corresponding to the actions started in the last
execution state transition are forked.
(33) For each state WS = ⟨Aexec, Synstate, ω⟩ of WF,
ta((WS, new_acts)) = ∞ if Aexec ≠ ∅
                   = 0 if Aexec = ∅
This function states that the internal transition function δint should be called
for the second time (the first time is at the start of WP) only when the workflow
has reached its finish condition Aexec = ∅.
This process models all the workflow control logic. When the workflow
process is started, the δint function (definition (29)) is invoked. Then, as the
workflow executes, the δext function (definition (30)) is invoked multiple times
until the workflow has finished its execution. Each of those calls produces a new
execution state. The ρ function (definition (32)) informs which action processes
are forked after each execution state transition. When the workflow has finished
its execution, the δint function is invoked again to finish the process and the
output function λ (definition (31)) outputs the event action_finished(WP). The time-
advance function (definition (33)) assures that the δint function is only invoked for
the second time when the workflow execution has finished (i.e. when Aexec = ∅).
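To make this life cycle concrete, the following Python sketch mimics the interplay of δint, δext, λ and ta for a toy sequential workflow. The class and all names are illustrative assumptions, not part of the thesis formalism.

```python
class WorkflowProcess:
    """Toy sequential workflow: actions run one after the other (illustrative)."""

    def __init__(self, actions):
        self.pending = list(actions)    # actions whose nodes were not yet triggered
        self.executing = set()          # plays the role of Aexec
        self.started = False            # started <=> omega != not_started

    def delta_int(self):
        # First call (definition (29)): trigger the entry node.
        if not self.started:
            self.started = True
            if self.pending:
                self.executing.add(self.pending.pop(0))
            return None
        # Second call: the workflow process finishes.
        return "finished"

    def delta_ext(self, finished_action):
        # Definition (30): drop the finished action, trigger its outgoing node.
        self.executing.discard(finished_action)
        if self.pending:
            self.executing.add(self.pending.pop(0))

    def ta(self):
        # Definition (33): 0 once nothing executes, infinity otherwise.
        return 0 if (self.started and not self.executing) else float("inf")

    def output(self):
        # Definition (31): announce termination when nothing executes.
        return {"action_finished(WP)"} if (self.started and not self.executing) else set()
```

Running a two-action workflow, delta_int starts the first action, each action_finished event starts the next one, and once the executing set empties, ta drops to 0 and the output function emits action_finished(WP).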
4.1.4 Workflow Composition
As defined in the last section, a workflow is a form of process composition,
where the definitions of multiple processes are combined into the definition of one
larger process, namely the workflow process.
As a consequence of the way workflows were modeled, it is trivial to
compose a workflow with sub-workflows. Since a workflow process is a process,
it can be used as an action process of another workflow, just like any other kind of
process. Hence, it is possible to compose workflows hierarchically. Note that, in
order to allow this form of composition, it is essential that the workflow process
outputs an action_finished event when the workflow execution has finished. In
fact, its output function (definition (31)) does precisely that.
4.2 Cell Space Processes
It was mentioned in Section 2.2.2 that cellular automata (CA) have been
extensively used to model the dynamics of anthropic and natural phenomena. A
large variety of those models can be found in the GIS literature. The ability to
execute such models in a training game framework is in line with the goal of integrating
different formalisms and giving these games well-founded simulation realism.
Cell space models are a more general class of dynamic models that
comprises cellular automata, in which the requirements of a local neighborhood and
uniform transition rules are relaxed (Batty 2005). A cell space is a space representation
where the space is partitioned into a discrete set of cells. A cell is an atomic unit of
space which has a unique state at any given time. The idea of cell spaces is to
provide a discrete space representation for modeling dynamic spatial phenomena.
4.2.1 The Modularity Problem of Cellular Automata
Recalling the definition presented in Section 2.2.2, a CA is defined by a
tuple ⟨C, S, N, T⟩, where C is the cell set, S is the state set, N is the neighborhood
function and T is the state transition function. This monolithic structure contains
both the representation of space and the representation of a dynamic phenomenon
that happens in that space.
Despite its simplicity, this way of representing dynamic processes on cell
spaces presents some limitations. In particular, it does not allow external processes
to interfere with it. For example, imagine a CA that models the dispersion of oil
leaked into the sea. If a containment barrier is installed, it must interfere with the
dispersion process. In order to model this phenomenon with the strict CA
formalism, one has to combine both the logic of dispersion and the logic of
containment in the transition function of the CA. This makes the whole process
monolithic and therefore hurts the modularity and reusability of the model. The
notion of cell space models (Batty 2005), although more flexible than strict CA,
is still not capable of addressing that problem.
In order to achieve that kind of modularity, it is necessary to break up the
logic of cell state transitions. For this purpose, the principles of process-
oriented simulation discussed in Section 3.2.4 come in handy. The next section
describes how they can help solve the modularity problem in the context of the
Process-DEVS formalism.
4.2.2 Separating Behavior from Cell Space
In order to model cell space phenomena in Process-DEVS, it is necessary to
break the monolithic representation of cellular automata into two parts: the cell
space (CS) and the cell space process (CSP). The CS represents the physical
aspect of the CA and is, therefore, modeled as part of the simulation environment.
The CSP represents the behavioral aspect of the CA and is modeled as a process in
the sense of Section 3.3.1. The idea is that the CSP periodically perceives the state
of the CS through an environment view and generates one or more events
manifesting its intentions of changing the CS state. When these events reach the
environment, they cause a state transition of the CS. This whole procedure is
depicted in Figure 4.6.
Figure 4.6 – Breaking a CA into physical state (environment) and behavior (process)
The cell space is formally defined as
CS = ⟨C, S, I, eff⟩
where
C is the set of cells
S is the set of cell states
I is the intention set
eff: Φ × 2^(C×I) → Φ is the effect function, where Φ = { φ: C → S }
The set C contains all cells from the CS. At each point in time, every cell
must have a definite state from the set S. The cell space state is the state of the
entire CS, and it is defined by a function φ: C → S. For each cell c ∈ C, its state is
given by φ(c). The set Φ, containing all possible cell space states, defines the
environment view of the CSP, which is the way the CSP perceives the CS.
The I and eff properties define the way the CSP acts on the CS. The set I
defines the possible intentions the CSP can manifest for any given cell. Each time
the CSP intends to change the CS state, it manifests its intentions by sending an
event its ∈ 2^(C×I). That event consists of a set of pairs (c, i) ∈ C×I. Each pair
indicates an intention i for a cell c. Once the its event reaches the environment, it
causes a state transition on the CS. The effect function defines the next cell space
state as eff(φcurr, its), where φcurr is the current cell space state.
Let CS = ⟨C, S, I, eff⟩ be a cell space. Let N, B, Δt be cell space process
parameters, where
N: C → C^|N|, where |N| is the neighborhood size and (∀c ∈ C)(c ∉ N(c)), is the
neighborhood function
B: S × S^|N| → 2^I is the behavior function
Δt is the time period
The cell space process for CS with parameters N, B, Δt is defined as
CSP[N, B, Δt] = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩
where
S = 2^(C×I)
X = ∅
Y = 2^(C×I)
E = Φ
P = ∅
δint(s, φ) = ⋃c∈C {(c, i) | i ∈ B(φ(c), (φ(n1), φ(n2), … )) and N(c) = {n1, n2, … }}
δext((s, e), φ, evt) = s
λ(s) = {s}
ρ(s) = (∅, ∅)
ta(s) = Δt
The neighborhood function N is defined exactly as in the basic CA,
described in Section 2.2.2. The behavior function B defines the CSP’s intentions
for a cell, given the state of that cell and its neighbors. For notational simplicity, the
term |N| is used to denote the neighborhood size, which is assumed constant for
all cells, as in the basic CA definition. Finally, Δt defines the periodicity with which
the CSP sends its intentions to the CS.
Considering the CSP dynamics in the Process-DEVS formalism, the CSP
periodically manifests its intentions by taking as input a CS state φ. The time
advance function ta is a constant function that always outputs Δt. This causes
the internal transition function δint to be invoked every Δt time units. This function
takes as input the CS state and generates a set of intentions. These intentions are
stored in the internal state of the CSP. The output function λ makes every set of
intentions produced by δint be sent as an event to the environment. Note that the
output function returns the internal state itself, which is possible in this case
because S = Y.
The CS and the CSP interact in a cyclic way. In each cycle, the CSP reads
the CS state to compute its intentions, which are sent to the environment as an
event. When the environment receives this event, a new state is computed for the
CS. This cyclic procedure produces a series of cell space states φ0, φ1, φ2, … ,
which is given by the recurrence relation φt+1 = eff(φt, δint(s, φt)).
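As a concrete illustration of this cycle, the sketch below iterates the recurrence for a tiny one-dimensional cell space whose single behavior rule copies each cell’s left neighbour. The CS, the CSP parameters and every name are illustrative assumptions, not part of the formalism.

```python
def eff(phi, intentions):
    """Effect function: each intention (c, s) sets cell c to state s."""
    nxt = dict(phi)
    for cell, new_state in intentions:
        nxt[cell] = new_state
    return nxt

def neighbourhood(cell, size):
    """N(c): a single left neighbour, wrapping around the edge."""
    return ((cell - 1) % size,)

def behaviour(own_state, neighbour_states):
    """B: intend to copy the left neighbour's state."""
    return {neighbour_states[0]}

def csp_delta_int(phi, size):
    """Union, over all cells, of the intentions produced by B."""
    its = set()
    for c in range(size):
        ns = tuple(phi[n] for n in neighbourhood(c, size))
        for i in behaviour(phi[c], ns):
            its.add((c, i))
    return its

# Two turns of the cycle: a lone 1 travels to the right.
size = 4
phi = {0: 1, 1: 0, 2: 0, 3: 0}
phi = eff(phi, csp_delta_int(phi, size))   # -> {0: 0, 1: 1, 2: 0, 3: 0}
phi = eff(phi, csp_delta_int(phi, size))   # -> {0: 0, 1: 0, 2: 1, 3: 0}
```

Note how the behavior (neighbourhood and behaviour) and the physical representation (phi and eff) stay in separate functions, mirroring the CSP/CS split.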
Besides increasing modularity, separating a cellular dynamic model into a
CS and a CSP does not sacrifice expressive power when compared to traditional CA.
The following theorem proves that.
Theorem 1: For any CA = ⟨C, S, N, T⟩, one can define an equivalent CS-
CSP pair that produces the same sequence of cell space states.
Proof: Let us define a cell space CSca = ⟨C, S, S, eff⟩, where
eff(φt, itts) = φt+1 | φt+1(c) = s, if ∃!(c, s) ∈ itts
                      = φt(c), otherwise.
Additionally, let us define a cell space process CSPca[N, bhv, 1], where
bhv(φt(c), (φt(n1), φt(n2), …)) = { T(φt(c), (φt(n1), φt(n2), …)) }.
Lemma: The CSca-CSPca pair produces the same sequence of cell space
states as CA.
Proof:
Let CSca = ⟨C, S, I, eff⟩ and CSPca[N, bhv, 1] = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩.
Given that CSca is at state φt, the next state φt+1 is given by
φt+1 = eff(φt, δint(s, φt))
Since the CSPca behavior function bhv always returns a unitary set, it
follows from the definition of δint for CSP’s that δint(s, φt) always contains exactly
one pair (c, s) for each cell c ∈ C; therefore, it follows from the definition of the
eff function:
φt+1(c) = s | (c, s) ∈ δint(s, φt)
From the definition of δint and bhv:
φt+1(c) = T(φt(c), (φt(n1), φt(n2), … )), where N(c) = { n1, n2, … }
This is exactly the same recurrence relation that defines the state sequence of
CA, as defined in Section 2.2.2.
The dissociation between physical state and behavior increases the
modularity of a cell-based simulation. This modularity brings reuse benefits, since
the same CSP can be used with different CS and vice-versa. Another
modularization benefit of this approach is the clear separation between the
behavior implementation and the internal data structures of the CS. This
separation is somehow imposed by Process-DEVS and is in accordance with the
principles of process-oriented simulation, discussed in Section 3.2.4.
The separation between behavior and physical representation is also an
important step towards integrating CS-based dynamic models with other modeling
formalisms. This is achieved by having multiple processes interacting with the
same CS, as in some agent-oriented simulations. The next section discusses the
possibility of using multiple CSP’s with the same CS. A full example of how a
CSP can be integrated with other kinds of processes in a modular way is given in
chapter 1.
4.2.3 Composition of Cell Space Processes
In order to meet the requirement of realistic simulation models, cell space
processes (CSP) tend to become more complex. However, it is not desirable that
complex phenomena be modeled by monolithic complex CSP’s. Instead, it would
be much better if they were defined by a composition of simpler CSP’s. This kind
of modularity has three main benefits: (1) it facilitates model reuse; (2) it makes
models more intelligible; (3) it makes models easier to change and maintain. All
these features are important in the context of training game design.
In order to exemplify the problem, let us consider the case of an emergency
situation where some amount of oil has leaked into the sea. In this specific
example, the oil position is modeled as a CS, where each cell has a real number
property, indicating the amount of oil in it. There are several factors that may alter
the oil configuration in the CS, such as the leak itself, dispersion on water,
evaporation, containment by barriers, recovery by pumps at sea, coast hitting,
coast cleaning procedures, and so on. All these factors could be modeled as a
single monolithic CSP. However, it is more desirable that each of them is
modeled as an individual CSP and then composed to produce the overall behavior.
As an illustrative example, consider the case of integrating two CSP’s that
represent two of the above factors: CSPdisp, which models the oil dispersion, and
CSPcont, which models the oil containment caused by containment barriers. For
compatibility reasons, we shall assume that both CSP’s use the same time step Δt
and both are started at the same time instant. Therefore, they always produce
events at the same time instants.
CSPdisp models the oil dispersion by producing intentions of the form
(c, move(a, d)), where c is the cell where the oil is moving from, a is a positive
real indicating the amount of oil and d ∈ {N, NE, E, SE, S, SW, W, NW} is the
direction that indicates which of the eight cells in the Moore neighborhood will
receive the oil. The function dest: C × {N, NE, E, SE, S, SW, W, NW} → C
computes the destination cell from the origin cell and a direction. Each intention
causes the amount of oil at cell c to be decreased by a, and the amount of oil in
cell dest(c, d) to be increased by the same amount.
CSPcont models the oil containment. It produces intentions of the form
(c, block), where c is a cell which is blocked by a containment barrier and,
therefore, should not receive any amount of oil.
The problem in this example is how to compose these two processes in a
way that they produce the right oil behavior while keeping them independent and, if
possible, unaware of each other.
Parallel Composition
The simplest way to compose these two processes is to arrange them in a
parallel pattern, as illustrated in Figure 4.7. This way, each CSP updates the CS
independently, one after the other. Since both CSP’s send events to the
environment at the same time instants, the tie breaking function of the simulation
will determine which one gets executed first. The resulting series of CS states is
determined as in Figure 4.7. The first CSP reads the CS state φt and applies a
transition to the cell space, leaving it in an intermediate state φint. The second CSP
reads this intermediate state and generates another transition that will finally
produce the next state φt+1.
Figure 4.7 – Parallel composition of CSP’s
This simple form of CSP composition has three limitations:
(1) There is no guarantee that a third process will not interfere with the
intermediate CS state φint.
(2) If the events of the two CSP’s interfere with each other, it may not
be possible for the second CSP to undo the effects of the first CSP
because the initial CS state φt was lost in the transition to the
intermediate CS state φint.
(3) This form of composition is not closed. Given two CSP’s composed
in parallel, it may not be possible to define one single CSP that will
produce the same effects.
Considering this simple parallel composition of CSPdisp and CSPcont, and
assuming that CSPdisp alters the environment before CSPcont in each cycle, the
weakness of this composition pattern is felt immediately. For instance, consider
that, in a given cycle, CSPdisp generates an intention (c, move(a, d)) and, on the
same cycle, CSPcont generates an intention (dest(c, d), block). These intentions are
conflicting, since CSPdisp wants to move oil into a cell that is blocked by CSPcont.
In this case, since CSPdisp has a higher priority, the oil would move to cell
dest(c, d), causing an inconsistent intermediate state. That may generate a
considerable problem because a third process might access this intermediate state.
Another problem is that this inconsistency must be resolved when CSPcont
generates the event (dest(c, d), block). The best solution would be to move the oil
back to its original cell, but that is not possible because the initial state is no
longer known, and it is impossible to determine the origin cell.
This brief example illustrates two of the problems identified for this parallel
form of CSP composition. The third problem is that it is not closed. This is easily
proved by the following counter-example:
Consider a CS = ⟨C, S, I, eff⟩ where:
C = {c} is the set of cells, where c is the only cell in this cell space
S = ℝ is the state set. The cell c has a real number defining its internal state
I = {increase} is the intention set with only one possible intention
eff(φ, ∅) = φ
eff(φ, {(c, increase)}) = φ′, where φ′(c) = φ(c) + 1
This CS has a single cell c, which has a real number as its state, and accepts
a single intention increase. When received, this intention increases the cell state
by one.
Consider also a CSP[N, B, Δt], where N(c) = [], B(s, ns) = {increase} and
Δt = 1, where [] represents a tuple of size zero. This CSP always outputs the
intention increase, regardless of the previous CS state.
Now construct a simulation with the CS just defined as part of the
environment and two exact copies of this CSP, composed in parallel, as in Figure
4.7. It is easy to check that, at each time step, the state of c will be increased by
two. However, it is impossible to write a single CSP that will cause the state of c
to be increased by two because the CS definition only allows one possible
intention increase, which increases the value of c by only one.
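The counter-example can be replayed in code. In the sketch below (all names illustrative), parallel composition delivers the two CSP events to the environment separately, so eff is applied twice per step; no single event of this CS could raise the state by two, since the intentions in one event form a set.

```python
def eff(phi, intentions):
    """Effect function of the single-cell CS: each increase adds one."""
    nxt = dict(phi)
    for cell, intent in intentions:
        if intent == "increase":
            nxt[cell] = nxt[cell] + 1
    return nxt

def csp_output(phi):
    """Both identical CSPs always intend a single increase."""
    return {("c", "increase")}

phi = {"c": 0.0}
# Parallel composition: the two events reach the environment one after the other.
phi = eff(phi, csp_output(phi))    # first CSP's transition
phi = eff(phi, csp_output(phi))    # second CSP's transition: state is now 2.0
```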
Composition with a Conflict Resolver
In order to overcome the problems with pure parallel composition of CSP’s,
we propose the use of a conflict resolver (CR), as depicted in Figure 4.8. In order
to formally define CR, we use the same notation for lists introduced in Section
3.3.1 (for the definition of I/O processes). The CR is a process, defined as
CR[n, rf] = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩, where the parameters are
n is the number of CSP’s in the composition
rf: ITT^n → ITT is the conflict resolver function, where ITT = 2^(C×I)
and the Process-DEVS process properties are
S = ITT* × {0, 1, … , n}, where ITT* is a list of elements of ITT
X = Y = ITT
E = Φ
P = ∅
δint((itts, i), φ) = ([], 0)
δext(((itts, i), e), φ, ittsnew) = ([ittsnew | itts], i+1)
λ(([itt1, itt2, … , itti], i)) = rf(itt1, itt2, … , itti) if i = n
                              = ∅ if i ≠ n
ρ(s) = (∅, ∅)
ta((itts, i)) = 0 if i = n
             = ∞ if i ≠ n
This process stores the intentions issued by different CSP’s. When it has
received the intentions from all n CSP’s, it applies the conflict resolver function to
determine the final intentions of this set of CSP’s. Since the CR process needs
to receive n intention events before computing the final result, it is important that
all CSP’s work at the same frequency (i.e. all of them must have the same Δt).
Hence, it is guaranteed that, in a sequence of n received events, there will be one
from each CSP.
Figure 4.8 – Composition of CSP’s with a conflict resolver
In this form of composition, the CSP’s are totally unaware of the CR. This
helps keep a high level of modularity. All of them take as input the same CS
state to produce their intentions. No intermediate CS states are produced.
Therefore, they act as a single CSP from the point of view of the CS, which
receives a set of intentions every Δt time units.
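The buffering behavior of CR can be sketched as follows. The class and method names are illustrative; the buffer is appended to rather than prepended, which does not matter for a resolver function that treats its arguments uniformly.

```python
class ConflictResolver:
    """Collects one intention event per CSP, then applies rf (illustrative)."""

    def __init__(self, n, rf):
        self.n, self.rf = n, rf
        self.buffer = []                 # corresponds to the list itts

    def delta_ext(self, itts_new):
        self.buffer.append(itts_new)     # store the incoming intentions

    def delta_int(self):
        self.buffer = []                 # reset to ([], 0)

    def ta(self):
        # Passive until all n events have arrived, then fire immediately.
        return 0 if len(self.buffer) == self.n else float("inf")

    def output(self):
        if len(self.buffer) == self.n:
            return self.rf(*self.buffer)
        return set()
```

With rf taken as plain set union, for instance, two buffered intention events are merged into one and the resolver resets for the next cycle.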
The use of a CR provides a way of solving the previously mentioned oil
dispersion and containment problem, while keeping the logic of both CSP’s
separate. This CR is defined as CR[2, oil_cr], where
oil_cr(ittsdisp, ittscont) =
{(cfrom, move(a, d)) ∈ ittsdisp | (∀(cto, block) ∈ ittscont)(cto ≠ dest(cfrom, d))}
where ittsdisp and ittscont are the intentions generated by the oil dispersion and oil
containment processes respectively. This CR will act as an intention filter and will
let pass only the oil move intentions that do not attempt to put oil on blocked
cells. We have therefore created the abstraction of a CSP that handles both
dispersion and containment of oil and that is defined by composition of two
individual CSP’s, which are totally unaware of each other.
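Assuming cells addressed as (x, y) pairs and directions encoded as coordinate offsets (illustrative choices, not fixed by the thesis), the filtering performed by oil_cr can be sketched as:

```python
def dest(cell, direction):
    """Destination of a move: origin cell plus a direction offset."""
    (x, y), (dx, dy) = cell, direction
    return (x + dx, y + dy)

def oil_cr(itts_disp, itts_cont):
    """Keep only the move intentions whose destination cell is not blocked."""
    blocked = {cell for (cell, kind) in itts_cont if kind == "block"}
    return {(cell, ("move", amount, d))
            for (cell, (tag, amount, d)) in itts_disp
            if tag == "move" and dest(cell, d) not in blocked}
```

A move into a blocked cell is silently dropped; every other intention passes through unchanged.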
In addition to allowing a higher degree of modularity, the composition of
CSP’s via CR is also closed. This means that, for any composition of n CSP’s, it is
possible to write one single CSP with an equivalent behavior. This is easily
verifiable:
Theorem 2: For any set of CSP’s {CSP1[N1, B1, Δt], CSP2[N2, B2, Δt], … ,
CSPn[Nn, Bn, Δt]} composed with a conflict resolver CR[n, crf], it is possible to
write a single CSPcomp[Ncomp, Bcomp, Δt] with equivalent behavior.
Proof:
Let us define
Ncomp(c) = (nc11, nc12, ... , nc21, nc22, … , ncn1, ncn2, ... )
where Ni(c) = (nci1, nci2, ... )
Bcomp(sc, (s11, s12, ... , s21, s22, … , sn1, sn2, ... )) =
crf(B1(sc, (s11, s12, ... )), B2(sc, (s21, s22, ... )), … , Bn(sc, (sn1, sn2, ... )))
where all states needed as inputs to all n behavior functions are also inputs to
Bcomp. This is easily verified because, by definition, Ncomp contains all cells of any
of the n neighborhood functions.
For any CS state φ, CSPcomp outputs the same intentions as the composition
of the individual CSP’s using CR as conflict resolver. This is so because the
behavior function of CSPcomp receives as input precisely the intentions of each
individual CSP, and outputs those intentions with conflicts resolved by the crf
function, which is precisely the definition of how the conflict resolver works.
Closure under composition is indeed an interesting property of any
simulation formalism that strives for modularity and reuse (Zeigler et al. 2000).
Besides the reuse of sub-models, it also allows cascading composition across several
levels of abstraction. In fact, the composition of CSP’s with a CR solves all
three problems identified with pure parallel composition. Therefore, it should be
seen as a more reliable form of CSP composition.
4.3 Multi-Agent Systems
In the discussion of Section 3.2.2, it was mentioned that agents should be
modeled as specialized simulation elements. Agents are commonly modeled as
cognitive entities which sense their surrounding environment through sensors and
act on it through their actuators. In the middle of this process is the reasoning
phase, which can be further divided into smaller pieces, such as in the Jason
toolkit described in Section 2.3.1. Most multi-agent simulation toolkits provide
some degree of modularization. Therefore, when implementing agents on top of
Process-DEVS, it is highly desirable to keep this modularity.
However, there is no consensus on what the internal elements of
the agent reasoning process should be. Therefore, the next sections will simply describe a
framework for modeling the basic notions of sensing, acting and reasoning. More
detailed structures can be implemented on top of that.
Multi-agent simulations are often modeled in discrete time formalisms
(Theodoropoulos et al. 2009). However, this is often pointed out as a limitation
(Michel et al. 2009). Therefore, the proposed framework will keep the more
flexible discrete event paradigm of Process-DEVS.
4.3.1 Modular Agent Architecture
Agents interact with their environment. The interaction cycle is often
composed of three main steps: sensing, reasoning and acting. Usually, multi-agent
simulation frameworks decompose the reasoning stage into more detailed parts.
However, there is no consensus on which is the right way of doing so. The main
reason for this is the different kinds of behaviors intended for agents. Sometimes
agents have a very simple reactive behavior and sometimes they are required to
reason logically, remember facts, formulate beliefs and try to achieve goals.
Therefore, since agent reasoning is not the focus of this work, we shall consider it
a black box that receives information from sensors and sends its intentions to
actuators.
Reasoning, sensing and acting are modeled by processes, which are named
respectively, reasoning processes, sensor processes and actuator processes, as
depicted in Figure 4.9. Sensor processes read the state of their environment and
produce events representing sensations. Reasoning processes take these
sensations as input and produce intentions. The intentions are sent to actuator
processes, which will perform the actions and alter the environment.
Figure 4.9 – An agent with its behavior decomposed into sensor, reasoning and actuator processes
The sensor process is defined as
Psen[E, Y, σ] = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩
where
E is the set of states in which the sensor can perceive the environment
Y is the set of sensations, which is the same as the output set
σ: E → 2^Y is the function that, given an environment state, returns a set of
corresponding sensations
S = 2^Y
X = {trigger}
P = ∅
δint(s, env) = ∅
δext((s, e), env, trigger) = σ(env)
λ(s) = ∅ if s = ∅
     = s if s ≠ ∅
ρ(s) = (∅, ∅)
ta(s) = ∞ if s = ∅
      = 0 if s ≠ ∅
In its internal state, the sensor process stores the sensations produced from the
last perceived environment state. Whenever it is triggered, it invokes the σ function to produce a set of
sensations, which are output at the same time instant as the triggering. The
sensor process is triggered when it receives the trigger event. Any process can
send a trigger event to the sensor process. For example, consider a proximity
sensor which periodically checks the distance of a given object to the agent.
Whenever this distance is less than a threshold radius, it produces a sensation of
nearby object presence. In this case, there might be a clock process, whose only
behavior is to periodically send a trigger event to the sensor process.
This triggering mechanism allows one to build a simulation where
sensations are only computed when they are really needed. For instance, an agent
may be in a situation where the sensations of a particular sensor do not affect
its decisions. In that case, the simulation may be modeled in such a way that this
specific sensor is not triggered in those situations, thereby avoiding unnecessary
processing.
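The proximity-sensor example can be sketched as follows; the environment encoding, the sensation format and all names are illustrative assumptions.

```python
import math

def make_proximity_sigma(agent_pos, radius):
    """Build a sigma: given an environment state, return a set of sensations."""
    def sigma(env):
        return {("nearby", name)
                for name, pos in env["objects"].items()
                if math.dist(agent_pos, pos) < radius}
    return sigma

class SensorProcess:
    """Transient internal state holds the sensations waiting to be output."""

    def __init__(self, sigma):
        self.sigma = sigma
        self.state = set()

    def delta_ext(self, env, event):
        if event == "trigger":
            self.state = self.sigma(env)   # ext((s, e), env, trigger) = sigma(env)

    def delta_int(self):
        self.state = set()                 # int(s, env) = empty set

    def ta(self):
        # Passive while empty; output pending sensations immediately otherwise.
        return 0 if self.state else float("inf")

    def output(self):
        return self.state                  # lambda outputs the stored sensations
```

A clock process would simply deliver "trigger" events to delta_ext at a fixed period.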
The sensations produced by sensor processes are used as input by the
reasoning process, which shall not be defined formally because there are many
different ways to model the internal reasoning of agents. The important fact here
is that the reasoning process produces intentions, which are sent to the actuator
processes. The role of an actuator process is to take intentions and produce the
actual actions that are performed in the environment. The actuator process is
defined as
Pact[Y, E, A] = ⟨S, X, Y, E, P, δint, δext, λ, ρ, ta⟩
where
Y is the set of events that this actuator may generate to the environment
E is the set of states in which the actuator can perceive the environment
A = {it1, … , itn}, where iti: E → 2^Y, is the set of intentions
S = 2^Y
X = A
P = ∅
δint(s, env) = ∅
δext((s, e), env, it) = s ∪ it(env)
λ(s) = ∅ if s = ∅
     = s if s ≠ ∅
ρ(s) = (∅, ∅)
ta(s) = ∞ if s = ∅
      = 0 if s ≠ ∅
The behavior of the actuator process is very similar to that of the sensor
process, only with the flow of events in the opposite direction. It is triggered when
it receives an intention event. When that happens, it computes the proper events to
send to the environment and sends them at the same time instant. Like the sensor
process, it uses a transient internal state to implement that behavior.
Each intention defines a function that takes as input the environment state.
This is necessary so that the actuator may check the preconditions that need to
hold before a particular intention can be materialized into a concrete environment
state change. For example, an agent cannot move to a location which is occupied
by another agent, even if it intends to do so. Another reason for checking the
environment is that the effects of an intention may depend on external factors. As
an example, consider an agent whose movement is affected by the wind. Each
time it issues a move intention in a given direction, the actual displacement of its
position will depend on the wind direction and intensity.
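The wind-affected move can be sketched as one intention it: E → 2^Y in the sense of the definition above. The environment encoding, the drift rule and the event names are illustrative assumptions.

```python
def move_intention(direction):
    """Build an intention whose concrete effect depends on the environment."""
    def it(env):
        dx, dy = direction
        wx, wy = env.get("wind", (0, 0))        # external factor: wind drift
        target = (dx + wx, dy + wy)
        if target in env.get("occupied", set()):
            return set()                         # precondition fails: no event
        return {("displace", target)}
    return it
```

The same intention yields different events under different winds, and yields none at all when the target position is occupied, mirroring the precondition check performed by the actuator.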
The use of actuators, as well as sensors, helps isolate the core logic of
agent reasoning. This way of modularizing the behavior of an agent allows one to
express the agent reasoning only in terms of sensations and intentions, without
worrying about the interaction with the environment. That does not mean that the
agent reasoning cannot be further modularized. It has been presented here as a
single black-box process, but it could very well be decomposed into a set of more
specialized processes.
This modular architecture, besides allowing reuse of agent parts, can also
help improve simulation performance through sensor and actuator sharing among
agents. For example, if there are lots of agents that need to sense the same
environment data with the same periodicity, they can share a single sensor
process. This is accomplished simply by changing the process coupling structure
in the simulation, without touching the definition of the sensor process behavior.
This is also true for the actuators.
4.3.2 Simulation of Multi-Agent Systems
A multi-agent system is a system where multiple agents interact with each
other. They interact either through the environment or directly, via message
exchanging. Considering the interaction through the environment, there is no
difficulty in composing a set of agents in a multi-agent system. It is only
necessary to put them in the same simulation, sensing and acting on the same
environment. On the other hand, in order to allow message exchanging, it is
necessary to provide some means of taking a message from an agent to another
one. There are many ways to accomplish this. Figure 4.10 depicts the simplest
case, where the agents exchange messages directly, by sending events to each
other.
Figure 4.10 – A multi-agent system simulation
Another possible way of message exchanging is to provide each agent with a
mailbox as part of the environment state. This has the advantage of keeping agents
uncoupled, i.e. not sending events directly to one another, which makes it easier to
implement systems where agents are created and destroyed frequently. However,
this way of designing a multi-agent system can be counter-intuitive, as messages
are usually not thought of as part of the physical environment.
A more structured message exchanging mechanism is to design a specialized
process or set of processes to handle message routing between agents. However,
since there is no consensus on how messages should be exchanged in multi-agent
systems, it is difficult, and maybe impossible, to point to a generic solution that will
fit all cases. More likely, the best design will depend on the objectives of the
targeted system.
4.4 An Informal Discussion on Process Patterns
In general, modularity solutions lead to structure issues. Indeed,
modularization is the process of dividing objects into smaller pieces, which should
then be structured somehow to produce the desired result. This chapter has
presented ways to implement some popular dynamic modeling formalisms on top
of Process-DEVS, with emphasis on process modularity and composition. Based
on those cases, this section describes some generic solutions for process
composition that may also apply to a number of other different cases.
The solutions are presented in the form of process patterns, by analogy with
object-oriented design patterns (Gamma et al. 1995). These patterns will describe
a number of ways in which to structure processes in a simulation to solve a
particular problem while keeping a high degree of modularity and flexibility.
These patterns are designed for Process-DEVS, but some of the ideas are generic
enough so that they can be applied to other simulation frameworks as well.
The patterns in this section are not formalized as those in the previous
sections were. Indeed, since these patterns are expected to be applicable to
simulations in general, formalizing them would require broader experience with
Process-DEVS in various simulation fields. At this point, it is not clear to what
extent the patterns should restrict the nature of the processes.
Chapter 5 describes examples and discusses the benefits of the presented
patterns in the field of emergency simulation.
4.4.1 Parallel Pattern
The parallel pattern is the most straightforward pattern. It consists of
isolated processes making changes to the environment without any direct
interaction between them, as depicted in Figure 4.11. This pattern has been
extensively used for multi-agent systems simulation, where agents interact only
through the environment, such as insect colonies (Bonabeau et al. 1999) and
pedestrian behavior (Bandini et al. 2009).
Figure 4.11 – Parallel pattern
The main benefit of this pattern is that the processes do not directly depend
on each other. There is no direct communication between them. Therefore it is
easier to reuse any of them in another simulation with a different structure.
This pattern can be used only if the logic of the modeled phenomenon can
be broken into simple and independent processes. Note that the processes do not
necessarily represent the behavior of physically separate entities. They may also
represent different aspects of the behavior of a single physical entity. For
example, consider an oil slick on the surface of the sea. This oil moves according
to the weather conditions and it also gets recovered by pumps placed on recovery
boats. These two aspects, namely the dispersion and recovery, can be modeled as
completely independent processes.
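The oil slick example can be sketched as two processes that never communicate and only read and write the shared environment. The dispersion and recovery rules below are toy placeholders, not the models used later in this thesis; all function names are assumptions:

```python
# Sketch of the parallel pattern: two independent processes acting on a
# shared one-dimensional cell space, each unaware of the other.

def dispersion_step(cells):
    """Move half of each cell's oil toward the next cell (toy rule)."""
    moved = [0.0] * len(cells)
    for i, amount in enumerate(cells):
        if i + 1 < len(cells):
            moved[i] -= amount / 2
            moved[i + 1] += amount / 2
    return [c + m for c, m in zip(cells, moved)]

def recovery_step(cells, pump_cell, rate):
    """Remove up to `rate` units of oil from the pump's cell (toy rule)."""
    out = list(cells)
    out[pump_cell] = max(0.0, out[pump_cell] - rate)
    return out

cells = [8.0, 0.0, 0.0]
cells = dispersion_step(cells)        # dispersion acts, unaware of recovery
cells = recovery_step(cells, 1, 3.0)  # recovery acts, unaware of dispersion
assert cells == [4.0, 1.0, 0.0]
```

Because neither function references the other, either one could be reused alone in a different simulation, which is the main benefit claimed for this pattern.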
4.4.2 Interference Pattern
Special care should be taken when using the parallel pattern. Some
conceptual problems may arise if some behavior logic of the processes is put into
the environment only to eliminate direct communication between them.
Consider for example the case of two characters moving through the
environment. One of them is stronger than the other. When both want to move to
the same place, the stronger prevails and the weaker is pushed to some adjacent
place. One could model this situation with the parallel pattern where each
character sends events to the environment when they wish to move. In this case,
the environment should treat the case of collision and apply the rule of the
stronger. This design is poor because part of the logic that controls the
position of the characters would be modeled inside the environment.
However, it is still possible to keep these two interfering processes separate
and unaware of each other with the interference pattern. This pattern attempts to
increase the modularity of processes that directly alter the environment by sending
events to it. The idea is to treat the output of the interfering processes as intentions
instead of effects on the environment. If the intentions of two or more processes
conflict with each other, a resolver process will define which effects should be
applied. This is accomplished by redirecting the outputs to this resolver process as
depicted in Figure 4.12. Hence, the events are intercepted by the resolver, which
treats them as mere intentions. It then applies the interference logic, produces the
resulting effects and sends them as events to the environment.
Figure 4.12 – Interference pattern
In the example of the two moving characters, both processes would send
their movement intentions as events to the resolver. Whenever the resolver detects
that both want to move to the same place, it treats the conflict and sends the
correct movement effects to the environment. Thus, the environment is kept
unaware of any logic specific to the movement of the characters.
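A minimal sketch of such a resolver for the two-characters example follows. The data representation (intention dictionaries, an adjacency table for pushed characters) is an illustrative assumption:

```python
# Sketch of the interference pattern: movement events are treated as
# intentions, and the resolver applies the "stronger prevails" rule
# before emitting the final effects on the environment.

def resolve(intentions, strengths, adjacent):
    """intentions: {character: target_cell}; returns {character: final_cell}."""
    effects = {}
    by_target = {}
    for who, target in intentions.items():
        by_target.setdefault(target, []).append(who)
    for target, contenders in by_target.items():
        # The strongest contender gets the cell; the others are pushed
        # to an adjacent cell.
        contenders.sort(key=lambda w: strengths[w], reverse=True)
        effects[contenders[0]] = target
        for loser in contenders[1:]:
            effects[loser] = adjacent[target]
    return effects

intentions = {"weak": (1, 1), "strong": (1, 1)}
strengths = {"weak": 1, "strong": 5}
adjacent = {(1, 1): (1, 2)}          # where a pushed character ends up
effects = resolve(intentions, strengths, adjacent)
assert effects == {"strong": (1, 1), "weak": (1, 2)}
```

The two character processes only emit intentions; all knowledge of collisions and relative strength lives in the resolver, keeping the characters and the environment free of it.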
The benefit of this pattern is that, although the processes interfere with each
other, they are still kept totally unaware of each other. The only element that is
aware of all interfering processes and the interference logic is the resolver process.
For the other processes, no dependencies between them are introduced and they
can be reused at will. Therefore, this pattern is useful when two or more processes
with complex behavior interfere with each other in the way they alter the
environment.
The interference pattern was already used in Section 4.2.3 to model the
composition of cell space processes. Here the pattern is generalized to any kind
of process. In fact, a cell space process can be composed with different kinds of
processes using the interference pattern.
The trick of this pattern is to transform an acting behavior into an
intentional behavior. The events output by the processes represent not actions that
alter the environment state, but intentions that are sent to the resolver process.
Once the conflicts are treated, the resolver process outputs the actions to the
environment. This concept is also used in multi-agent system simulation
frameworks, where the acting behavior of an agent is separated from its
intentional behavior by actuators, as shown in Section 4.3.1.
4.4.3 Composite Pattern
If a reasonably complex process cannot be broken into fully independent
parts, the composite pattern may be used to break it into less complex
interdependent parts. The idea is to model complex behavior in hierarchical form
and distribute its tasks among a hierarchy of processes. One root process
represents, at the highest level of abstraction, a whole set of processes to the
external world. However, it does not implement all the complex behavior, but
rather delegates lower level tasks to its children. Figure 4.13 illustrates the pattern.
One simple and common example is a process that controls a moving object in the
environment. Among other behavioral aspects, the controller process is
responsible for implementing the object’s motion. Once it decides that the object
should move to a given location through a specific trajectory, it forks a move
process that actually changes the object's position over time until the destination
is reached. Hence, the parent process can focus on higher-level primitives, while
the lower-level details of displacement are handled by its child, the move process.
Note that the structure of the hierarchy of processes is not rigid. It may
change with time. In the just mentioned example, when the object has reached its
destination, the move process can be finished.
Figure 4.13 – Composite pattern
The workflow process defined in Section 4.1.3 is a good example of this
pattern. In that case, the parent process represents the workflow itself and it
delegates the execution of the actions to children processes. The workflow process
simply defines which processes to fork and when, by following the workflow
logic.
The benefit of this pattern is modularization and reuse. Although we cannot
say that the processes in the hierarchy are totally independent, they certainly help
in breaking a complex dynamic model into smaller and simpler sub-models. These
sub-models can then be reused when composing another complex
behavior. For example, in the case mentioned above, the move process can be
reused by another controller process that also implements the motion of some
other kind of object.
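The controller/move example can be sketched as follows. The class names and the step/fork protocol are illustrative assumptions; the point is only that the child process is created on demand and removed once it finishes, so the process hierarchy is mutable:

```python
# Sketch of the composite pattern: a controller forks a short-lived child
# ("move") that handles low-level displacement and is discarded when the
# destination is reached.

class MoveProcess:
    def __init__(self, position, destination, speed):
        self.position, self.destination, self.speed = position, destination, speed
        self.finished = False

    def step(self):
        # Advance toward the destination by at most `speed` per step.
        delta = self.destination - self.position
        if abs(delta) <= self.speed:
            self.position = self.destination
            self.finished = True
        else:
            self.position += self.speed if delta > 0 else -self.speed

class ControllerProcess:
    def __init__(self, position):
        self.position = position
        self.child = None            # mutable hierarchy: child exists on demand

    def go_to(self, destination):
        self.child = MoveProcess(self.position, destination, speed=2)

    def step(self):
        if self.child is not None:
            self.child.step()
            self.position = self.child.position
            if self.child.finished:
                self.child = None    # the move process is finished and removed

ctrl = ControllerProcess(position=0)
ctrl.go_to(5)
steps = 0
while ctrl.child is not None:
    ctrl.step()
    steps += 1
assert ctrl.position == 5 and steps == 3    # 0 -> 2 -> 4 -> 5
```

A different controller could reuse MoveProcess unchanged, which is the reuse benefit discussed above.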
This way of composing a complex behavior is somewhat different from that
of the coupled DEVS model described in Section 2.2.1. In that case, composition
is achieved through aggregation, while here it is achieved through process forking.
It is not clear whether either one is better; this comparison could be the object of
future work. However, child forking is certainly more flexible, because its
structure is mutable and defined only while the simulation is actually
running.
4.5 Summary
This chapter formalized a number of ways to implement some common
dynamic modeling formalisms on top of Process-DEVS. Section 4.1 presented a
way in which workflows can be mapped to Process-DEVS. In fact, in the way
workflow processes were defined, they represent a form of process composition
where the workflow logic defines which processes are created and when. Section
4.2 discussed the issue of modularity in the domain of cell space processes and
presented a formalism for dealing with it on top of Process-DEVS. It also showed
how to compose cell space processes out of smaller pieces. Section 4.3 presented
a formal framework in which it is possible to model multi-agent systems on top of
Process-DEVS, with support for sensors and actuators.
The solutions presented in the first three sections of this chapter suggested
some patterns in which to structure processes with interesting modularity
properties. Section 4.4 informally discussed these patterns, leading to interesting
conclusions that still need further experiments to be fully validated.
5 The InfoPAE Use Case
This chapter describes two software systems implemented with the objective
of improving the efficiency of emergency response actions in the oil and gas
industry. The first system is a plan simulator system that is responsible for
simulating the results of contingency plans stored in a database of emergency
response plans. It allows its users to import emergency scenarios and associated
response plans. The simulation engine is used to test the efficiency of the response
plans. The second system is a training game, which simulates emergency
situations in order to train people to make efficient decisions in such situations.
The two systems are based on the same simulation engine, which
implements the Process-DEVS framework described in the previous chapters. This
architecture shows how simulation elements can be reused by different systems.
Section 5.1 gives an overview of planning for emergency situations. Section
5.2 describes the domain of contingency planning for oil leaks, for which the two
systems were designed. Section 5.3 describes the simulation models in terms of
the Process-DEVS formalism. Section 5.4 describes the architecture of the two
systems. Section 5.5 describes the time management technique developed for the
systems. Finally, Section 5.6 concludes the
chapter and reports the achieved results.
5.1 Planning for Emergency Situations
An emergency situation arises when an unexpected incident occurs
and its potential consequences involve damage to human health and to the
environment. In such situations, it is important not only to respond quickly in
order to minimize the damage, but also to conduct the response in a well-
organized manner. The complexity of emergency management, coupled with the
growing need for multi-agency and multi-disciplinary involvement in these
situations, has increased the need for standardized methodologies. In particular, the
Incident Command System (ICS) (Bigley and Roberts 2001) standard is being
increasingly adopted by public safety and private sector organizations.
In the ICS methodology, the initial response steps consist of notifications,
initial assessment, command meeting, initial response and incident briefing using
specific ICS forms. After this initial response period, the emergency handling
process becomes cyclic. This cyclic process is called the Planning “P”. Each cycle
consists of a planning phase and an operational phase. The planning phase
consists of situation assessment meetings, objective updates, tactics definition,
planning, elaboration and approval of the incident action plan (IAP). The
operational phase consists of executing a response plan and assessing its progress,
after which a new cycle begins.
The InfoPAE system (Carvalho et al. 2001) was designed as a tool for
managing this complex emergency handling process, making incident response
quicker and more effective. It has been in use at Petrobras, a large Brazilian oil
company, for more than ten years. It also proved to be a valuable training tool.
The system offers a sophisticated database for response action plans and easy
access to vital information and resources allocated for different types of scenarios.
One of the difficulties of such systems is that, even though it is possible to
describe an emergency action plan at a reasonably detailed level, such descriptions
are limited with respect to the representation of dynamic aspects. In (Frasca 2003),
the author discusses two different approaches for modeling knowledge about
dynamic phenomena: representation and simulation. According to the author, the
main difference between both forms is that simulation attempts to model the
behavior of the elements involved in the phenomenon, while representation is
limited to retaining its perceptual characteristics. To make this clear, the author
gives the example of a plane landing procedure. A representation of a specific
landing could be a film where an observer would be incapable of interfering. On
the other hand, a flight simulator would allow the player to modify the behavior of
the system in a way that simulates the real plane. This flexibility is only possible
due to the simulation characteristic of modeling the behavior of the elements
independently of any specific scenario.
Traditionally, response action plans take a more representational form,
usually adopting workflows. Although response action plans contain response
strategies planned for different types of scenarios, one cannot tell whether the plans
are well suited for all the possibilities of evolution of an emergency situation. For
example, a plan can describe the action of sending two boats to intercept an oil
slick. However, it may not be possible to do that before the oil reaches the coast
under some specific conditions. If emergency managers were able to simulate the
whole process in a more realistic way, it would certainly make the emergency
plans more reliable.
Testing the quality of response action plans, as well as the performance of
emergency response teams, is mandatory to ensure minimum impact of the
incident. In addition to other initiatives, such as field exercises, the use of
computational simulation can be a cost-effective and efficient mechanism for
validating action plans and training response teams. Simulation can take into
consideration many details that are difficult to consider if the planning is done
exclusively by humans. For example, it can take into consideration the location of
the needed resources and the specific spatial characteristics of the emergency
scenario to estimate, in advance, if there will be enough time to get the necessary
resources in place for executing a specific action.
Specifically, the main benefits that simulation may bring to the InfoPAE
system are:
- Simulation helps in finding flaws in emergency plans.
- The spatial configuration of available resources can be evaluated and
optimized so that they can be deployed to handle any scenario
requirements as quickly as possible.
- Simulation-based games provide training that helps improve
personnel performance.
- The cost of computer simulation is significantly lower than that of
functional or full-scale exercises.
Simulations are commonly used for investigating physical phenomena, such
as those involving dispersion of chemical products in the environment
(Karafyllidis 1997; Chinmoy and Abbasi 2006). However, the pure simulation of
physical processes does not take into consideration the effects of contingency
actions. More generally, it is not enough to simulate a specific process of an
emergency situation in isolation. Essentially, one must concurrently simulate all
relevant processes, considering the interferences between them. For instance, a
response action plan modeled as a workflow may significantly interfere with the
dispersion of chemical products that could be modeled as a cell space process.
The main problem is how to combine simulations of different processes modeled
in different formalisms. This is precisely where Process-DEVS and the techniques
described in Chapter 4 can be of great help in equipping the InfoPAE system with
the necessary simulation capabilities. They make it possible to combine simulations
of physical phenomena with other processes related to the Planning “P”.
5.2 A Motivating Example - Contingency Plans for Oil Leaks
Oil leak emergency situations constitute a common scenario that the
InfoPAE system has been used for. Accidents involving the spill of a considerable
volume of oil into the ocean are critical because of their potential environmental
impact. Additionally, oil removal from the environment is a costly process,
ranging from USD 20 to USD 200 per liter (Fingas 2000). This kind of scenario
is also interesting because it involves processes of different nature such as oil
dispersion on water and response action plans. For these reasons, oil leak
emergency situations were chosen as the first simulation experiment using the
InfoPAE system.
In oil leak situations, the response plan, at the highest level of abstraction,
consists of three phases: (1) finding and stopping the leak; (2) restricting the oil
propagation; (3) recovering all possible oil from the environment.
The first phase relates mostly to plants and installations. In this phase, the
response plans are usually simple and response effectiveness depends mostly on
the availability of engineering information and of quick communication. After the
leak has been detected and proper measures for stopping it have been taken, the
focus of the response plan is on containment and recovery of the leaked oil.
The highest environmental impact usually occurs when some amount of oil
hits the shore, which also causes the oil removal to grow more expensive. Oil is
usually lighter than water and it does not dissolve in it. When leaked into a water
body, it remains concentrated on the surface, forming one or more oil slicks.
These oil slicks are shaped and moved by external forces, such as wind and water
currents. If these forces push an oil slick towards the coast, it is almost certain that
it will cause a large concentration of oil along some particular coastal segments.
There are many types of oil, but their dispersion and evaporation rates are usually
too small to prevent coast hits.
A coastal segment is environmentally sensitive to oil because of the
concentration and diversity of animal species and ecosystems found on the
segment. Each type of coast has its own particular characteristics and sensitivity to
oil. For example, the InfoPAE project divides the Brazilian coast into discrete
segments, classified according to their environmental sensitivity characteristics.
Each point in the coast belongs to exactly one segment, which in turn belongs to
one sensitivity class.
The main method for preventing coastal damage is to restrict the oil
propagation by employing floating containment barriers, which are also called
containment booms (Fingas 2000). The barriers are usually deployed in U-shape
in an attempt to trap the oil slicks according to the direction they are moving. The
main resources needed to place a containment barrier are the barrier itself, one or
two boats with a minimum crew and some source of information about the
location of the oil. The spatial configuration of all those resources is very
important to determine the time necessary to install a containment barrier at a
given location. Therefore, planning in advance the locations where the resources
are kept is crucial. For instance, in a badly planned resource configuration,
depending on that time and on the velocity of the oil slick, it may not be possible
to prevent the oil from reaching a critical coastal segment.
In order to optimize spatial resource planning, it is necessary to consider
various factors, such as the set of likely locations of possible oil spills, the set of
likely climate conditions relevant to the movement of oil slicks (mainly wind and
water currents) and the location of the most vulnerable nearby coastal segments.
As a general rule, recovering oil from the coast usually takes more time and
money than from water. Therefore, resource planning and speed of response is
critical for minimizing coast hits.
Finally, the last phase of the response strategy is to remove as much oil as
possible from the environment. The recovery of oil starts when a stable situation
is reached, namely when the oil stops moving, either because it is trapped in
containment barriers, or because it has hit the coast. The oil recovery process is
carried out by a number of different processes, with different equipment. The
choice of the process depends on the type of oil and on the characteristics of the
situation. For example, for oil slicks trapped in containment barriers, the use of
skimmers from a boat is usually appropriate. As for recovering oil from the coast,
there are many different procedures. Usually the best procedure depends on the
type of oil and on the characteristics of the coast. Common procedures include
manual removal, flooding or washing, use of vacuums, mechanical removal,
tilling and aeration, sediment reworking or surf washing, and the use of sorbents
or chemical cleaning agents (Fingas 2000).
5.3 Simulation Dynamics
This section explains the dynamics of the InfoPAE simulation, which was
modeled on top of the Process-DEVS framework.
5.3.1 The Environment
The environment is depicted in Figure 5.1 and contains all data necessary
for the simulation. This data is mostly geospatial in nature and can be classified
either as static or dynamic.
Figure 5.1. The environment with its elements
Static data is retrieved from the InfoPAE database and consists mostly of
two-dimensional GIS data. It includes all coastal segments in a given area, with
their sensitivity classification, the plant installations and other information
relevant to the logistics of resource displacement, such as the location of piers. It
should be noted that the coast segments are also used to determine the extent
of the water bodies. Presentation information, such as satellite images, will not be
listed here. Even though they are important for the final user of the system, they
are relevant only to its user interface and not to its underlying simulation.
Dynamic data includes the weather conditions, the location of oil slicks and
the location of resources, such as containment barriers, recovery boats and coastal
cleaning teams. The relevant weather conditions include water currents and the
direction and velocity of the wind. Oil slicks are represented in a regular grid cell
space, where each cell contains a value that represents the amount of oil in it.
Containment barriers are represented as lines which are basically sequences of
points. Finally, recovery boats and coastal cleaning teams are represented simply
as single points, and they are able to remove oil from any location within a fixed
radius of their position.
According to the framework definition presented in chapter 0, the processes
in a simulation access the state of the environment through environment views.
The main views this environment provides are the vector view and the cell view,
as depicted in Figure 5.2. In the vector view, all data is read in vector format, such
as points, lines and polygons. In the cell view, everything is represented in a
rectangular grid of cells. Although these two views are different in nature, both
represent the same data. Elements that are fundamentally represented as vectors,
such as those just mentioned, are presented in the cell view as if they occupy all
cells that intersect their vector geometry. Likewise, cellular elements, such as oil
slicks, are represented in the vector view as a set of points. In the case of oil
slicks, each cell that contains some amount of oil is presented as a 2D point placed
at the center of the cell, with an attribute indicating the amount of oil in that cell.
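The cell-to-point mapping just described can be sketched in a few lines. The cell size and the tuple representation of points are illustrative assumptions, not the actual InfoPAE data structures:

```python
# Sketch of exposing the cell view's oil cells in the vector view: each
# non-empty cell becomes a 2D point at the cell center, carrying an
# attribute with the amount of oil in that cell.

CELL_SIZE = 10.0                      # assumed grid resolution

def oil_cells_to_points(oil):
    """oil: {(col, row): amount} -> list of (x, y, amount) points."""
    points = []
    for (col, row), amount in sorted(oil.items()):
        if amount > 0:
            x = (col + 0.5) * CELL_SIZE    # center of the cell
            y = (row + 0.5) * CELL_SIZE
            points.append((x, y, amount))
    return points

oil = {(0, 0): 3.0, (2, 1): 0.0, (1, 1): 5.0}
assert oil_cells_to_points(oil) == [(5.0, 5.0, 3.0), (15.0, 15.0, 5.0)]
```

The inverse direction (vector elements occupying all cells their geometry intersects) would be a rasterization step and is omitted here.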
Figure 5.2. The vector view (a) and cell view (b)
Those two views will feed processes of different nature, modeled in
different formalisms. For example, the process that models the oil dispersion may
be modeled as a cell space process using the cell view as input, and a process for
barrier placement may use vector algebra, based on the vector view.
Besides those two main views, the environment also provides the properties
view, through which the processes can access all non-spatial data, such as the
weather conditions. This view is accessed as a set of property-value pairs.
It is generally a good practice to put as little intelligence as possible in the
environment. For this reason, the environment described here behaves like a
database. It stores data, serves that data in the form of views, according to the
needs of its clients, and processes transactions. The transactions consist of events
sent by the processes. The events this environment can receive are:
OilLeakEvent(cell, amount) – adds the given amount of oil to the given cell.
OilRecoverEvent(cell, amount) – subtracts the given amount of oil from the
given cell.
OilMoveEvent(origin cell, destination cell, amount) – moves the given
amount of oil from/to the given cells. It subtracts the amount from the origin
cell and adds to the destination cell.
ChangeResourceLocationEvent(resource, geometry) – changes the location
of the given resource. The resource can be a containment barrier, a recovery
boat or a coastal cleaning team. The new location is defined by the given
geometry. The geometry must be checked against the type of resource. For
recovery boats and coastal cleaning teams, the geometry must be a point. As
for containment barriers, the geometry must be a line with length no greater
than the total length of the barrier.
ChangeWindEvent(direction, velocity) – changes the wind.
ChangeWaterCurrentEvent(direction, velocity) – changes the water current.
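Since the environment behaves like a database processing transactions, the oil-related events above can be sketched as a small transaction processor. Representing events as (name, payload) tuples is an assumption made only for this sketch:

```python
# Minimal sketch of the environment as a transaction processor for the
# oil-related events. Only the cell-space bookkeeping is shown.

class OilEnvironment:
    def __init__(self):
        self.oil = {}                      # cell -> amount of oil

    def apply(self, event):
        name, payload = event
        if name == "OilLeakEvent":
            cell, amount = payload
            self.oil[cell] = self.oil.get(cell, 0.0) + amount
        elif name == "OilRecoverEvent":
            cell, amount = payload
            self.oil[cell] = max(0.0, self.oil.get(cell, 0.0) - amount)
        elif name == "OilMoveEvent":
            # Move = subtract from origin, add to destination.
            origin, destination, amount = payload
            self.apply(("OilRecoverEvent", (origin, amount)))
            self.apply(("OilLeakEvent", (destination, amount)))

env = OilEnvironment()
env.apply(("OilLeakEvent", ((0, 0), 10.0)))
env.apply(("OilMoveEvent", ((0, 0), (0, 1), 4.0)))
env.apply(("OilRecoverEvent", ((0, 1), 1.0)))
assert env.oil[(0, 0)] == 6.0 and env.oil[(0, 1)] == 3.0
```

The resource and weather events would be handled analogously, updating the geometry or property tables instead of the cell space.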
For simplicity, it is assumed that the wind and water current are uniform
fields, with the same value at all points. This simplification may cause the
simulation to behave unrealistically, if the simulated area is large enough.
However, a detailed model for those conditions is out of the scope of this work.
This simplification was made in order to keep the text more didactic with respect
to the simulation mechanisms.
There are two modes in which this InfoPAE environment may operate
during a simulation. In memory mode, the states of all elements are kept in main
memory, i.e., there is no communication with any persistence device during the
course of the simulation. The other mode is the saving mode. In this mode, every
time an element has its state changed, the environment feeds a spatio-temporal
database with the new state of that element. Hence, for each dynamic element in
the simulation, there will be a time series in the database. With those time series,
the sequence of world states of the simulation can be replayed after the simulation
has finished.
The memory mode is used to achieve better performance. For example,
when a user is designing a response plan for a given situation, he may run a great
number of simulations until he is satisfied with his plan. It is not necessary to save
them all, and his work will be more efficient if the simulations execute faster.
On the other hand, the saving mode is important when one must replay the
simulation for analysis. A good example is a multi-player training game.
5.3.2 Processes
The set of processes is what gives life to the dynamic elements in the
simulation, and they are responsible for modeling all kinds of behavior, from oil
dispersion to containment barriers installation.
Recall that, in a simulation, processes act by sending events to each other
and to the environment, and the coupling structure of the simulation defines the
connections through which the events are sent to other processes and to the
environment. The overall structure of the processes in the InfoPAE simulation is
detailed in Figure 5.3. Every circle in the figure represents a process. The arrows
represent either parental relations between processes or connections in the
coupling structure of the simulation, through which the events flow. For
simplicity and ease of understanding, less important details were omitted from
this figure.
Figure 5.3. The process structure
The process structure is not fixed. The number of resources in a simulation
may vary, and so does the number of processes to manipulate them. Additionally,
the kind of process that controls the god avatar and command avatar processes
can also change, depending on the kind of simulation and the number of human
players. In this context, an avatar represents a role in a simulation that is to be
played either by a human player or a fully automated process. The different types
of processes are described next:
oil_leak(cell, leak_amount, leak_rate, frequency) – This is a very simple
process which starts the whole simulation activity. The idea is that there is
an oil leak at the given cell, which leaks at rate leak_rate. The total amount
of oil to be leaked is given by leak_amount. This process updates the
environment with the given frequency by periodically sending events in the
form OilLeakEvent(cell, amount) to it. Since it is a periodic process, its
time-advance function is constant ta(s) = 1 / frequency. It keeps generating
these events until the total amount of oil leaked reaches leak_amount.
Therefore, the total number n of generated events is given by n =
⌈leak_amount / (leak_rate / frequency)⌉. The parameter amount of each event is
given by (leak_rate / frequency), except for the last event, for which it is given by
leak_amount – (n – 1) * (leak_rate / frequency).
Once all those events are sent to the environment, the process is
finished.
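The event count and amounts just described can be checked with a short calculation. The function below is an illustrative sketch of the schedule only, not the actual process implementation; taking the ceiling of the division handles the case where leak_amount is not a multiple of the per-event amount:

```python
import math

# Sketch of the oil_leak event schedule: periodic events of size
# leak_rate / frequency until leak_amount is reached, with a smaller
# final event making up the remainder.

def oil_leak_events(leak_amount, leak_rate, frequency):
    per_event = leak_rate / frequency
    n = math.ceil(leak_amount / per_event)              # total number of events
    amounts = [per_event] * (n - 1)
    amounts.append(leak_amount - (n - 1) * per_event)   # last, possibly smaller
    return amounts

# Example: 10 units leaking at rate 3 per time unit, one event per time unit.
events = oil_leak_events(leak_amount=10.0, leak_rate=3.0, frequency=1.0)
assert events == [3.0, 3.0, 3.0, 1.0]
assert sum(events) == 10.0
```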
oil_dispersion – This process models the movement of oil slicks,
considering the wind conditions and water currents. This process also
generates events periodically. At each time step, it reads the weather
conditions from the properties view and searches the cell view for all cells
that contain some amount of oil. Then, it invokes a function that takes as
input all this gathered data and outputs a set of events in the form
OilMoveEvent(origin cell, destination cell, amount). As the coupling
structure indicates, those events are sent to a resolver process. This function
is complex and its internal details are out of the scope of this work. For the
matter of understanding the simulation logic, it suffices to specify the
format of its input and output.
oil_block – This is also a periodic process. At each time step, it checks the
vector view for the location of all containment barriers. After that, it
calculates which cells intersect the geometries of the barriers and generates
one event in the form OilBlockEvent(cells), where cells is the set of cells
that intersect some installed containment barrier.
resolver – The three processes oil_leak, oil_block and resolver are arranged
in an interference pattern, which is described in Section 4.4.2. The resolver
process receives events of types OilMoveEvent and OilBlockEvent. It
outputs only events of type OilMoveEvent. In its internal state s 2C, where
C is the set of all cells in the cell space, it keeps the set of cells that are
blocked. Each time it receives an OilBlockEvent(cells), its internal state
becomes s = cells. Each time it receives an OilMoveEvent(origin cell,
destination cell, amount), it forwards the event immediately as output only if
destination cell ∉ s; otherwise, the event is ignored. This resolver
process therefore acts as an event filter, holding back all movement of oil that is
blocked by the barriers. This way, the logic of oil containment is separated
from the complex logic of oil dispersion.
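The filtering behavior of the resolver can be sketched as a small event handler (illustrative names; the Process-DEVS wrapping, time stamps and couplings are omitted):

```python
class Resolver:
    """Keeps the set of blocked cells and filters oil movement events."""

    def __init__(self):
        self.blocked = set()  # internal state s, a subset of the cell space

    def receive(self, event):
        kind = event[0]
        if kind == "OilBlockEvent":
            self.blocked = set(event[1])   # s := cells
            return None                    # nothing is forwarded
        if kind == "OilMoveEvent":
            _, origin, destination, amount = event
            if destination not in self.blocked:
                return event               # forwarded unchanged as output
            return None                    # movement held back by a barrier
        raise ValueError("unexpected event type: %s" % kind)
```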
oil_recovery(resource_id, action_radius, recovery_rate, max_capacity,
frequency) – This process removes oil from the environment. This process is
used both by recovery boats and by coastal cleaning teams. The resource_id
indicates which resource is recovering oil. The location of the resource is a
point in a 2D space and can be obtained from the vector view at any time.
The action_radius indicates the maximum distance from the resource’s
location where oil can be recovered. The recovery_rate indicates the rate at
which this resource can remove oil from the environment. The
max_capacity is the maximum amount of oil that can be recovered. Finally,
the frequency has exactly the same semantics as in the oil_leak process. It
indicates the frequency at which the environment is updated.
The internal state of this process is defined by the variable
remaining_capacity ∈ ℝ⁺. Its initial value is max_capacity. At each step,
this process invokes a function which outputs a finite set O of events of the
form OilRecoverEvent(cell, amount). The internal details of this function are
omitted for simplicity. It is only important to know that this function must
obey the following restrictions:
1. sum({a | OilRecoverEvent(c, a) ∈ O}) =
min(recovery_rate / frequency, remaining_capacity)
2. (∀o ∈ O)(o = OilRecoverEvent(c, a) ⇒ c ∈ R),
where R is the set of cells intersecting the circle centered at the
resource’s current location with radius action_radius
The first restriction imposes that oil is recovered at a rate equal to
recovery_rate, and also that the process does not recover more oil than its
capacity. The second restriction states that all recovery must be done
within the action area of this recovery process. After the events have been
generated, the internal state is updated: the value
min(recovery_rate / frequency, remaining_capacity) is subtracted from
remaining_capacity. When remaining_capacity = 0, the process is finished. Hence, it is
guaranteed that the process will not recover more oil than its max_capacity.
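One update step of the oil_recovery process, under this reading of the restrictions, can be sketched as follows (hypothetical function name; the actual process also generates the set O of OilRecoverEvent events, which is omitted here):

```python
def recovery_step(remaining_capacity, recovery_rate, frequency):
    """Amount recovered in one step and the updated capacity.
    The recovered amount is capped by the remaining capacity, so the
    process never exceeds max_capacity in total."""
    recovered = min(recovery_rate / frequency, remaining_capacity)
    return recovered, remaining_capacity - recovered
```

When remaining_capacity reaches 0, the process finishes.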
displacement(resource_id, trajectory, speed, frequency) – Moves the
resource defined by resource_id along the line defined by trajectory with
the given speed. The frequency parameter defines the frequency at which
this process will update the position of the resource. This is a periodic
process with ta(s) = 1 / frequency. Consider a parametric function
d: [0, length] → ℝ², where length is the total length of the trajectory and
d(x) is the point in the trajectory reached by walking x space units along the
trajectory, starting from its origin. The internal state of this process is
defined by the variable current_location ∈ [0, length], for which the initial
value is 0. At each step, this process outputs one event in the form
ChangeResourceLocationEvent(resource_id, new_position), where
new_position = d(min(current_location + speed / frequency, length)). After
generating this event, its current_location is updated to
min(current_location + speed / frequency, length). When current_location =
length, the resource has reached its destination and the process is finished.
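The parametric function d can be sketched for a trajectory given as a polyline, i.e., a list of 2D vertices (an illustrative implementation; the thesis does not fix a representation for trajectories):

```python
import math

def along(trajectory, x):
    """d(x): the point reached by walking x space units along the
    polyline, starting from its origin; clamped to the last vertex."""
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if x <= seg:
            t = x / seg if seg > 0 else 0.0
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        x -= seg
    return trajectory[-1]  # x >= length: the destination
```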
barrier_installment(resource_id, location, frequency) – Installs the barrier
defined by resource_id at the given location. Of course, the resource with
the given resource_id must be a containment barrier. The location is defined
by a polyline in the 2D space with length no greater than the total length of
the given barrier. The detailed procedure for installing a containment barrier
involves two boats, which should meet at a particular point, set the barrier
on water and start moving in opposite directions, each one holding one end
of the barrier. Depending on the weather conditions, some complex
movement may be required to keep the barrier in the desired shape.
However, since the focus of the simulation is training and resource
planning, it is not necessary to model this process at such a level of detail.
Instead, this process simply calculates the total time until the boats meet at
the desired barrier location and waits for that period. It then starts sending
periodic events in the form ChangeResourceLocationEvent(resource_id,
location) as new segments are added to the geometry of the installed barrier.
The periodicity of these events is given by 1 / frequency. When the barrier is
totally installed, this process finishes. The internal calculations of this
process are complex and are omitted here for simplicity.
recovery_boat_controller(resource_id) and
coast_clean_controller(resource_id) – These processes control the
resources that are responsible for removing oil from the environment.
Unlike containment barriers, which are treated as passive objects, those
resources are active elements in the sense that they perform actions that alter
the environment, hence the need for controllers. Each recovery boat and
each coastal cleaning team must have one controller process. The controller
processes receive commands in the form RecoverOilAtEvent(location) from
the command avatar process and orchestrate their children, namely
displacement and oil_recovery processes, in order to execute those
commands.
Initially, the controller process reads the attributes of its controlled
resource, which is defined by resource_id. Those attributes define values for
properties such as speed, recovery capacity and recovery rate. Then, it waits
for a RecoverOilAtEvent(location) command. Once it is received, it checks
the location and traces a route to it by using some routing algorithm, whose
details are omitted for simplicity. Then, it forks a displacement process and
waits for it to move the resource to the desired location. Once the resource is
properly located, the oil recovery process is used to perform the recovery
action. If some other RecoverOilAtEvent is received, all current activity, if
any, is cancelled and the operation starts over. This way, the controller
process provides a high-level abstraction for the recovery resources by using
the composite pattern, as described in Section 4.4.3.
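The cancel-and-restart behavior of the controllers can be sketched as follows (illustrative names; fork_displacement and fork_recovery stand for the forking of child processes, and the routing algorithm is reduced to a placeholder):

```python
class RecoveryController:
    """Controls one recovery resource by orchestrating its children."""

    def __init__(self, resource_id, fork_displacement, fork_recovery):
        self.resource_id = resource_id
        self.fork_displacement = fork_displacement
        self.fork_recovery = fork_recovery
        self.current = None  # the child process currently running, if any

    def on_recover_oil_at(self, location):
        """Any new command cancels the current activity and starts over."""
        if self.current is not None:
            self.current.cancel()
        route = self._trace_route(location)
        self.current = self.fork_displacement(self.resource_id, route)

    def on_displacement_finished(self, location):
        """Once the resource is properly located, start recovering oil."""
        self.current = self.fork_recovery(self.resource_id, location)

    def _trace_route(self, location):
        return [location]  # placeholder for the omitted routing algorithm
```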
command_avatar – This process provides the abstraction of an avatar for the
response command in the simulation. Its functionality consists basically of
receiving commands and delegating them to its children. It provides one
additional level of abstraction with the composite pattern. In fact, the whole
tree of processes below the command avatar represents the execution of the
emergency response. This process receives events of the form
DeployResourceAtEvent(resource_id, location). If the resource identified by
resource_id is a containment barrier, it forks a new process
barrier_installment(resource_id, location) if that barrier has not been
deployed yet. If the resource is a recovery resource, it simply sends an event
RecoverOilAtEvent(location) to the appropriate controller.
god_avatar – This process provides an avatar for manipulation of the
weather conditions. It receives commands as events in the form
ChangeWindEvent(direction, velocity) or ChangeWaterCurrentEvent
(direction, velocity) and simply forwards them directly to the environment.
The purpose of this process is merely to make the process structure more
uniform. It is analogous to implementing an avatar interface where one can
plug either a human player interface or another fully automated process.
human_player_interface and non-playing character (NPC) – These are the
processes that can be attached to the avatars. A human player interface is an
input process, as defined in Section 3.3.1, which is able to receive
commands from a human-computer interface (HCI), which is external to the
simulation. This way, a human can interfere with the simulation. The
capabilities of the avatar will define the human’s role in the simulation.
Another possibility is to attach a fully automated process to the avatars. In
this case that process would be a non-playing character (NPC). In the
computer games field, this term is used to denote a fully automated
character that plays a specific role in the game. The NPCs implemented for
the InfoPAE simulation are processes that act based on workflow
definitions, as described in Section 4.1. For each action in the workflow, the
workflow process, as defined in that section, forks a child process which
communicates with the avatar process by sending the events relative to that
action.
Hence, the set of human players is flexible. Each avatar may be
controlled either by a human or by a predefined workflow. In a multi-player
game, all avatars may be controlled by humans. In a fully automated
simulation, all avatars may be controlled by predefined workflows.
Most processes in the simulation are periodic, i.e., they could be defined in a
discrete time formalism. However, they work with different time steps, as listed
below:
oil_leak – 10 s
oil_dispersion – 5 min
oil_block – 1 min
oil_recovery – 10 s
displacement – 10 s
barrier_installment – 30 s
Some processes are rigid with respect to their time steps. For example, the
oil_dispersion process only works correctly with the right time step. However,
most of them are somehow flexible with respect to the time step because they use
their frequency parameter to do their calculations. For example, if we double the
time step of the oil_leak process, it will automatically double the amount of oil
that is leaked at each time step. Such processes can have their time steps adjusted
to optimize the simulation performance. However, there is a minimum granularity
required by each of them so that the simulation results remain correct,
according to a given criterion.
Recall from the simulation operational semantics that all events are
time-stamped and totally ordered. Therefore, although the environment
acts like a database, there are no concurrency issues, since all events are always
processed in the same order.
5.4 The InfoPAE Plan Simulator and Training Game
Two different systems were implemented with the InfoPAE simulation. The
first one is a simulator for the InfoPAE planning module, which provides an
environment in which InfoPAE users can test the response action plans they
design with the InfoPAE plan editor. The second system is the InfoPAE training
game, which provides an environment to simulate an emergency situation with
which multiple humans can interact.
Both systems are based on the same simulation model. The only difference
between them is that, in the planning module, NPC processes are used to control
the avatars, while in the game, these processes are replaced by human player
interface processes, as described in the previous section. All other processes are
reused with the same configuration.
The following sections describe the architecture and functionality of each
system.
5.4.1 The InfoPAE Plan Simulator
As already mentioned in Section 4.1.1, simulation can be a valuable tool in
the process of business process planning. The idea of the plan simulator is to act
as a fast and low cost tool for simulating response plans designed in the InfoPAE
system. Hence, the plan designer may quickly detect flaws in his plans and test
different alternatives, searching for more efficient plans. The architecture of the
plan simulator is depicted in Figure 5.4.
Figure 5.4. The Plan Simulator Architecture
In the InfoPAE planning module, which is not illustrated in the architecture,
the user defines the emergency scenario and designs a response action plan for
that scenario. The result is then stored in the InfoPAE database. The plan
simulator reads this information from the InfoPAE database to build its simulation
with the structure defined in Section 3.3.1. A response action plan is modeled as a
workflow in the InfoPAE database. During simulation execution, this workflow is
used by an NPC to control the command avatar. In addition to the scenario and
response plan information, the plan simulator also needs geographical
information such as the coast geometry with its oil sensitivity data, which is
necessary for simulating coast hits and calculating the total environmental impact.
One interesting point here is that the emergency scenario and the response
plan in the InfoPAE database do not provide all the information needed for a
simulation. Detailed information about the emergency, such as the exact oil leak
coordinates, the leak rate and the total amount of leaked oil, is often missing
from the scenario definition, as are the exact weather conditions, such as the wind
direction and speed. That happens because scenario definitions are required to be
somewhat abstract so that they can represent a larger number of concrete emergency
situations. Otherwise, if the user were always forced to provide complete details,
the number of scenario definitions in the InfoPAE database would grow
unreasonably large. The same happens with response plans, which rarely define all the
exact parameters for every action.
When the user imports a scenario definition and a response plan to build the
simulation, the plan simulator provides an interface for defining all the missing
detailed information. However, the user may still leave some information
undefined. In this case, once the simulation has started executing, as soon as a
simulation process needs missing information, the simulation is automatically
paused and the plan simulator queries the user for that information so that the
simulation can proceed. This is accomplished by the exchange of events between
the user interface and the simulation through I/O processes, as specified in Section
3.3.1.
The user interface provides controls for the user so that he can play, pause
and set the speed of the simulation whenever he wishes. Those requests are sent to
the loop component, which implements the StableFpsLoop described in Section
5.5.2. The current situation is presented to the user in a 3-dimensional scene,
which is rendered by a 3D engine similar to those used by entertainment games. In
order to optimize the rendering performance, the environment implementation
used by this plan simulator stores all its data in main memory and in data
structures specialized for rendering by the 3D engine, as discussed in Section
3.2.2 (decision 4). Screenshots of this graphical interface are shown in Figure 5.5.
Figure 5.5. Screenshots of the Plan Simulator User Interface
The 3-dimensional interface provides the users with a realistic view of the
situation evolution. Besides the position of moving objects such as oil and the
resources, some additional information is rendered on the map. The action radius
of some resources such as recovery boats and coast cleaning teams helps the user
visualize the efficiency of their deployment and think about alternatives. The 3D
visualization also allows the users to check what is visible to people at
specific points in the action field, such as helicopters, boats and coastal points.
5.4.2 The InfoPAE Training Game
The second implemented system was a training game. Its architecture is
depicted in Figure 5.6.
Figure 5.6. The Multi-Player Training Game Architecture
This game uses a multi-touch table as a device where several players can
work together to handle the simulated emergency situation. The table is provided
with a horizontal screen capable of processing multiple touch inputs
simultaneously. Below that screen is a regular PC-like computer that is connected
to a game server via a network. This computer hosts a small interface program
that translates the inputs of the players into commands for the game server. This
game server contains the simulation, the game loop and a Web Map Service
(WMS) (Percivall 2003), which implements a standard way of serving map
images on the Web. One of the benefits of the multi-touch table is that it
facilitates collaboration between players.
Before the game starts, one player has to choose one emergency situation
out of a number of predefined ones. These predefined emergency situations differ
from the scenarios stored in the InfoPAE database in the sense that they contain
all the detailed information needed to run a simulation.
Once the initial situation is chosen, the game server builds the simulation
and starts executing it. The simulation of the game is basically the same as in the
plan simulator. Only the NPC processes are replaced by human player interface
(HPI) processes, as described in Section 5.3.2. The simulation receives from the
table both response action commands and requests for changing the weather
conditions. Response action commands are forwarded to the command avatar,
while requests for changing weather conditions are forwarded to the god avatar.
Finally, there is one last type of input from the table, which consists of requests
for changing the game speed. Those requests are sent to the game loop and
5 The InfoPAE Use Case 138
handled as described in Section 5.5. Speeding up the game may be desirable
when there is no decision making required from the players.
The game loop component implements the StableFpsLoop described in
Section 5.5.2. It keeps a thread that continuously advances the simulation time
and provides the multi-touch table with updates on the simulation environment.
These updates contain the state of the elements in the environment that are
rendered to the players, such as the oil position and the locations of the resources.
All this information is rendered on top of a map, which is provided by the Web
Map Service. A screenshot and a picture of the game are shown in Figure 5.7.
Figure 5.7. The Multi-Player Training Game in Action
The multi-touch table added considerable value to the game. The players
can talk to each other and discuss the correct strategy while allocating the
resources to mitigate the oil leak. Interestingly, although this game was not
designed for entertainment, its users found it fun to play. This shows the
power of games to engage people, which can be exploited by companies to
stimulate discussions, to develop solutions and to propagate knowledge about a
given problem.
5.5 Time Management
During the implementation of the two InfoPAE modules, problems with
time management were detected in the context of current game loop techniques.
None of them seemed to properly handle changes in the simulation speed or the
processing peaks generated by complex simulation models. This section
informally discusses the principles involved in dealing with those requirements
and presents the loop model developed for the InfoPAE system.
The problem of time management lies in the fact that computer games and, more
generally, interactive simulations need to implement some form of
synchronization between the speed at which the simulation advances and the real
time flow. This problem is not as simple as it might look. The game loop
techniques described in Section 2.1.2 show how entertainment games usually deal
with this problem. However, these loops do not take into consideration the
requirements of changing the simulation speed during play and handling
simulation processing peaks.
Since training games attempt to simulate real situations, it is natural that the
simulation time represents the real time of those situations. However, simply
synchronizing the simulation time with the real time flow may not be enough for
all training games. The ability to accelerate and slow down the pace of the game
may be quite important for the usability of training games. For example, consider
a game which simulates an emergency situation which may last for days. The
game simulation should obviously not take the same amount of time. Periods
requiring no decision making should be fast-forwarded. Likewise, periods of
intense decision making could be slowed down for training purposes.
Serious games often make use of complex simulation models. This easily
becomes a time management issue because, unlike entertainment games, these
models cannot be tricked or simplified when they produce processing peaks.
Therefore, game loops designed for training games cannot assume that their
simulation models will not exceed certain processing time limits.
5.5.1 Simulation Speed and Game Loops
Human beings always operate in real time, which cannot be accelerated or
slowed down. Therefore, simulations that interact with humans must implement
some sort of synchronization mechanism. The synchronization problem consists
of adapting the simulation of automated elements to the real time flow by
monitoring the speed at which the simulation is running and adapting its advance
policy accordingly. The average simulation speed is calculated by speed = Δt_sim /
Δt_real, where t_sim is the simulation time and t_real is the real time. The desired value
of the speed will vary unpredictably in time depending on the will of the user. One
example of how the speed can be changed during play is shown in Figure 5.8.
When speed = 1, the simulation is synchronized with the real time flow. Greater
values mean that it is in accelerated mode and lesser values in slow motion.
Pauses obviously have speed = 0.
Figure 5.8 – Simulation speed being changed during play
As already mentioned in Section 2.1.2, time management in single-player
games is traditionally done by a loop which interleaves calls to the three functions
input, update and render. The term frame rate is used in gaming to denote the
real-time frequency at which the render function is invoked. Both input and render
functions represent simulation I/O. For simplicity, we shall consider only two
functions: update and process_io. The update function is responsible for advancing
the simulation time, while process_io is assumed to handle all I/O, including
rendering. The idea is that the simulation system alternates between advancing its
internal simulation and communicating with external entities. In a single player
desktop game, it means to receive user input and render the user view. However,
considering the case of a network game, it could mean exchanging update
messages with its peers instead of rendering to the screen.
To maintain consistency with gaming terminology, we shall use the term frame
rate to denote the frequency at which the process_io function is invoked, even if
this function does not render a frame for the user as, for example, in the network
case just described.
Both functions are defined as
process_io()
{
current_state := current_state.flush_io()
render() //if necessary
}
update(dt)
{
current_state := current_state.advance(dt)
}
where dt is the simulation time advance, current_state is the current simulation
execution state and the advance and flush_io functions are as defined in Section
3.3.2. After the call update(dt), the simulation time is increased by dt and its state
is updated accordingly.
Some game loops define their update function without the dt parameter,
considering a fixed predefined time increase. This kind of loop assumes a discrete
time simulation model, which is not enough to handle discrete event simulation
formalisms, such as Process-DEVS.
Other more sophisticated and highly interactive game loops divide the
update tasks between two update functions: one that is executed at a fixed
frequency and another that runs at a variable frequency. The first is used for
tasks that do not present relevant results in brief time intervals, such as the game
logic. The second is used for tasks like animation interpolation, which produce
smoother results if executed at a high frequency (Valente et al. 2005). In this case
it makes sense to make multiple calls to the variable frequency update and
process_io pair of functions between two consecutive calls to the fixed frequency
update, as depicted in Figure 5.9 (a). The fixed frequency update must be called
exactly once in a given real time period. The remaining time is then used to make
calls to the variable frequency update and process_io functions.
Figure 5.9 – Game loop profiles
This kind of loop forces most of the game logic to be modeled in discrete
time, which can limit the integration and reuse of simulation models, especially if
they work at different time scales as discussed in Section 3.2.1. However, with the
discrete event approach, it is not necessary to have a fixed frequency update
function. The simulation can be advanced by any time period dt at any time,
reaching a perfectly valid and defined state. Besides, removing the fixed
frequency update does not restrict the simulation models. In the context of the
Process-DEVS framework, any logic that is modeled in discrete time can be
embedded in a process for which ta(s) = c, where c is a constant. In this case, even
though the update function is called with a variable frequency, that process will be
executed as if it was modeled in discrete time.
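This embedding of discrete-time logic in a discrete-event process can be sketched as follows (illustrative names; the real process also declares input/output events and the other transition functions required by Process-DEVS):

```python
class FixedStepProcess:
    """A process with constant time advance ta(s) = c, which executes
    a discrete-time step function at every internal transition."""

    def __init__(self, c, step_fn, state):
        self.c = c              # the constant time advance
        self.step_fn = step_fn  # discrete-time transition function
        self.state = state

    def ta(self):
        return self.c  # independent of the state

    def internal_transition(self):
        self.state = self.step_fn(self.state)

def run_until(process, t_end):
    """Drive the process by its own time advances up to time t_end."""
    t = 0.0
    while t + process.ta() <= t_end:
        t += process.ta()
        process.internal_transition()
    return t
```

Even if the surrounding loop calls update with variable time increments, the process still fires exactly every c simulation time units.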
Since it is not necessary to provide a fixed frequency update function, the
loop is considerably simplified. It is only necessary to alternate calls to the update
and process_io functions as depicted in Figure 5.9 (b). In this case, the main
question is to figure out which parameter dt to use in each update call, considering
that it is not known in advance how long those calls will take to execute. The next
section provides a study on some loop models in order to answer this question.
5.5.2 A Loop Model Study
In order to test different loop models, we shall consider a simulation
composed entirely of processes of the form Pi[Δti, wti] = ⟨Si, Xi, Yi, Ei, Pi,
δint,i, δext,i, λi, ρi, tai⟩. Each process Pi generates events periodically every Δti
simulation time units. Each event is assumed to take wti real time units to be
processed. In short, tai(s) = Δti and the process does nothing besides consuming
a processing time equal to wti in its internal transition function. Two processes
are defined for the
test: P1[50ms, 5ms] and P2[250ms, 100ms]. These two processes will determine
how much processing time each call to the update function will take. The
process_io function is assumed to take a constant time equal to 5ms and the
desired frame rate for this simulation is 10fps. The purpose of this test is to study
the effects of a high processing load of a simulation model in an interactive
simulation. Particularly, the relatively sparse processing peaks generated by P2
and the exhaustion of the processing resources caused by a speed increase shall be
studied in detail. In order to achieve that, the simulation starts normally with
speed = 1. When the real time reaches 4s, the speed is increased to 4. When the
real time reaches 6s, the speed returns to 1. When the simulation time reaches 15s,
the simulation is finished.
The first and simplest loop considered in this study is the MaxFpsLoop,
which is defined as
current_time = get_system_time()
while(!is_finished())
{
last_time = current_time
current_time = get_system_time()
update((current_time - last_time) * get_speed())
process_io()
}
This loop is quite simple and useful. It simply measures the real time it took
to execute the previous cycle and uses it to feed the update function. Note that it is
multiplied by the speed given by the get_speed function. For example, if the speed
equals 2, the simulation will be advanced twice as fast as the real time.
This loop clearly attempts to maximize the frame rate. The faster the update
and process_io functions are executed, the higher the frame rate is. Although
loops like this are used in some single-player computer games, it has two
drawbacks if we consider the serious games requirements discussed at the beginning
of this section. First, it always attempts to use all available computational power
to increase the frame rate, even in cases where that will not improve the user
experience. Second, it does poorly when trying to run at speeds that surpass the
processing limits. In that case, the frame rate drops dramatically and the dt
parameter of the update function grows indefinitely.
Figure 5.10 depicts the results of the test executed with the MaxFpsLoop.
The chart on the top shows the evolution of the simulation time with the real time
flow. The chart on the bottom shows the frame rate. The frame rate values are
calculated using a time window of 0.5s. The results show clearly that this kind of
loop is inadequate to meet the requirements. First, its frame rate at normal speed
is much higher than desired, thereby wasting resources, which might be a
problem in the serious games industry, where training games may compete for
processing power with other corporate information systems. Second, the frame
rate drops almost to zero when the simulation is accelerated beyond the
processing capacity. Third, it continues to run fast for some time after the speed
has returned to normal at 6s. This is because this loop accumulates time debts
during the period where it does not reach the desired speed. After the speed has
returned to normal, it attempts to compensate by continuing to execute faster for
some time. This is good only for small time debts such as those caused by the
processing peaks of P2. Indeed, the line in the top chart precisely reaches
the point (4, 4) because of this property. However, if the time debt is large
enough that the time necessary for compensation is perceivable to the user, it
should not be compensated, because compensating it would give the user a sense
of losing control over the simulation speed.
Figure 5.10 – MaxFpsLoop
The MaxFpsLoop is based on a catch-up principle: it checks the time it took
to execute the last loop cycle and sets the next update dt parameter accordingly.
One alternative also used in computer games is the opposite strategy: set a fixed
dt parameter and adjust the loop time accordingly. This is done by taking the
time the update call took and setting a sleep time accordingly. The FixedStepLoop
implements this approach in a simple way.
parameters(
desired_frame_rate = 10.0
)
loop_time = 1.0 / desired_frame_rate
last_time = get_system_time()
while(!is_finished())
{
process_io()
update(loop_time * get_speed())
remaining_time = loop_time - (get_system_time() - last_time)
if(remaining_time > 0)
{
sleep(remaining_time)
}
last_time = get_system_time()
}
This loop clearly expects that there will always be enough processing time
to execute the update and process_io on time to keep the frame rate constant at the
desired level. In fact, at normal speed, the frame rate is very well behaved as
shown in Figure 5.11. It still drops when the processing capacity is stressed but
less than in the MaxFpsLoop. Another problem solved by this loop is that it does
not accumulate time debts when the processing capacity is exceeded. This can be
easily seen in Figure 5.11. After 6s, the speed returns to normal almost
immediately.
Figure 5.11 – FixedStepLoop
Although the FixedStepLoop solves most of the problems of the MaxFpsLoop, it
raises a new problem: since it does not accumulate time debts, the processing
peaks caused by P2 force the simulation to go slower than the desired speed, even
when there is enough processing capacity. This can be easily checked in the top
chart of Figure 5.11. The line does not reach the point (4,4) as expected. This
could be an issue in a computer simulation that is mixed with real dynamics, for
example.
In order to solve the problems raised by these two loop studies, a looping
strategy consisting of the following steps was developed:
1. Update the simulation in small steps until it is time to call
process_io or all time debts have been paid. Updating the
simulation in small steps is good to detect when the next call to
process_io is late. If all time debts have been paid, the simulation is
up to date and there is no need to update it any further.
2. If all time debts have been paid, wait for the time to call
process_io. This is important to release the processing resources if
they are not fully needed to achieve the desired frame rate.
3. Call process_io. It is called once per loop. Therefore each loop
should ideally last the inverse of the frame rate.
4. Compute the loop time and increase the time debt for the next
loop, forgiving all debts beyond a given threshold. The desired
time for executing a loop cycle is determined by the desired frame
rate. The debt calculation should consider the difference between the
desired and actual loop time.
This loop requires two additional parameters besides the desired frame rate:
one defining the granularity of the steps in which the simulation should be
advanced, and another for the debt-forgiving threshold. The StableFpsLoop
implements the steps above. Its pseudo-code is given below.
parameters(
desired_frame_rate = 10.0
max_debt_factor = 2.0
update_granularity = 0.25 //should be between 0.0 and 1.0
)
loop_time = 1.0 / desired_frame_rate
current_time = last_time = get_system_time()
advance_debt = 0.0
while(!is_finished())
{
advance_debt += loop_time * get_speed()
advance_step = loop_time * get_speed() * update_granularity
do
{
advance_step = min(advance_step, advance_debt)
update(advance_step)
advance_debt -= advance_step
remaining_time = loop_time - (get_system_time()-last_time)
//if no more debts, waits until time to call process_io
if((advance_debt <= 0.0) && (remaining_time > 0.0))
{
sleep(remaining_time)
remaining_time = 0.0
}
} while(remaining_time > 0.0)
process_io()
current_time = get_system_time()
//add time difference between desired and actual loop time
advance_debt +=
((current_time - last_time) - loop_time) * get_speed()
//forgive debts beyond threshold
debt_threshold = max_debt_factor * loop_time * get_speed()
advance_debt = min(advance_debt, debt_threshold)
last_time = current_time
}
The results of actually running this loop are depicted in Figure 5.12. It can
be easily seen that the StableFpsLoop behaves better than the previous loops.
Like the MaxFpsLoop, it is capable of compensating for small processing peaks,
keeping the average speed at the desired value. However, if the simulation keeps
a speed beyond the limits of the processing capacity for a long period, it does not
accumulate the whole time debt: after 6s, the speed returns to normal rather
quickly.
Figure 5.12 – StableFpsLoop
This loop also behaves well with respect to the frame rate. At normal speed,
it keeps the frame rate at the desired value and therefore saves as much
processing time as possible for other concurrent applications. The frame rate still
drops a little under stress conditions, but the impact is smaller than in the two
previous loop models.
All in all, the StableFpsLoop is the first loop model that handled the test
case in an acceptable way. The four steps identified for implementing the loop
strategy seem to be a good guide for dealing with all the requirements identified
for training games, especially the speed-change requirement.
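The debt update of step 4 is what lets the StableFpsLoop absorb small peaks yet recover quickly after long overloads, so it is worth isolating. The following Python sketch (function and parameter names are illustrative, not from the thesis implementation) applies the two operations of that step to numeric cases matching the behavior described above.

```python
def updated_debt(advance_debt, actual_loop_time, loop_time, speed, max_debt_factor=2.0):
    """Step 4 of the StableFpsLoop: grow the debt by how late this loop
    finished (scaled by the simulation speed), then forgive anything beyond
    max_debt_factor loops' worth of simulated time."""
    advance_debt += (actual_loop_time - loop_time) * speed
    debt_threshold = max_debt_factor * loop_time * speed
    return min(advance_debt, debt_threshold)


loop_time = 0.1  # desired loop time for a 10 fps target

# A small peak (the loop took 0.15 s instead of 0.1 s) adds about 0.05 s of
# debt, which a later fast loop can repay, keeping the average speed correct.
print(updated_debt(0.0, 0.15, loop_time, speed=1.0))  # ≈ 0.05

# A long overload would accumulate about 1.0 s of debt, but it is capped at
# 2 * 0.1 * 1.0 = 0.2 s, so the simulation recovers quickly once the load drops.
print(updated_debt(0.8, 0.3, loop_time, speed=1.0))   # ≈ 0.2
```

At normal speed and a 10 fps target, the cap means the loop never tries to repay more than 0.2 s of simulated time, which is why the speed in Figure 5.12 returns to normal shortly after the overload ends.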
5.6 Summary
This chapter described the implementation of two applications on top of the
Process-DEVS formalism, introduced in Section 3.3. Both applications are part of
the InfoPAE system, which is targeted at managing emergency response in the oil
industry. The first application consists of a planning module while the second is a
training game.
The first result of the discussion in this chapter is that almost all of the
simulation model could be successfully reused by the two applications, thanks to
the high level of modularization. Only a small number of processes and the
internal implementation of the environment had to change in order to allow
different types of user interaction and to optimize the 3D rendering performance
of the plan simulator.
Another result is that processes modeled in different formalisms, such as
resource displacement, cell space processes and workflows, could work well
together while remaining independent of each other. The process of oil
dispersion, which had the most complex logic in the simulation, could be easily
expressed in terms of the Process-DEVS formalism without encountering any
restrictions. The same happened for the workflow operators of the InfoPAE
response action plan representation. No limitations were found with respect to
the expressiveness of the simulation framework.
As expected, the 3D rendering performance of the plan simulator and the
network traffic of the training game did not change significantly when the
simulation speed was increased, even when the processing capacity was reached.
The time control techniques described in Section 5.5 were largely responsible for
that result.
In the case of the plan simulator, even though users can compose different
simulations by defining new scenarios and response plans, they have expressed a
desire to define specific simulation processes for certain cases. However,
programming directly on top of the Process-DEVS formalism would be too
difficult for non-programmers. Therefore, just as in the case of SeSam (described
in Section 2.3.2), it would be desirable to provide users with a simpler language
on top of Process-DEVS for defining processes.
Another desirable feature for the InfoPAE system would be the notion of
time in its workflow notation. Most workflow representations only allow one to
define before-after relationships between actions. Some InfoPAE users expressed
a desire to represent more detailed and quantitative time relations. Since the
Process-DEVS formalism models time explicitly, it would likely be capable of
supporting interesting representations of workflows with time.
6 Conclusions and Future Work
6.1 Conclusions
This thesis aimed at creating a framework through which formal simulation
methods could be integrated with gaming techniques in a modular architecture. It
was argued that serious games can greatly benefit from being based on formal
simulation methods. The results in this thesis contributed to increasing the level
of formality in the design of game dynamics, which is an important step if games
are to be used extensively outside the entertainment realm. The thesis also
contributed to the development of the InfoPAE system by implementing two
working modules to test the proposed framework.
Chapters 1 and 2 enumerated the requirements of training games and
reviewed a few techniques and systems, selected from the areas of computer
games, modeling and simulation, multi-agent systems and planning, that could
help fulfill those requirements.
Chapter 3 first discussed the desirable characteristics of a framework for
modeling the dynamics of a training game, considering the requirements
enumerated in Section 1.3. The framework followed the process-oriented
simulation (POS) paradigm for dynamic modeling. The results of the discussion
were organized in the form of decisions, which guided the development of the
Process-DEVS dynamic modeling formalism. The fact that DEVS can serve as a
common basis for integrating different simulation formalisms (Vangheluwe 2000)
was inherited by Process-DEVS. This showed that POS is able to inherit
interesting properties of object-oriented simulation (OOS). Moreover,
the separation between behavior and physical state allows simulation applications
to represent the physical objects of the environment directly in the form of scene
graphs. That capability allows the use of formal simulation models together with
fast 3D rendering technology.
Chapter 4 formalized a number of ways to implement some common
dynamic modeling formalisms on top of Process-DEVS. Section 4.1 presented a
way in which workflows can be mapped to Process-DEVS. In fact, in the way
workflow processes were defined, they represent a form of process composition
where the workflow logic defines which processes are created and when. Section
4.2 discussed the issue of modularity in the domain of cell space processes and
presented a formalism for dealing with it on top of Process-DEVS. It also showed
how to compose cell space processes out of smaller pieces. Section 4.3 presented
a formal framework in which it is possible to model multi-agent systems on top of
Process-DEVS, with support for sensors and actuators. The solutions presented in
these three sections suggested patterns for structuring processes with
interesting modularity properties. Section 4.4 informally discussed these patterns,
leading to interesting conclusions that still need further experiments to be fully
validated.
In particular, the contribution of Section 4.2.3 is worth highlighting. It
presented a formal way to compose cell space models out of smaller and
independent cell space models. The separation of behavior and physical state, as
well as the modular nature of POS, were essential to accomplish closure under
composition in this case.
Chapter 5 described the implementation of two applications as part of the
InfoPAE system. These applications were implemented on top of Process-DEVS,
which allowed such a high level of modularization that almost the entire
simulation model could be successfully reused by the two applications. That
simulation model successfully incorporated well established formalisms, such as
workflows and cell-space processes, while separating physical state and behavior,
allowing it to represent the physical objects of the environment directly in the
form of scene graphs. That capability allowed the use of formal simulation models
together with gaming techniques such as fast 3D rendering pipelines. Besides
allowing the integration of independent models from different formalisms,
Process-DEVS also allowed a modular definition of oil behavior, where each
different aspect was implemented as an independent type of process.
Section 5.3.1 presented an interesting way to integrate cell- and vector-
based models that work on the same data through the use of environment views,
which were essential for isolating the logic of the dynamic models from the
internals of the environment.
The StableFpsLoop, presented in Section 5.5.2, provided a way of managing
time in the presence of simulation processing peaks and when the simulation
speed can be accelerated to the limits of the processing capacity. This technique
showed how to keep control of the frame rate under these circumstances and how
to keep the simulation time correctly synchronized when there is enough
processing capacity for that.
Even though this thesis is focused on training games, there seems to be
nothing that prevents using POS or Process-DEVS in the design of entertainment
games. This conclusion is based on the fact that the rendering procedure may
remain totally untouched by the simulation logic. Furthermore, the technique of
adjusting the frequency of periodic processes, discussed in Section 5.3.2, can be
used to optimize simulation performance.
In summary, the major contributions of this thesis were:
- A discussion on the requirements for dynamic modeling in the
context of training games
- The conception of the process-oriented simulation (POS) paradigm
as a consequence of the discussion
- The materialization of POS in a DEVS-based formalism named
Process-DEVS
- Mapping a workflow representation to Process-DEVS
- A framework for modeling cell space processes on top of Process-DEVS
with composition capabilities that preserve the individual
independence of the sub-models
- A framework for modeling multi-agent systems on top of Process-DEVS
- The development of a planning system and a training game for the
InfoPAE system as a use case of Process-DEVS
- A technique to enable game loops to handle variable game speeds
and simulation processing peaks
6.2 Future Work
As for future research, we may suggest:
- Using the POS paradigm for modeling and simulation of real
systems (outside the gaming domain). It would be interesting to see
POS applied to traditional modeling and simulation problems, which
would allow a more detailed comparison with OOS and AOS.
- Extending Process-DEVS to cover the simulation of continuous
processes. In the line of hybrid systems, mentioned in Section 3.2.1,
it would be interesting to give Process-DEVS the ability to define
processes by differential equations. One possibility is to implement
it based on the DEV&DESS formalism (Zeigler et al. 2000) instead
of pure DEVS.
- Providing more user-friendly languages for defining processes. Just
as in the Jason and SeSam toolkits, described in Section 2.3,
Process-DEVS could be equipped with a high-level language
allowing non-specialized users to define new kinds of processes.
- Providing support for more specific workflow patterns and
standards. In Section 4.1, only the basic workflow patterns were
considered. More complex patterns, such as loops, could also be
formalized on top of Process-DEVS. Moreover, integration with the
workflow standards currently used by business process management
(BPM) systems would certainly be useful for corporations.
- Evolving the discussion on process patterns. More experience with
different kinds of simulations could lead to the consolidation and
formalization of the patterns described in Section 4.4. It could also
lead to the detection of other interesting process patterns.
- Using Process-DEVS for other types of games, including
entertainment games. It would be interesting to see Process-DEVS
in a game with top-quality graphics using formal simulation
methods. This could also include the integration of Process-DEVS
with the physics simulation of current game engines.
- Using Process-DEVS in a multi-player network game. In the
InfoPAE training game, all players used the same user interface. The
possibility of using different player configurations could be
investigated in greater detail with multi-player network games.
Additionally, increasing the simulation speed in a multi-player
network game would raise new problems, such as the impact of
network delays.
- Modeling information flow for the InfoPAE simulation. The first
actions of contingency plans in the InfoPAE system usually include
alerting, reporting, evaluation and mobilization. In order to simulate
these actions, it would be necessary to model multiple actors, the
information they have about the situation and the communication
among them. That would make the simulation more detailed and
realistic.
7 References
Akyildiz, I., Su, W., Sankarasubramaniam, Y., Cayirci, E. (2002) "A survey on
sensor networks". IEEE Communications Magazine, 40(8):102–114.
Balci, O., Arthur, J. D., Nance, R. E. (2008) "Accomplishing reuse with a
simulation conceptual model". In Proceedings of the 40th Winter Simulation
Conference, 959-965
Bandini, S., Manzoni, S., Vizzari, G. (2009) "Crowd Behavior Modeling: From
Cellular Automata to Multi-Agent Systems". Published in Multi-Agent Systems:
Simulation and Applications, by A.M. Uhrmacher and D. Weyns, chapter 10, pp.
301-324.
Banks, J., Carson, J., Nelson, B., Nicol, D. (2005) "Discrete-Event System
Simulation". Fourth Edition. Upper Saddle River, NJ: Prentice Hall.
Batty, M. (2005) "Cities and Complexity". The MIT Press, Cambridge, MA
Benjamin, P., Akella, K. (2009) "Towards ontology-driven interoperability for
simulation-based applications". In Proceedings of the 41st Winter Simulation
Conference.
Bigley, G. A., Roberts, K. H. (2001) "The Incident Command System: High-
Reliability Organizing for Complex and Volatile Task Environments". The
Academy of Management Journal, 44(6), pp. 1281-1299.
Blythe, J. (1999) "An Overview of Planning under Uncertainty". AI Magazine
20(2), pp. 37-54.
Bonabeau, E., Dorigo, M., Theraulaz, G. (1999) "Swarm Intelligence: From
Natural to Artificial Systems". Oxford University Press, Oxford, U.K.
Bordini, R. H., Hübner, J. F. (2009) "Agent-based simulation using BDI
programming in Jason". In A. Uhrmacher and D. Weyns (Eds.), Multi-Agent
Systems: Simulation and Applications, Taylor and Francis, pp 451-476
Carneiro, T. (2006) "Nested-CA: A Foundation for Multiscale Modelling of Land
Use and Land Cover Change". Doctorate Thesis from the Post Graduation Course
in Applied Computer Science, INPE - Sao Jose dos Campos, Brazil
Carvalho, M. T., Freire, J., Casanova, M. A. (2001) "The Architecture of an
Emergency Plan Deployment System". Proc. III Brazilian Symposium on
Geoinformatics, Rio de Janeiro, Brazil.
Cellier, F. E. (1986) "Combined Discrete/Continuous System Simulation -
Application, Techniques and Tools". In Proceedings of the 1986 Winter
Simulation Conference, SCS
Chinmoy S., Abbasi, S. (2006) "Cellular automata-based forecasting of the impact
of accidental fire and toxic dispersion in process industries". Journal of Hazardous
Materials, 137(1), pp. 8-30.
Coyne, M. E., Graham, S. R., Hopkinson, K. M., Kurkowski, S. H. (2008) "A
methodology for unit testing actors in proprietary discrete event based
simulations". In Proceedings of the 40th Winter Simulation Conference, 1012-
1019
Dalmau, D. (2003) "Core Techniques and Algorithms in Game Programming",
New Riders, Indianapolis.
de la Beaujardiere, J. (ed) (2006) "Web Map Service Implementation
Specification". Open Geospatial Consortium Specification 06-042.
http://www.opengeospatial.org/standards/wms
Deshpande, A., Göllü, A., Varaiya, P. (1997) "A Formalism and a Programming
Language for Dynamic Networks of Hybrid Automata". In Hybrid systems IV.
LNCS, Springer-Verlag.
Dykes, J., MacEachren, A. M., and Kraak, M.-J. (eds) (2005) "Exploring
Geovisualization". Elsevier, Amsterdam, the Netherlands.
Egenhofer, M., Franzosa, R. (1991) "Point-set topological spatial relations". In
International Journal of Geographical Information Systems, 5(2):161-174
Eker, J., Janneck, J. W., Lee, E. A., Liu, J., Liu, X., Ludvig, J., Neuendorffer, S.,
Sachs, S., Xiong, Y. (2003) "Taming heterogeneity - the Ptolemy approach". In
Proceedings of the IEEE, 91(2)
Erol, K. (1995) "Hierarchical Task Network Planning: Formalization, Analysis
and Implementation". Ph.D. thesis, Dept. of Computer Science, University of
Maryland.
Fikes, R. E.; Nilsson, N. J. (1971). "STRIPS: A new approach to the application
of theorem proving to problem solving". Artificial Intelligence, 2 (3-4)
Fingas, M. (2000) "The Basics of Oil Spill Cleanup, Second Edition". CRC Press,
Boca Raton, FL.
Forrester, J. (1972) "World Dynamics". Wright-Allen Press, Cambridge, MA.
Frasca, G. (2003) "Simulation versus Narrative: Introduction to Ludology". In:
Wolf & Perron (Eds.) The Video Game Theory Reader. Routledge.
Gamma, E., Helm, R., Johnson, R., Vlissides, J. (1995) "Design Patterns:
Elements of Reusable Object-Oriented Software". Addison-Wesley, Reading, MA
Gimblett, H. R., ed. (2002) "Integrating geographic information systems and
agent-based modeling techniques for simulating social and ecological processes".
Oxford: Oxford University Press.
Giunchiglia, F., Villafiorita, A., Walsh, T. (1997) "Theories of Abstraction". AI
Comm. 10 3-4, pp. 167-176.
Gonçalves, A. S., Rodrigues, A., Correia, L. (2004) "Multi-agent simulation
within geographic information systems". In: Proceedings of the 5th International
Workshop on Agent-Based Simulation, Lisbon, Portugal
Güting, R. H. (1994) "An introduction to spatial database systems". VLDB J. 3
(Oct.), 357–399
Himmelspach, J., Uhrmacher, A. M. (2007) "Plug’n simulate". In Proceedings of
the 40th Annual Simulation Symposium, pp 137-143. IEEE Computer Society.
IEEE (2000) "IEEE 1516". (Standard for Modelling and Simulation High Level
Architecture Framework and Rules)
Karafyllidis, I. (1997) "A model for the prediction of oil slick movement and
spreading using cellular automata", Environment International, Elsevier, 23:6, pp.
839-850.
Kautz, H. A. (1991) "A Formal Theory of Plan Recognition and its
Implementation". Published in Reasoning About Plans, by J.F. Allen et al, chapter
2, pp. 69-126, San Mateo, CA
Kesting, A., Treiber, M., Helbing, D. (2009) "Agents for Traffic Simulation".
Published in Multi-Agent Systems: Simulation and Applications, by A.M.
Uhrmacher and D. Weyns, chapter 11, pp. 325-356.
Kirmse, A. (ed) (2004) "Game Programming Gems 4". Charles River Media,
Boston, MA.
Klir, G. J. (1985) "Architecture of Systems Complexity". Sauders, New York.
Klügl, F., Puppe, F. (1998) "The multi-agent simulation environment SeSAm". In
Proceedings of Workshop “Simulation in Knowledge-based Systems”.
Klügl, F. (2009) "SeSAm: Visual Programming and Participatory Simulation for
Agent-Based Models". Published in Multi-Agent Systems: Simulation and
Applications, by A.M. Uhrmacher and D. Weyns, chapter 16, pp. 477-507.
Kolb, D. (1984) "Experiential Learning: Experience as the Source of Learning
and Development", Upper Saddle River: Prentice-Hall
Lee, E. A., Zheng, H. (2005) "Operational Semantics of Hybrid Systems". Invited
paper in Proc. of Hybrid Systems: Computation and Control (HSCC) LNCS 3414:
25-53, Zurich, Switzerland.
Metello, M., Vera, M., Lemos, M., Masiero, L., Carvalho, M.T.M. (2007)
"Continuous Interaction with TDK Improving the User Experience in Terralib".
In: Proc. IX Brazilian Symp. on GeoInformatics, Campos do Jordão, Brazil.
Metello, M., Casanova, M. A., Carvalho, M. T. M. (2008) "Using Serious Game
Techniques to Simulate Emergency Situations". In: Proc. X Brazilian Symposium
on GeoInformatics, Rio de Janeiro, Brazil.
Metello, M. G., Casanova, M. A. (2009) "Training Games and GIS". In: Gerhard
Navratil. (Org.). Research Trends in Geographic Information Science. Berlin:
Springer, p. 257-269.
Michael, D. R., Chen, S. L. (2005) "Serious games: games that educate, train, and
inform". Muska and Lipman/Premier-Trade
Michel, F., Ferber, J., Drogoul, A. (2009) "Multi-Agent Systems and Simulation:
A Survey from the Agent Community’s Perspective". Published in Multi-Agent
Systems: Simulation and Applications, by A.M. Uhrmacher and D. Weyns,
chapter 1, pp. 3-51.
Minar, N., Burkhart, R., Langton, C., Askenazi, M. (1996) "The swarm simulation
system: A toolkit for building multi-agent simulations". Technical Report 96-06-
042, The Santa Fe Institute, Santa Fe, NM.
Modarres, M. (2006) "Predicting and improving complex business processes:
values and limitations of modeling and simulation technologies". In Proceedings
of the 38th Winter Simulation Conference, 598-603
Nareyek, A. (2004) "AI in computer games". Queue 1(10):58-65. doi:
http://doi.acm.org/10.1145/971564.971593
North, M., Collier, N., Vos, J. (2006) "Experiences creating three
implementations of the repast agent modeling toolkit". ACM Transactions on
Modelling and Computer Simulation, 16(1):1–25.
Pantel, L. Wolf, L. C. (2002) "On the impact of delay on real-time multiplayer
games". In Proceedings of the 12th international Workshop on Network and
Operating Systems Support for Digital Audio and Video (Miami, Florida, USA).
Paynter, H. M. (1961) "Analysis and Design of Engineering Systems". MIT Press.
Cambridge, MA.
Percivall, G. (ed) (2003) "OpenGIS® Reference Model", Document number OGC
03-040, Version 0.1.3, Open GIS Consortium, Inc.
Perumalla, K. S. (2006) "Parallel and distributed simulation: traditional
techniques and recent advances". In Proceedings of the 38th Winter Simulation
Conference, 84-95
Piaget, J. (1992) "The Principles of Genetic Epistemology", New York: Basic
Books
Praehofer, H. (1991) "System Theoretic Formalisms for Combined Discrete-
Continuous System Simulation". Int. J. Gen. Sys. 19(3), 219-240
Praehofer, H., Auernig, F., Reisinger, G. (1993) "An Environment for DEVS-
Based Multi-formalism Modeling and Simulation". Discrete Event Dynamic
Systems: Theory and Applications 3, 119-149
Rao, A. S. (1996) "AgentSpeak(L): BDI agents speak out in a logical computable
language". In Proceedings of Modelling Autonomous Agents in a Multi-Agent
World, number 1038 in LNAI, pp 42–55. Springer Verlag.
Rao, A. S., Georgeff, M. P. (1992) "An abstract architecture for rational agents".
In Proceedings of the 3rd International Conference on Principles of Knowledge
Representation and Reasoning (KR’92), 439–449
Reuters (2011) "Factbox: A look at the $65 billion video games industry".
Available at: http://uk.reuters.com/article/2011/06/06/us-videogames-factbox-
idUKTRE75552I20110606 (Accessed: 26 July 2011)
Robinson, S. (2006) "Conceptual modeling for simulation: issues and research
requirements". In Proceedings of the 38th Winter Simulation Conference, 792-800
Röhl, M., Uhrmacher, A. M. (2008) "Definition and analysis of composition
structures for discrete-event models". In Proceedings of the 40th Winter
Simulation Conference, 942-950
Ryoo, S., Rodrigues, C., Baghsorkhi, S., Stone, S., Kirk, D., Hwu, W. (2008)
"Optimization Principles and Application Performance Evaluation of a
Multithreaded GPU using CUDA". In Proceedings of the 13th ACM SIGPLAN
Symposium on Principles and Practice of Parallel Programming, ACM Press, 73–
82.
Sacerdoti, E. (1977) "A Structure for Plans and Behavior". American Elsevier,
New York.
Sánchez, P. J. (2006) "As simple as possible, but no simpler: a gentle introduction
to simulation modeling". In Proceedings of the 38th Winter Simulation
Conference, 2-10
Sargent, R. G. (2009) "Verification and validation of simulation models". In
Proceedings of the 41st Winter Simulation Conference, 130-143
Sarjoughian, H., Kim, S., Ramaswamy, M., Yau, S. (2008) "A simulation
framework for service-oriented computing systems". In Proceedings of the 40th
Winter Simulation Conference, 845-853
Schneider, M., Guthe, M., Klein, R. (2005) "Real-time rendering of complex
vector data on 3d terrain models". In Proceedings of the 11th International
Conference on Virtual Systems and Multimedia, pp. 573–582.
Sheldon, L. (2004) "Character Development and Storytelling for Games". Premier
Press, Boston.
Smed, J., Kaukoranta, T., Hakonen, H. (2002) "Aspects of Networking in
Multiplayer Computer Games". The Electronic Library, Volume 20, Number 2,
2002, pp. 87-97(11)
Smith, R. (2007) "Game Impact Theory: Five Forces That Are Driving the
Adoption of Game Technologies within Multiple Established Industries". Games
and Society Yearbook
Sowa, J. (2000) "Knowledge Representation: Logical, Philosophical, and
Computational Foundations". Brook/Cole, a division of Thomsom Learning:
Pacific Grove, CA
Sowizral, H. (2000) "Scene Graphs in the New Millennium". Vision 2000.
January/February 56-57.
Strauss, P. S., Carey, R. (1992) "An object-oriented 3D graphics toolkit".
SIGGRAPH Comput. Graph. 26(2):341-349. doi:
http://doi.acm.org/10.1145/142920.134089
Susi, T., Johannesson, M., Backlund, P. (2007) "Serious Games - An Overview".
Technical Report HS-IKI-TR-07-001, School of Humanities and Informatics,
University of Skövde, Sweden
Theodoropoulos, G. K., Minson, R., Ewald, R., Lees, M. (2009) "Simulation
Engines for Multi-Agent Systems". Published in Multi-Agent Systems:
Simulation and Applications, by A.M. Uhrmacher and D. Weyns, chapter 3, pp.
77-105.
Troitzsch, K. G. (2009) "Multi-Agent Systems and Simulation: A Survey from an
Application Perspective". Published in Multi-Agent Systems: Simulation and
Applications, by A.M. Uhrmacher and D. Weyns, chapter 2, pp. 53-75.
Tumay, K. (2006) "Business Process Simulation". In Proceedings of the 28th
Winter Simulation Conference, 93-98
Uhrmacher, A. M. (1997) "Concepts of Object- and Agent-Oriented Simulation".
In: Transaction on SCS, Vol. 14(2), 56-67
Uhrmacher, A. M. (2001) "Dynamic Structures in Modeling and Simulation - A
Reflective Approach". ACM Transactions on Modeling and Simulation, 11(2):
206-232.
Uhrmacher, A. M., Swartout, W. (2003) "Agent Oriented Simulation" In M.
Obaidat and G. Papadimitriou (Eds.), Applied System Simulation. Kluwer
Academic Press.
Valente, L., Conci, A., Feijo, B. (2005) "Real time game loop models for single-
player computer games". In Proceedings of the IV Brazilian Symposium on
Computer Games and Digital Entertainment, 89–99.
van der Aalst, W.M.P.; ter Hofstede, A.H.M.; Kiepuszewski, B.; Barros, A.P.
(2003) "Workflow Patterns". Distributed and Parallel Databases, 14(1): 5-51(47).
van Deursen, W.P.A. (1995) "Geographical Information Systems and Dynamic
Models". Ph.D. thesis, Utrecht University, NGS Publication 190, 198 pp.
Electronically available through www.carthago.nl
Vangheluwe, H. L. (2000) "DEVS as a common denominator for multi-formalism
hybrid systems modelling". IEEE International Symposium on Computer-Aided
Control System Design, pp. 129-134, Anchorage, Alaska
von Neumann, J. (1966) "Theory of Self-Reproducing Automata". Edited and
completed by A. W. Burks. University of Illinois Press, Urbana, IL
Wagner, G., Nicolae, O., Werner, J. (2009) "Extending discrete event simulation
by adding an activity concept for business process modeling and simulation". In
Proceedings of the 41st Winter Simulation Conference, 2951-2962
Wainer, G., Giambiasi, N. (2001) "Timed Cell-DEVS: modelling and simulation
of cell spaces", H. Sarjoughian, F. Cellier Eds., Springer-Verlag
Weske, M. (2007) “Business Process Management - Concepts, Languages,
Architectures”, Springer.
Westera, W., Nadolski, R. J., Hummel, H. G. K., Wopereis, I. G. J. H. (2008)
"Serious Games for Higher Education: a Framework for Reducing Design
Complexity". Journal of Computer Assisted Learning, 24: 420-432.
Zeigler, B. (1972) "Toward a formal theory of modeling and simulation: Structure
preserving morphisms". Journal of the ACM (JACM), 19(4):742–764. ISSN
0004-5411. doi: http://doi.acm.org/10.1145/321724.321737.
Zeigler, B. P., Praehofer, H., Kim, T. G. (2000) "Theory of Modeling and
Simulation". Academic Press: San Diego, CA.