
Universiteit van Amsterdam

Master Thesis

SAGE: A simple affective game engine

Author:Sorin Alexandru Popescu

Supervisors:Joost Broekens (TU Delft)

Maarten van Someren (UvA)

August 16, 2012


Abstract

In the past two decades, interest in artificially generated emotions and affective computing has increased steadily and there have been many attempts at developing systems that generate emotions. This paper introduces an affective computing engine for games that can make life easier for programmers of multi-agent systems when adding emotions to their artificial agents. The engine was developed with simplicity of use, portability and a good theoretical base as its core benefits. We describe the psychology theories this engine is based upon, describe the implemented functionality and show how it relates to games and the entertainment industry in general. We also present our experiments, including emotional situations, experimental games and performance tests.


Contents

List of Figures

List of Tables

1 Introduction

2 Previous Work
  2.1 Emotion Theories
  2.2 Cognitive Appraisal Theories
    2.2.1 Frijda's theory
    2.2.2 Scherer's Sequential check theory
    2.2.3 Ortony, Clore and Collins
    2.2.4 Section Conclusions
  2.3 Computational Models of Emotion
    2.3.1 ACRES
    2.3.2 Affective Reasoner
    2.3.3 Em
    2.3.4 EMA
    2.3.5 MAMID
    2.3.6 Section Conclusions
  2.4 Affective Gaming
    2.4.1 "Day of Reckoning"
    2.4.2 "A Framework for Emotional Digital Actors"
    2.4.3 "Procedural Quests"
    2.4.4 "Prom Week"
    2.4.5 Section Conclusions

3 Simple Affective Gaming Engine (SAGE)
  3.1 Game World
  3.2 Goals
  3.3 Beliefs
  3.4 Emotions
    3.4.1 Internal Emotions
    3.4.2 Social Emotions
    3.4.3 The emotional brain
    3.4.4 Pleasure-Arousal-Dominance
  3.5 Output

4 Evaluation of the engine
  4.1 A solid foundation
  4.2 Case Studies
    4.2.1 Classical Games
    4.2.2 Example games
  4.3 Performance Tests

5 Conclusion


6 Future Work

Bibliography

Appendix A Emotion generation examples

Appendix B EmoBrain class reference
  B.1 Constructor
  B.2 Public Member Functions
  B.3 Static Public Attributes
  B.4 Properties


List of Figures

1  Afferent and efferent links of the elements in the appraisal process with associated cognitive structures and peripheral systems (Frijda, 1986)
2  Afferent and efferent links of the elements in the appraisal process with associated cognitive structures and peripheral systems (Scherer, 2001)
3  Global structure of emotion types (Ortony et al., 1988)
4  A history of computational models of emotion (Marsella et al., 2010)
5  The Affective Reasoner: processing stages and related representations (Elliott, 1992)
6  The Em Architecture (Reilly, 1996)
7  EMA (Gratch and Marsella, 2004)
8  Schematic illustration of MAMID trait/state modeling approach and architecture (Hudlicka, 2002)
9  Diagram representing the flow of information between different entities and structures in an emotional brain
10 The Pleasure-Arousal-Dominance dimensions with representations of temperaments
11 The main screen of Pac-Man
12 The main screen of Populous
13 The main screen of Super Mario Kart
14 Screenshots from the RPG implementation
15 Screenshots from the RTS implementation
16 The Finite State Machine of an RTS unit
17 Screenshots from the RTS implementation
18 Timing various populations of agents
19 Performance test on a population of agents. Negative pleasure in red, positive pleasure in green

List of Tables

1  EMA emotion categorization and intensity rules
2  Examples of defined goals
3  Examples of incoming beliefs
4  Internal emotions appraisal mechanism
5  Social emotions appraisal mechanism
6  The octants of temperament space in PAD (Mehrabian, 1996)
7  The axes of temperament space in PAD (Mehrabian, 1996)
8  Scores for some emotions as defined in (Mehrabian, 1996)
9  Games implemented as examples
10 Time values for the performance tests


1 Introduction

The "quest" to identify and classify emotions has kept humans busy since ancient times. Ancient philosophers were the first to question and try to understand the nature of emotions while attempting to classify them. Philosophers were followed by psychologists, and in particular cognitive psychologists, who see emotions as vital not only for our survival but also for social interaction. One of the views on human emotions today is that basic emotions appeared during evolution to protect the individual from events by providing a quick communication channel to the brain (Fox, 2008).

During the last few decades, Artificial Intelligence researchers started studying emotions with the intent of using them to create more human-like virtual agents or robots. It is a difficult task, because emotion theories lack the level of detail needed in a computational framework, and this is where the difficulty in applying them comes from. Fortunately, several theories have been described in such a way that they can provide a solid basis for a computational model of emotion.

Games, including serious games, are an important research area for emotional theories and experiments. They sit somewhere between A.I. as science and entertainment: they have parts of both but are also quite different from each of them. In games, agents use some A.I. techniques, but not with the purpose of being the best possible player. The purpose of a game agent is to offer the player an immersive and fun experience, and the ideal agent is one of equal skill to the player. On the other hand, even if a game is an entertaining experience, it is an interactive one, and thus different from other entertainment media, because the player will actually interact with the game agents and expect them to act and respond according to the situation. The player is willing to lower or suspend disbelief by playing the game, but if the game is not up to the challenge, the player will lose interest very quickly.

At the same time, even if hardware has improved a lot in the last 15 years, games have used it to improve graphics or to enlarge the game world but, to date, they generally allocate few resources to A.I. and behavior computations. Several games, including state-of-the-art ones, still use scripting as the main agent "intelligence" system and hardcode as much as possible, including map interest points, behavior, dialogue and more.

We, as well as other researchers (Reilly, 1996; Bartneck, 2002; Hudlicka, 2009), think that games would be helped by using a computational model of emotions to generate more variation in the behaviour of NPCs (non-playing game characters) while keeping computational expense low. Our goal is to develop a computational model of emotions for game agents that is as similar as possible to human emotional models but has the simplicity needed to perform in a game. The game will provide the information about the agents and the game world, such as goals of agents and beliefs of agents about the surrounding world. After the computational step, the game will be able to query for the emotional state of each agent and use that to express emotion or to affect the behaviour of the agents. Most games will probably only use a few basic emotions, such as Joy, Distress, Hope or Fear. Others will rely on more complex emotions involving social behaviour and emotions towards other agents, such as Anger, Gratitude or Pity. We include various types of emotions to offer games the possibility to model a wide range of situations and behaviours by relying on psychological theories of emotion. The theory we choose to model emotion has to be similar to the way games work, has to accept inputs that are easily available in a game environment and has to give back information that can be easily used in games. The information that a game can retrieve from the engine can be used in two different ways: to express emotions (the character would look stressed or joyful or would say specific lines) and to alter the behaviour of the character (run away when scared or start shooting when angry). We also have a look at previous computational models of emotion that were not developed specifically for games and highlight features we can use to develop our game engine.

The advantage of such an engine would be that the game developer doesn't have to specify all possible combinations of emotion outcomes per agent and event. From the game developer's point of view this is comparable to the use of a physics engine, used by the game to model the physical world through object masses and forces. Of course, one of the challenges is to find the correct tradeoff between included emotions and features on one side and keeping the engine simple to use and understand on the other. If the model becomes too complicated it can prove too hard for developers to understand and too heavy, from a computational point of view, to be used in a game. If the model is too simple then the resulting improvements in the game could prove to be negligible and simpler techniques would yield the same results.

Our engine needs to have the following important traits:

• Theoretically Solid: the framework has to be based on accepted psychological theories of emotions

• Generic: it will be pluggable into several types of games and agent architectures

• Efficient: it will have an emphasis on performance and will be able to run in real-time with as many agents as possible

The engine should be an external library that is pluggable into new or existing games with minimal code needed from the game developer to make it work. A fourth requirement would be to keep the public interface as Simple as possible so it does not burden the game developer with information he doesn't need. The public interface of the engine library should be as small as possible and obvious to use. We argue that cognitive appraisal is the most appropriate way to represent the emotional brain of game artificial agents (non-playing characters) and we built the engine with that in mind.

The remainder of the paper is structured as follows: section 2 (Previous Work) offers an overview of previous research related to emotions, appraisal and computational models; section 3 (Simple Affective Gaming Engine (SAGE)) illustrates the technical implementation of our emotional engine; section 4 (Evaluation of the engine) describes the experiments we developed to test the traits mentioned above; section 5 (Conclusion) concludes the paper; and finally section 6 (Future Work) enumerates some possible future related research topics.

2 Previous Work

2.1 Emotion Theories

Psychology researchers have attempted to formulate Emotion Theories, more or less detailed (Frijda, 1986; Ortony et al., 1988; Scherer, 2001) and sometimes seen from quite different points of view. The purpose of these theories is to explain what emotions are and how they work, why we have emotions, how they are elicited, how they relate to memory, cultural background, social environment, etc., by using experimental methods. Researchers have four types of evidence they can use to support their theories (Ortony et al., 1988):

• The language of emotions: words people use to define emotions or emotional situations. This is a rather risky type of evidence as it might vary a lot across different languages, becoming hard to rely on

• Self reports: emotions are subjective concepts and the raw emotion can only be felt and described by the person experiencing it

• Behavioral evidence: by observing others we can see their external response, although it is a higher-level signal and, as such, not always reliable

• Physiological: some emotion effects are not easy to control or contain and they are observable in almost any person. An example is blushing when ashamed

There are several such theories, and three main perspectives on emotion theory (Hudlicka, 2011):

• Discrete: define specific "basic" emotions or emotion categories (Ekman, 1992)

• Dimensional: define emotions in an n-dimensional space, where each point is an emotion (Mehrabian, 1980; 1995)

• Componential: define emotions by studying their components (Ortony et al., 1988; Scherer, 2001)

We will describe a few of the existing theories, what they are based on and how they work, in the following section. These theories are some of the ones we have taken into account for our research.

2.2 Cognitive Appraisal Theories

Some researchers have reached the conclusion that emotions must be generated by a cognitive evaluation of the situation in terms of whether that situation is conducive or not in the context of personal relevance and wellbeing. Such theories are called "Cognitive Appraisal Theories" and support the idea that emotions are generated by a cognitive process: a person constantly assesses what is happening in the world around them, evaluates these events in terms of how relevant they are and how they affect the person and others, and an emotion emerges from this process. Most appraisal theories are componential, because the componential model makes it easier to directly relate and explain the influence of goals and external events on the agent's emotions. Here are a few examples:

2.2.1 Frijda’s theory

Frijda (Frijda, 1986) puts the "concern" at the center of his theory: the disposition of a person to prefer certain states of the world to others. It might be loosely connected to other theories' concept of "goal", although it is not exactly the same. According to this model, an emotion appraisal system must contain the following components:


Figure 1: Afferent and efferent links of the elements in the appraisal process with associated cognitive structures and peripheral systems (Frijda, 1986)

• Analyzer: takes the input event and attempts to categorize it, if possible, in terms of known event types and tries to evaluate possible causes or consequences

• Comparator: evaluates the relevance of the event relative to the agent's concerns

• Diagnoser: evaluates the coping possibilities

• Evaluator: computes urgency, difficulty and seriousness based on the previous information

• Action proposer: generates an action plan based on all the gathered and computed information

• Physiological change generator: effects physical change based on the emotion type and urgency

• Actor: chooses the next actions based on the decided plan

This theory doesn't really deal with the way opposing emotions cancel each other (for example joy and distress) or with the way multiple emotions are aggregated into an emotional state, and it doesn't need to. Frijda's theory is a componential one and because of that it has the advantage of keeping the emotional state more abstract. It does not attempt to define categories of emotions nor label them using common language.


2.2.2 Scherer’s Sequential check theory

Klaus Scherer (Scherer, 1987; 2001) formulated a componential theory of emotion that would define a step-based appraisal system. He argued that cognitive appraisal does not happen all at once and there are sequential steps that influence each other:

1. relevance: as a first step, the person will evaluate whether the event deserves any attention. how unexpected is the event? does it result in pleasure or pain? does it influence the person's goals?

2. implications: what are the causes and what is (or will be) the outcome? will the consequences influence the person's goals? how urgently does this situation need addressing?

3. coping potential: does the person have any control over the situation? if so, how easy is it to fix?

4. normative significance: how does the event relate to my standards? how does it relate to social norms and values?

Figure 2: Afferent and efferent links of the elements in the appraisal process with associated cognitive structures and peripheral systems (Scherer, 2001)

Figure 2 shows a schematic view of the appraisal process proposed by Scherer and the links between cognitive structures and peripheral systems:

• NES: neuro-endocrine system

• ANS: autonomic nervous system

• SNS: somatic nervous system

2.2.3 Ortony, Clore and Collins

In the Ortony, Clore and Collins theory, now known as OCC (Ortony et al., 1988), the authors define 22 emotions (later 24) that can be evaluated based on a number of components such as desirability for self, desirability for other, likelihood or praiseworthiness. They define the listed emotions as emotion types and reject the idea of basic emotions. Furthermore, the OCC appraisal evaluates the event in three parts (steps):

• evaluate the event itself: is it desirable? what are the consequences?


• evaluate attribution (the responsible agent): are his actions approved/disapproved?

• evaluate the object of the event: do I like/dislike it?

Figure 3: Global structure of emotion types (Ortony et al., 1988)

For example, if a friend gives my brother two tickets to a match I am interested in, the event would result in a set of emotions caused by the appraisal process:

• I know the event is desirable for my brother, so a happy-for emotion is triggered

• I know there is a possibility that my brother would take me with him to see the match, hence this results in hope

• The action of giving a gift is praiseworthy according to my standards, and so there will be admiration towards the friend

• The tickets are something I like because I am interested in that particular match

The authors don't explain how these different emotions would combine, except for very few cases: if my brother decides to invite me to the match, then my joy combined with the admiration will result in gratitude towards both of them. Figure 3 shows these types of emotions and the structure put in place by the OCC theory.


2.2.4 Section Conclusions

In this section we have introduced a few of the appraisal theories that have been formulated in the past decades. They are psychological theories that attempt to define emotions and explain how they are elicited by relying on human cognition. The OCC theory is the one we decided to use in our research and experiments, for multiple reasons: it is a well-known and accepted theory of emotions, it is a componential model of emotion that fits the needs of a computational framework, its components are generic enough to allow for a wide set of emotions, and it accounts for both internal emotions and social relationships, which are quite important in games. In the next section, we talk about some of the computational models based on psychology theories.

2.3 Computational Models of Emotion

The research regarding computational models of emotion has seen quite an increase recently, especially during the last few years. Computational models are usually based on psychological emotion theories and attempt to model the theory in a way that is usable by artificial agents. Figure 4 presents a few of the computational models developed so far, along with their connections to theories or other models. We will briefly introduce some of them that are relevant to the current research.

Figure 4: A history of computational models of emotion (Marsella et al., 2010)


2.3.1 ACRES

ACRES (Frijda and Swagerman, 1987) is a computational model based on Frijda's theory of appraisal (Frijda, 1986). It defines the system as having the same components as Frijda's theory, being built around "concerns" and with the same steps for appraisal. ACRES will check continuously for matches between the received information and the concerns to find out whether anything that has happened is relevant. Then, for the relevant parts, it computes the rest of the components: coping variables, urgency, seriousness, difficulty, etc. It also has a powerful action plan generator that combines actions from a set of predefined ones to form a variety of possible plans.

2.3.2 Affective Reasoner

Figure 5: The Affective Reasoner: processing stages and related representations (Elliott, 1992)

The "Affective Reasoner" (Elliott, 1992) is a computational model of emotion based on the OCC appraisal theory (Ortony et al., 1988) which models all of the twenty-four emotion types defined in OCC. The appraisal process is based on goals, standards and preferences and results in one or more emotions. The system goes further to decide which actions the agent can take, based on the resulting emotions. The components considered for appraisal are:

• Desirability for self

• Desirability for another agent

• Pleasingness (pleased/displeased about an event)

• Status (unconfirmed/confirmed/disconfirmed)

• Approval (approve/disapprove an action)


• Responsible agent (self or other)

• Appealingness (like/dislike an object)

The system also maintains, for each agent, a set of representations of other agents, that is, what agent X thinks agent Y is feeling right now. This is a feature used to diversify the response. The emotions elicited at a certain time are not eliminated nor aggregated in the "Affective Reasoner", as the system is focused on the output actions instead. The action generation step includes an exclusion mechanism to eliminate any possible incongruent behavior before deciding on the action to take.

2.3.3 Em

"Em" (Reilly, 1996) is a computational model based mostly on the OCC theory. The emotion generation is based on a set of predefined rules, such as: "when an agent has a goal failure and the goal has importance X, generate an emotion structure of type distress and with intensity X". An emotion will then have a type and an intensity. These emotions are then stored in a hierarchical structure based on the specificity of the emotion: a generic distress emotion will be higher in the hierarchy while more specific ones, such as grief, will be a subtype of distress. At this point each subtree will result in an emotion type with a certain intensity based on all subtypes in the subtree. This is done using a set of "Emotion Combination Functions". These intensities will decay over time as specified by the "Emotion Decay Functions". The two sets of functions are left by the author to the application using the system, and as such they can be defined as parameters. In the end, a "Behavioral Feature Map" will map resulting emotions to behavioral features and send them back to the system. Figure 6 shows the architecture of the "Em" system, including the components and the communication between them. Later (Reilly, 2006), Reilly provides further details of the system, specifying more about how emotions are stored, how their intensity is determined and how emotions are combined. Here, he describes some interesting features:

• Expectedness: in order to model expectedness, he computes the initial intensity based not on the likelihood of the latest event, but on the difference between likelihoods. For example, if an event had a probability of 0.9 of happening at a previous step and then it happens, it will trigger a lower intensity emotion than a totally unexpected event, even if at the latest step they both have likelihood 1

• Bias against failure: as humans have higher intensity emotions for failure than for success, he introduces desirability and undesirability as two separate variables for each goal

• Different decay per emotion type: this allows "hope" and "fear" to be treated differently, decaying more slowly than other emotions while the event is still uncertain and instantly when the event becomes clear

• Combining emotions: Em uses a logarithmic function to combine and normalize emotions. As opposed to a sigmoidal one, this ensures that the combination is closest to linear for lower values and less additive as values increase (see the sketch below)
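To illustrate the general idea of such a logarithmic combination (the specific formula below is an assumption made for illustration only, not Reilly's actual Emotion Combination Function), a combiner could look like this:

import math

def combine_log(intensities, scale=1.0):
    # Illustrative logarithmic combination of several same-type emotion intensities:
    # roughly linear for small values, increasingly sub-additive for large ones.
    # The exact formula is an assumption, not the one used in Em.
    return scale * math.log1p(sum(intensities))   # log(1 + sum)

print(combine_log([0.1, 0.1]))   # ~0.18: weak emotions combine almost additively
print(combine_log([0.9, 0.9]))   # ~1.03: strong emotions are damped well below 1.8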

Figure 6: The Em Architecture (Reilly, 1996)

2.3.4 EMA

EMA is a "computational model of appraisal and coping", as defined by its authors (Gratch and Marsella, 2004), which was inspired by the "Affective Reasoner" (Elliott, 1992) and, as a consequence, by the "OCC" (Ortony et al., 1988) appraisal theory. The appraisal is based on a few variables (components of the appraisal process):

• Relevance: how significant is an event for the agent

• Desirability: does the agent want this to happen or not

• Likelihood: is it a surprise or an expected event

• Causal attribution: who is responsible for the event

• Controllability: does the agent have any control over the event

• Changeability: does the agent have any power to change the outcome (together with Controllability, these two variables refer to the ability of coping)

EMA only represents six emotions out of the 24 described in OCC (Hope, Joy, Fear, Distress, Anger and Guilt) and they are described by very simple rules. The authors also introduce an interesting concept of frames, which makes certain emotions lose focus over time and come back into focus if a similar event happens or the agent simply remembers and thinks about the event that caused it. A "mood" structure is used to aggregate multiple emotions: all current emotions are added up by type and the results are passed through a sigmoid function in order to normalize them. The mood also creates a bias towards certain emotions rather than others. To evaluate coping strategies, EMA implements a planning mechanism that works closely together with the appraisal to identify the best coping strategy and modify the resulting emotions accordingly.


Figure 7: EMA (Gratch and Marsella, 2004)

Emotion   | Appraisal configuration                       | Intensity
Hope      | Desirability(p) > 0, Likelihood(p) < 1        | Desirability(p) × Likelihood(p)
Joy       | Desirability(p) > 0, Likelihood(p) = 1        | Desirability(p) × Likelihood(p)
Fear      | Desirability(p) < 0, Likelihood(p) < 1        | |Desirability(p) × Likelihood(p)|
Distress  | Desirability(p) < 0, Likelihood(p) = 1        | |Desirability(p) × Likelihood(p)|
Anger     | Desirability(p) < 0, Causal(q) = blame        | |Desirability(p) × Likelihood(p)|
Guilt     | Desirability(q ≠ p) < 0, Causal(p) = blame    | |Desirability(q) × Likelihood(p)|

Table 1: EMA emotion categorization and intensity rules

2.3.5 MAMID

MAMID (Hudlicka, 2002) is a computational model created by Eva Hudlicka and significantly different from the other models introduced in this section. MAMID focuses not only on modeling emotions but also on modeling differences between agents (traits) as well as possible. Because of that, it represents individual differences of three different types: cognitive, affective and personality.

Figure 8: Schematic illustration of MAMID trait/state modeling approach and architecture (Hudlicka, 2002)

Figure 8 shows the MAMID architecture with, in the right column, the cognitive components:

• Attention: filters incoming information for relevance

• Situation Assessment: evaluates the current situation in terms of known high-level situations (basically the agent will "recognize" the situation)

• Expectation Generation: evaluates the possible outcomes of the current situation

• Affect Appraiser: gathers the previous information and outputs an affective state for the agent. The emotion representation is both categorical (anxiety/fear, anger/aggression, negative affect (sadness, distress) and positive affect (joy, happiness)) and dimensional (valence and arousal). This module outputs all of the above and it is up to the following modules to use that information consistently

• Goal Manager: modifies the current goals according to the affective state

• Action Selection: selects the actions the agent is going to perform

An important feature of this model is that each of the above steps contains individual differences that make agents more or less different from other agents.

2.3.6 Section Conclusions

We have presented a few of the computational models of emotion that were developed for multi-agent systems and explained what they rely upon and how they work. The "Affective Reasoner" appears as the most complete model built on top of the OCC theory. It implements all possible emotions, including social interaction, and it goes further to include inside each agent a representation of all the other agents, which should enhance the social interaction by allowing agents to speculate about other agents' feelings and goals. We think this internal representation would be too much for a game engine and adds too much overhead and complexity with no obvious benefit. Besides that, there are other design choices we don't want to include in our model: AR defines qualitative variables and components, so the emotions are either present or not and don't have an intensity; EMA implements only six OCC emotions and includes a coping module which we don't want. All of them develop frameworks that combine the emotional side with reasoning and action selection. Instead, we are aiming for a standalone model of emotions that can be independent from the reasoning part, because that is the only way we can make our engine easily pluggable into different games. In the next section we will investigate some of the existing affective models created specifically for games.

2.4 Affective Gaming

Several researchers have undertaken the challenge of implementing emotions for computer agents (Frijda and Swagerman, 1987; Elliott, 1992; Reilly, 1996; Guoliang et al., 2006; Liu, 2009), of defining a more abstract computational framework for emotions (Gratch and Marsella, 2004; Broekens et al., 2008) or of defining what the guidelines for this sort of framework should be (Hudlicka, 2011). Even if there has been great progress in the field and the state of affairs now is much better than 15 years ago, the industry still lacks standards in this area or a generic reusable engine that could be used for emotional agents.

One of the areas that would greatly benefit from using emotion computation is games (including serious games). According to research such as Sweetser et al. (2003), players appreciate human opponents for their ability to adapt and learn, but also to socialize. In order to please this type of player, games should try to improve their imitation of human behavior. Very few do so, because of development time or other restrictions, and they would rather develop tricks or workarounds to give the player an illusion of something similar to human behavior.

On the other hand, games need to obey very strict performance constraints, sometimes allowing only one millisecond per frame for AI calculations. There have been a few games that simulated a certain level of human-like emotional behavior (Byl, 2004), such as "Creatures" by Cyberlife Technology (Grand et al., 1997; Grand, 2000), "The Sims" series by Electronic Arts or "Black & White" by Lionhead Studios (Wexler, 2002), but to date there are no emotional engines that can be used in a variety of games as a simple pluggable black box. A few papers that explain the need for this and give guidelines for the development have been published (Hudlicka, 2009; Hudlicka and Broekens, 2009; Hudlicka, 2011) and they emphasize the importance of affective gaming to "support the development of socially complex and affectively realistic games". In the following paragraphs we take a look at some of the attempts at implementing a framework for emotions in games.

2.4.1 "Day of Reckoning"

In her book about believable characters in games (Byl, 2004), Penny Baillie-De Byl creates an example game where the agents emulate emotions to be more believable. The model implemented in this game is based on the work by Smith and Ellsworth (Smith and Ellsworth, 1985), which proposes six appraisal dimensions: pleasantness, responsibility, certainty, attention, effort and control. For the output the game uses the basic emotions of Ekman (Ekman, 1992): happiness, sadness, anger, fear, disgust and surprise. The implementation is based on a neural network with six input nodes, seven hidden nodes and six output nodes that takes as input the value of each appraisal dimension and outputs six flags (one per emotion), of which only one is set to true. The result is a very simplistic agent that cannot have any complex emotion and cannot have two emotions at the same time. Additionally, the represented emotions are the most basic ones. This might work for very simple games but is not enough to build a complex character.

2.4.2 "A Framework for Emotional Digital Actors"

Phil Carlisle proposed a simple framework (Carlisle, 2010) used to create emotional agents for games. This framework implements personality using the OCEAN model of personality (McCrae and Costa, 1996), which has the potential to give each agent a unique personality that will later affect its behaviour. As far as mood goes, this framework uses a one-dimensional representation along the pleasure-displeasure axis, where positive values represent pleasure. The appraisal part is built on a simplified version of the OCC model, which does not elicit the specific emotion types defined by OCC. Instead, it focuses on the types of emotions (related to events, agents or objects) and defines a single component for each of them: desirability for events, and attraction for agents and objects. This offers advantages in computation time but reduces the amount of available emotional consequences and behaviour. The downside of this approach is that many specific emotions, such as Gratitude or Pity, cannot be generated at all. For most games this is enough, but the author also admits that more complex social emotions are to be desired for a truly realistic experience.

2.4.3 "Procedural Quests"

The research done at the University of Bath (Grey and Bryson, 2009) had as its purpose a system that creates procedural side-quests for role-playing games. Even if it is not based on any established psychological model, it is an important step towards socially capable agents. Their model is based on behaviour trees that use information from a set of priorities (goals) and memory (beliefs/events) to decide on the next quest to be proposed to the player. The emotional state is not maintained but inferred from goals and beliefs.

2.4.4 "Prom Week"

Researchers at the University of California Santa Cruz developed a "social simulation" where agents emulate emotions and social interaction between teenagers during the last week before the prom (McCoy et al., 2011). The main focus of this game is relationships, including making and breaking friendships that may or may not include the player. The scenarios have predefined goals and the player wins by achieving the goal before the week is over. The implementation of this game is very interesting, although quite different from the approach we are interested in. It is based on the following structures:

• social facts database: a database of known facts that contains social events that already happened

• cultural knowledgebase: a shared database of facts about the world, topics and norms. This is used to create dialogs.

• social networks: the relationship state between all characters, including friendliness, romance interest and respect level. The number of networks can change because of the modularity of this system.

• social status rules: the prerequisites needed to make specific plays. For example, to start dating there has to be a reciprocal friendship above a certain threshold.


The personality of the characters is defined using the above tools; the rules are inspired by transactional analysis (Berne, 1996) and were handcoded by the team of researchers. It is an interesting project but very focused on social networking, and the large set of rules makes it hard to test and to perform well in a game environment.

2.4.5 Section Conclusions

Existing emotional implementations for games include a few models, some of which have a psychology background, but they do not allow for enough variation in the behaviour of agents. Some focus on internal emotions, and some on social interaction, but we want a generic model that implements most of the OCC emotions in a simple way that easily scales to multi-agent systems running in real-time. The main issue we have seen in the existing systems is the reduced number of possible emotions. We want to have more possible emotional states to build behaviour on. Another missing feature we have noticed is that none of the found implementations treat the emotional component as a separate component, and none of them try to implement it as a black box. Our approach of having an emotional engine, just like a physics or graphics engine, comes from the need of the programmer to use a premade solution without having to read all the documentation and psychological publications to find out how emotion appraisal works. Some implementations compensated for this by simplifying the design and decreasing the number of emotions. We have chosen to support most of the OCC emotions (sixteen for now, expandable in the future to cover all twenty-four) and at the same time to design the engine as a black box so that the game programmer does not have to understand the inner mechanisms of appraisal. In the next section we will explain how our engine works and how it is implemented.

3 Simple Affective Gaming Engine (SAGE)

In this section we introduce our engine (SAGE) and describe how it is set up internally and the mechanisms of appraisal we have opted for. Our choice of the theory or theories we were going to use was constrained from the beginning by the medium we chose for expression: computer games. Whether it is a serious game or entertainment, a game attempts to model, more or less realistically, a part of the real world, with the purpose that the player can try different situations and see the possible outcomes. Game characters are simplified versions of real-life characters and, as such, they tend to borrow behaviour from real-life characters. Three things that almost all game characters have in common are:

• characters have goals like staying alive, stopping the player or helping the player

• they get their information from the world through events (or beliefs if they are uncertain)

• they interact with the world using actions such as do or say

These three features of game characters are almost always there, sometimes defined implicitly, and the game programmer needs to define them explicitly in order to feed the emotional engine. This setup seems very similar to the cognitive emotion theories described in psychology, and so the first step of our model was to decide on a cognitive theory of emotion to base our work upon. On the other hand, the purpose of this framework is to abstract the emotions and encapsulate them in a reusable black box that can be plugged into any game. Because of that, we have decided to exclude the decision making (actions) from the core library and leave it to the implementation of the game programmer. An additional expression library could be a centralized way of giving game programmers some predefined expression blocks, but this falls outside the scope of the current implementation. That left us with managing goals, which define the desires of each character, and beliefs, which represent the events happening in the game.

As far as the feedback is concerned, we decided to use either componential or dimensional emotions because they are better suited to games, being considered more "computationally friendly" (Hudlicka, 2011). Discrete emotion theories define very few basic emotions and don't define components that can easily be modelled as numbers (Ekman, 1992). Eventually, we opted to support both the OCC theory and the PAD (Pleasure-Arousal-Dominance) dimensional approach as output. This allows the game programmer to ask for direct emotions, such as "what is the fear level for this agent?", and more abstract ones, such as "to what extent does this agent feel dominated?".

We will describe the concepts that make up the engine, together with the basic concepts needed to understand how it works. For readers interested in more technical details, we have included the reference of the public interface of our engine in Appendix B.
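To make the two output styles concrete, the sketch below shows how a game might query an agent's emotional state. It is a deliberately simplified stand-in written for illustration only: the engine's actual class is EmoBrain (see Appendix B), and the method names and values used here are assumptions rather than its real interface.

# Simplified stand-in for the engine's output interface; names and values are
# illustrative assumptions, not the actual EmoBrain interface from Appendix B.
class EmotionalAgentStub:
    def __init__(self):
        # OCC-style intensities and a PAD point, normally produced by appraisal
        self._emotions = {"fear": 0.8, "hope": 0.1}
        self._pad = (-0.4, 0.6, -0.3)  # pleasure, arousal, dominance

    def get_emotion(self, name: str) -> float:
        # Direct OCC-style query, e.g. "what is the fear level for this agent?"
        return self._emotions.get(name, 0.0)

    def get_pad(self):
        # Dimensional query: the agent's position in PAD space
        return self._pad

brain = EmotionalAgentStub()
if brain.get_emotion("fear") > 0.7:
    print("express fear / run away")          # behaviour change driven by emotion
pleasure, arousal, dominance = brain.get_pad()
if dominance < 0:
    print("play a submissive animation")      # expression driven by emotion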

3.1 Game World

When we say "Game World" we refer to the virtual world where the game takes place and where the player (gamer) is immersed while playing. The world, in this case, is not only an environment or a scene but can also be an active participant in the game, changing during the game to enhance the experience of the player. The game world is usually populated with various "Agents". The agents are entities used by the game to interact with the player by helping him, blocking him or just being there to make the world more believable. For example, in the classical game "Pac-Man", the world is a labyrinth and the player is represented by Pac-Man, a character that has to survive while collecting all dots in the labyrinth. The ghosts, running around the labyrinth and trying to block Pac-Man from finishing the level, are the agents of this world. The agents (also called NPCs, Non-Playing Characters) are the entities targeted by our engine. Their behaviour is very important to keep the player immersed and to maintain the illusion of realism. Most of the time, players prefer playing against other humans because the programmed agents do not offer the same experience. Using our engine, each agent will have the possibility to exhibit emotions and vary its behaviour accordingly. An agent can be not only a character in the game but also the environment, the weather in the game or a supervisor that balances the game by regulating resources. All of them will have a purpose, some goals, implicitly or explicitly defined, that they need to achieve or maintain during the game to make the experience better for the player.

3.2 Goals

Goals can be of three types:

• states that an agent wants to achieve (active pursuit goals)

• states that an agent wants achieved (passive goals)

16

Page 25: SAGE: A simple a ective game engine - UvA · SAGE: A simple a ective game engine Author: Sorin Alexandru Popescu Supervisors: Joost Broekens (TU Delft) Maarten van Someren (UvA) August

• states that an agent wants maintained (maintenance goals)

The system does not differentiate between the first two, as the emotion engine itself does not do planning and acting. As a consequence we are left with two types, the ones that need to be achieved and the ones that need to be maintained. In our engine, they are represented using the same structure defined by:

• owner: the agent that has this as a goal

• name: unique identifier, has no actual meaning and it’s only used to refer to this goal

• utility: the value the agent attributes to this goal becoming true (∈ [−1, 1], where a negative value means the agent doesn't want this to happen - typical for maintenance goals)

Owner  | Goal Name    | Utility | Explanation
knight | kill monster |  1      | the knight wants to kill the monster
knight | self die     | −1      | the knight wants to preserve his own life (not die)
knight | princess die | −1      | the knight wants to keep the princess alive
knight | find gold    |  0.5    | the knight wants to find treasures

Table 2: Examples of defined goals

Some examples of goals are shown in Table 2. The value of the utility should be proportional to the level of desire for that specific goal. Utility values of 1 or −1 should be used for goals that decide the outcome of the game; everything in between is less important. For example, in the list above, the find gold goal is not as important as staying alive: it has a lower utility, and hence the failure of this goal will not generate the same level of distress for the agent as having his own life threatened.
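A minimal sketch of this goal structure, mirroring the rows of Table 2 (the field names follow the description above; the engine's actual type may differ):

from dataclasses import dataclass

@dataclass
class Goal:
    # Illustrative sketch of the goal structure described above,
    # not the engine's actual goal type.
    owner: str      # the agent that has this goal
    name: str       # unique identifier, only used to refer to this goal
    utility: float  # in [-1, 1]; negative values mark states the agent wants to avoid

# The example goals from Table 2
knight_goals = [
    Goal("knight", "kill monster",  1.0),
    Goal("knight", "self die",     -1.0),   # maintenance goal: stay alive
    Goal("knight", "princess die", -1.0),   # maintenance goal: keep the princess alive
    Goal("knight", "find gold",     0.5),   # less important than survival
]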

Even if the game does not define such goals for all agents explicitly, it will be quite easy for any game programmer to define some goals for an agent that needs emotions. If an agent needs to emulate emotions, it is an active agent that interacts with the player. Section 4 will describe some ways of defining all the necessary elements for an agent to emulate emotions.

3.3 Beliefs

Beliefs are chunks of information sent by the game whenever important events happen. We call them beliefs and not events because they have an associated likelihood. This gives the game the freedom to implement concepts such as rumors (the event was not witnessed but the agent has heard of it) or credibility (the agent doesn't entirely believe the information because it was delivered by another, unknown agent). Beliefs are defined by:

• name: unique identifier used to define and update a belief

• likelihood: the likelihood of this information being true (∈ [0, 1]), where 0 means the belief is disconfirmed and 1 means it is confirmed

• agent: the agent responsible for this event (can be null if it is an "Act of God" or irrelevant)


• affected goals: a hashtable reference that contains pairs of

– goal name: identifies the goal

– valence: ∈ [−1, 1], where negative values mean this belief is blocking the goal and positive values mean this belief facilitates the goal

Beliefs are basically events from the game, but they are called beliefs because they have a likelihood attached. So it might happen that the same event has 0.5 likelihood for one agent and 0.8 likelihood for another. Table 3 lists some examples of possible beliefs and the way they might affect the goals.

Belief Name                  | Likelihood | Causal Agent | Affects Goal | Valence
princess attacked by monster | 0.5        | monster      | princess die | 0.8
found magic sword            | 1          | knight       | kill monster | 0.5

Table 3: Examples of incoming beliefs
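A matching sketch of the belief structure, mirroring the rows of Table 3 (again illustrative; the field names follow the text, not necessarily the engine's actual type):

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Belief:
    # Illustrative sketch of the belief structure described above.
    name: str                    # unique identifier used to define and update the belief
    likelihood: float            # in [0, 1]; 0 = disconfirmed, 1 = confirmed
    agent: Optional[str] = None  # causal agent; None for an "Act of God" or when irrelevant
    affected_goals: Dict[str, float] = field(default_factory=dict)  # goal name -> valence in [-1, 1]

# The example beliefs from Table 3
incoming = [
    Belief("princess attacked by monster", 0.5, "monster", {"princess die": 0.8}),
    Belief("found magic sword", 1.0, "knight", {"kill monster": 0.5}),
]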

Desirability

The desirability of an event with respect to a goal defines how much the owner of the goal desires this event, and it is calculated using the formula:

desirability(b, g, p) ← valence(b, g) × utility(g)    (1)

where desirability(b, g, p) is the desirability of belief b with regard to goal g from the point of view of agent p, valence(b, g) is the valence of belief b concerning goal g and utility(g) is the utility of goal g. In the example above, the desirability of the "princess attacked by monster" event, as far as the knight is concerned, would be 0.8 × (−1) = −0.8, so for this agent it is quite an undesirable event. If we define the goal of the monster to be "kill princess" with utility 1, and "princess attacked by monster" also affects "kill princess" with a valence of 0.8, then the system would also be able to calculate the desirability for the monster, 0.8 × 1 = 0.8, for which the event is very desirable. This shows how the same belief can have a very different desirability for two different agents because of their opposite goals.
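The knight/monster example above, worked as code (a trivial sketch of formula (1); the numbers come from the tables and text above):

def desirability(valence: float, utility: float) -> float:
    # formula (1): desirability(b, g, p) = valence(b, g) * utility(g)
    return valence * utility

# "princess attacked by monster" has valence 0.8 towards the knight's goal
# "princess die" (utility -1) and towards the monster's goal "kill princess" (utility 1).
print(desirability(0.8, -1.0))  # -0.8: quite undesirable for the knight
print(desirability(0.8,  1.0))  #  0.8: very desirable for the monster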

Goal Likelihood

The likelihood of the goal is maintained internally and updated, if needed, every time a belief that affects that specific goal is received. The initial value is Unknown. The goal likelihood is calculated using the following formula:

likelihood(g) ← (valence(b, g) × likelihood(b) + 1)/2    (2)

which means: the probability that this goal becomes true given the probability of the belief and the extent to which this belief affects the goal, where likelihood(g) is the likelihood of goal g, valence(b, g) is the valence of belief b concerning goal g and likelihood(b) is the likelihood of belief b. In the previous example, the likelihood of "princess die" will be (0.8 × 0.5 + 1)/2 = 0.7. Every time the goal likelihood is updated, the old value is saved into an internal variable so that we also know the difference between likelihoods for later calculations. This difference will be referred to as ∆likelihood.
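A small sketch of this update, including the stored ∆likelihood; the previous likelihood value of 0.5 and the signed-difference reading of ∆likelihood are assumptions made only for this example:

def update_goal_likelihood(valence: float, belief_likelihood: float) -> float:
    # formula (2): likelihood(g) = (valence(b, g) * likelihood(b) + 1) / 2
    return (valence * belief_likelihood + 1) / 2

old = 0.5                               # assumed previously stored likelihood of "princess die"
new = update_goal_likelihood(0.8, 0.5)  # (0.8 * 0.5 + 1) / 2 = 0.7
delta = new - old                       # kept internally as delta-likelihood
print(round(new, 2), round(delta, 2))   # 0.7 0.2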


3.4 Emotions

The emotions are based on the OCC theory of appraisal described by Ortony, Clore and Collins. For simplicity of implementation we decided to split the emotion representation into two structures: internal emotions and social emotions. This is mainly because there is only one internal emotion structure per agent, while each social emotion connects to a different agent, so each brain will contain a collection of them, one per modeled fellow agent. Below is a short description of what each of them contains and how they work.

3.4.1 Internal Emotions

This contains all emotions that don’t relate to other agents. These are:

• hope: a desirable uncertain goal has a high chance of success or an undesirable uncertain goal has a low chance of success

• fear: an undesirable uncertain goal has a high chance of success or a desirable uncertain goal has a low chance of success

• joy: a desirable goal succeeds or an undesirable goal fails

• distress: an undesirable goal succeeds or a desirable goal fails

• satisfaction: a desirable goal was expected to succeed and it actually does succeed

• fearsConfirmed: an undesirable goal was expected to succeed and it actually does succeed

• disappointment: a desirable goal was expected to succeed and it fails

• relief : an undesirable goal was expected to succeed and it fails

Given the previous definitions of goals and beliefs, we consider "uncertain" any goal or belief that has a likelihood between 0 and 1 exclusive. For example, if the likelihood of an undesirable event is 0.99 then the fear will have a high level, but the belief is not certain yet. Only when it has a likelihood of 1 are distress and fearsConfirmed generated. Additionally, for emotions like disappointment and relief we needed to define when a situation is reversed or not, and we set that threshold at ∆likelihood being above or below 0.5. Since ∆likelihood is the difference of two successively calculated likelihoods, we considered a difference of 0.5 enough to mark the situation as being reversed. Table 4 shows the formulas used to calculate if each emotion should be elicited. For all these emotions the intensity will be computed as:

intensity(e)← desirability(b, g, self) ∗∆likelihood(g) (3)

Note that desirability(b, g, p) is the desirability of belief b concerning goal g from the point of view of agent p, likelihood(g) is the likelihood of goal g and ∆likelihood(g) is the difference of likelihoods for goal g after the latest modification.
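To illustrate how Table 4 and formula (3) fit together, the following sketch shows the checks for the first four internal emotions; the confirmation-based ones follow the same pattern with the 0.5 reversal threshold. The method and names are illustrative, not the engine's actual code.

using System;
using System.Collections.Generic;

// Sketch of the appraisal checks in Table 4 combined with formula (3).
// Only the first four emotions are shown; all names are assumptions.
static class InternalAppraisalSketch
{
    public static Dictionary<string, float> Appraise(
        float desirability, float likelihood, float deltaLikelihood)
    {
        var elicited = new Dictionary<string, float>();
        float intensity = Math.Abs(desirability * deltaLikelihood);

        // prospect emotions: the goal is still uncertain
        if ((desirability > 0 && likelihood < 1 && deltaLikelihood > 0) ||
            (desirability < 0 && likelihood > 0 && deltaLikelihood < 0))
            elicited["Hope"] = intensity;
        if ((desirability < 0 && likelihood < 1 && deltaLikelihood > 0) ||
            (desirability > 0 && likelihood > 0 && deltaLikelihood < 0))
            elicited["Fear"] = intensity;

        // outcome emotions: the goal has become certain (succeeded or failed)
        if ((desirability > 0 && likelihood == 1) || (desirability < 0 && likelihood == 0))
            elicited["Joy"] = intensity;
        if ((desirability < 0 && likelihood == 1) || (desirability > 0 && likelihood == 0))
            elicited["Distress"] = intensity;

        return elicited;
    }
}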

3.4.2 Social Emotions

This contains all emotions that have another agent as a target. They relate to actions of other agents that influence one's goals, or to goals of other agents being influenced. These are:


Emotion        | Condition
hope           | (desirability(b, g, self) > 0, likelihood(g) < 1, ∆likelihood(g) > 0) ∨ (desirability(b, g, self) < 0, likelihood(g) > 0, ∆likelihood(g) < 0)
fear           | (desirability(b, g, self) < 0, likelihood(g) < 1, ∆likelihood(g) > 0) ∨ (desirability(b, g, self) > 0, likelihood(g) > 0, ∆likelihood(g) < 0)
joy            | (desirability(b, g, self) > 0, likelihood(g) = 1) ∨ (desirability(b, g, self) < 0, likelihood(g) = 0)
distress       | (desirability(b, g, self) < 0, likelihood(g) = 1) ∨ (desirability(b, g, self) > 0, likelihood(g) = 0)
satisfaction   | desirability(b, g, self) > 0, likelihood(g) = 1, ∆likelihood(g) < 0.5
fearsConfirmed | desirability(b, g, self) < 0, likelihood(g) = 1, ∆likelihood(g) < 0.5
disappointment | desirability(b, g, self) > 0, likelihood(g) = 0, ∆likelihood(g) > 0.5
relief         | desirability(b, g, self) < 0, likelihood(g) = 0, ∆likelihood(g) > 0.5

Table 4: Internal emotions appraisal mechanism

• anger: an undesirable event is caused by another agent

• guilt: this agent causes an undesirable event for a liked agent

• gratitude: a desirable event is caused by another agent

• gratification: this agent causes a desirable event for a liked agent

• happyFor: a desirable event happens to a liked agent

• pity: an undesirable event happens to a liked agent

• gloating: an undesirable event happens to a disliked agent

• resentment: a desirable event happens to a disliked agent

• like: this is calculated based on all events caused by that specific agent (it is neutral, i.e. 0, if unknown)

In this list the terms desirable and undesirable refer to the desirability calculated from the goal owner's point of view. Table 5 shows the formulas used to calculate if each emotion should be elicited. For the emotions that don't take into account the like(p, q) function (anger and gratitude), the intensity will be computed as before:

intensity(e)← desirability(b, g, self) ∗∆likelihood(g) (4)

for the others, the formula will be multiplied by the liking factor:

intensity(e)← desirability(b, g, q ≠ self) ∗∆likelihood(g) ∗ like(self, q) (5)

Note that desirability(b, g, p) is the desirability of belief b concerning goal g from the point of view of agent p, agent(b) is the agent that caused belief b and like(p, q) is how much agent p likes agent q. The like(agent1, agent2) relationship is defined by all beliefs that include


Emotion       | Condition
anger         | desirability(b, g, self) < 0, agent(b) ≠ self
guilt         | desirability(b, g, q ≠ self) < 0, agent(b) = self, like(self, q) > 0
gratitude     | desirability(b, g, self) > 0, agent(b) ≠ self
gratification | desirability(b, g, q ≠ self) > 0, agent(b) = self, like(self, q) > 0
happyFor      | desirability(b, g, q ≠ self) > 0, like(self, q) > 0
pity          | desirability(b, g, q ≠ self) < 0, like(self, q) > 0
gloating      | desirability(b, g, q ≠ self) < 0, like(self, q) < 0
resentment    | desirability(b, g, q ≠ self) > 0, like(self, q) < 0

Table 5: Social emotions appraisal mechanism

both these agents and is influenced by the desirability of such beliefs as seen through the eyes of each agent. For example, an event that facilitates a desirable goal increases the value of like towards the agent that caused it. Likewise, if the goal is blocked then the value of like is decreased.
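A minimal sketch of how this accumulation could be implemented is shown below; the step size (the raw desirability) and the clamping to [−1, 1] are assumptions, not the engine's exact rule.

using System;

// Sketch of accumulating like(self, q) from beliefs caused by agent q.
public static class LikeSketch
{
    public static float UpdateLike(float currentLike, float desirabilityForSelf)
    {
        // desirable events caused by q raise like(self, q); undesirable ones lower it
        float updated = currentLike + desirabilityForSelf;
        return Math.Max(-1f, Math.Min(1f, updated));   // keep the value in [-1, 1]
    }
}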

3.4.3 The emotional brain

The emotional brain of one agent will contain the following information:

• a list of goals the agent knows about, owned by anyone

• a default internal mood initialized when the brain is created

• a default social mood initialized when the brain is created

• an internal mood

• the current emotional state

• the relationship to each known agent (social emotion structure)

• (optional) an internal decay function and a social decay function

The default internal mood, the default social mood and the decay functions are basically the tools the game designer can use to create individual differences between agents. They represent an agent's personality because they influence the way emotions develop and evolve over time. The internal mood is the temporary mood: it will be initialized to the value of the default internal mood, will include the fluctuations of the emotional state and will eventually slowly decay back towards the default internal mood. There is no similar concept for the social emotions: the default social mood serves as a default social emotion towards not yet known agents. The emotional state is a short term state that encompasses all emotions when they are elicited and quickly decays towards the internal mood. A similar thing happens to the social emotions: each of them decays towards the default social mood. Figure 9 is a simplified representation of the emotional brain of an agent and shows how the different mentioned components and structures influence each other. In this figure, "world" represents the game world that sends information to the emotional brain and, after computation, receives feedback from it. The decay will be done using delegate decay


Figure 9: Diagram representing the flow of information between different entities and structures in anemotional brain

functions provided by the game developer. If the game doesn't provide any decay functions, the default ones present in the library will be used. The combination of emotions, or of moods and emotions, will be done using Reilly's logarithmic function.
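As a sketch of what such a decay delegate and the logarithmic combination could look like, consider the following; the delegate signature, the half-life value and the exact combination constants are assumptions (the combination simply mirrors the form of equation (6) below).

using System;

// Sketch of a decay hook and of a logarithmic intensity combination; all names are illustrative.
public delegate float DecayFunction(float intensity, float deltaTimeSeconds);

public static class EmotionMathSketch
{
    // A simple exponential decay a game could plug in as its decay delegate.
    public static float ExponentialDecay(float intensity, float deltaTimeSeconds)
    {
        const float halfLife = 5f;                       // seconds, illustrative value
        return intensity * (float)Math.Pow(0.5, deltaTimeSeconds / halfLife);
    }

    // Logarithmic combination of two intensities of the same emotion type,
    // so that 0.3 combined with 0.3 gives 0.4 rather than 0.6.
    public static float Combine(float a, float b)
    {
        return 0.1f * (float)Math.Log(Math.Pow(2, 10 * a) + Math.Pow(2, 10 * b), 2);
    }
}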

3.4.4 Pleasure-Arousal-Dominance

Sometimes in games, in addition to specific emotions, the system needs generic variables to be used for the physical behaviour of the agent. In these cases, using the emotion components could lead to much more complicated checks and computational overhead. For example, if the system wants to know the level of "calmness" of a certain agent, it would have to check for all components that affect the calmness of the agent and combine them into one. We decided to offer such an output alternative to the OCC emotions mentioned above: PAD (Pleasure-Arousal-Dominance) dimensions. Albert Mehrabian and James A. Russell developed a dimensional theory that creates a three-dimensional space where each specific emotion is represented by a point (Mehrabian, 1980; 1995; 1996). The three dimensions are:

• Pleasure: measures how pleasant the emotion is

• Arousal: measures how intense the emotion is

• Dominance: measures how controllable the emotion is

For example, Happiness is defined as being an emotion with high pleasure, arousal and dominance while Fear is an emotion with high arousal but low pleasure and dominance. Mehrabian also defines eight temperaments corresponding to the corners of the three-dimensional cube, as seen in Figure 10 and explained in Table 6. The authors call these the "octants of temperament space".

This model offers the possibility for the game to check for the three basic dimensions, and we also implemented the four temperamental axes, see Table 7, as defined in (Mehrabian, 1996).


Figure 10: The Pleasure-Arousal-Dominance dimensions with representations of temperaments

Temperament | Pleasure | Arousal | Dominance
Exuberant   | +        | +       | +
Bored       | −        | −       | −
Dependent   | +        | +       | −
Disdainful  | −        | −       | +
Relaxed     | +        | −       | +
Anxious     | −        | +       | −
Docile      | +        | −       | −
Hostile     | −        | +       | +

Table 6: The octants of temperament space in PAD (Mehrabian, 1996)

The implementation in our case is a property on the emotional brain that returns the PAD emotional state on demand.

Axis       | Extremes                                       | Formula used
Exuberance | Exuberant (+P +A +D) vs. Bored (−P −A −D)      | 0.577 ∗ (P + A + D)
Dependency | Dependent (+P +A −D) vs. Disdainful (−P −A +D) | 0.577 ∗ (P + A − D)
Relaxation | Relaxed (+P −A +D) vs. Anxious (−P +A −D)      | 0.577 ∗ (P − A + D)
Docility   | Docile (+P −A −D) vs. Hostile (−P +A +D)       | 0.577 ∗ (P − A − D)

Table 7: The axes of temperament space in PAD (Mehrabian, 1996)


This property is recomputed only when needed to reduce overhead, for example when adding new beliefs or after the decay is triggered. To calculate the three dimensions we use the scores published by Mehrabian (Mehrabian, 1980), of which a few examples are shown in Table 8. The score for each of the OCC emotions is multiplied by the intensity of that emotion and then the whole set is combined using the formula:

PAD ← 0.1 ∗ log2(∑e 2^(10 ∗ PAD(e) ∗ intensity(e))) (6)

where PAD(e) is the PAD score for each specific emotion, intensity(e) is the intensity of each emotion and PAD is the total emotional state according to the PAD model.

Emotion    | Pleasure | Arousal | Dominance
suspicious | −0.25    | 0.42    | 0.11
alert      | 0.49     | 0.57    | 0.45
fearful    | −0.64    | 0.60    | −0.43
distressed | −0.61    | 0.28    | −0.36
protected  | 0.60     | −0.22   | −0.42
happy      | 0.81     | 0.51    | 0.46
angry      | −0.51    | 0.59    | 0.25
masterful  | 0.58     | 0.44    | 0.69
grateful   | 0.64     | 0.16    | −0.21

Table 8: Scores for some emotions as defined in (Mehrabian, 1996)
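Read per component, the combination in equation (6) could be implemented as in the following sketch, which also includes the temperament axes from Table 7 (0.577 ≈ 1/√3 normalises the sums). The struct and method names are illustrative and not part of the engine's public API.

using System;
using System.Collections.Generic;

// Sketch of equation (6) applied independently to each PAD component; names are assumptions.
public struct PadSketch { public float P, A, D; }

public static class PadAggregationSketch
{
    // emotions: pairs of (PAD score from Table 8, current intensity)
    public static PadSketch Aggregate(IEnumerable<(PadSketch score, float intensity)> emotions)
    {
        double sumP = 0, sumA = 0, sumD = 0;
        foreach (var (score, intensity) in emotions)
        {
            sumP += Math.Pow(2, 10 * score.P * intensity);
            sumA += Math.Pow(2, 10 * score.A * intensity);
            sumD += Math.Pow(2, 10 * score.D * intensity);
        }
        return new PadSketch
        {
            P = 0.1f * (float)Math.Log(sumP, 2),
            A = 0.1f * (float)Math.Log(sumA, 2),
            D = 0.1f * (float)Math.Log(sumD, 2),
        };
    }

    // Temperament axes from Table 7.
    public static float Exuberance(PadSketch pad) => 0.577f * (pad.P + pad.A + pad.D);
    public static float Dependency(PadSketch pad) => 0.577f * (pad.P + pad.A - pad.D);
    public static float Relaxation(PadSketch pad) => 0.577f * (pad.P - pad.A + pad.D);
    public static float Docility(PadSketch pad)   => 0.577f * (pad.P - pad.A - pad.D);
}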

3.5 Output

Since games are visual simulations, they rely heavily on what the user (player) sees and understands to influence the experience. Many game AI professionals agree that the player is not interested in the work necessary to achieve a certain visual result, but that if the player cannot see the results the implementation is not worth the effort (Funge, 2004; Schwab, 2004; Mark, 2009). This makes it important for the current project to be not only a library but also to provide significant feedback that can be used in commercial games. The decision was to support both OCC emotions (Ortony et al., 1988) and a Pleasure-Arousal-Dominance structure (Mehrabian, 1995). This allows the programmer to ask questions such as:

• how happy is the agent?

• how afraid is the agent?

• how grateful is the agent to another agent?

• how angry is the agent at another agent?

• what is the level of arousal of the agent?

Of course, programmers would probably not use all these options in the same game, but having them offers more possibilities to develop variation in the behaviour. The OCC components (joy, fear, anger, etc.) would typically be used to alter


sequential behaviours, such as changing state in a finite state machine or a behaviour tree, or picking a different plan. On the other hand, the PAD dimensions, being an aggregation of all OCC components, offer the possibility to be used as variables in calculations, for example modifying the aim capabilities or speed based on arousal, or changing the facial expression based on pleasure. As far as the technical implementation goes, we decided to keep all internal and social emotions as OCC and only export to a PAD structure on request of the game.
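The following sketch illustrates the intended division of labour; EmoBrain is the engine class used in the example listings of section 4.1, but the accessors GetIntensity and GetArousal are assumed names, since the exact query API is not reproduced here.

// Sketch only: GetIntensity and GetArousal are assumed accessor names.
public class NpcControllerSketch
{
    private readonly EmoBrain brain = new EmoBrain();
    private const float BaseSpeed = 3.0f;          // illustrative value
    public string State = "Idle";
    public float MoveSpeed = BaseSpeed;

    public void Tick()
    {
        // OCC components drive discrete decisions such as state changes...
        if (brain.GetIntensity("Fear") > 0.6f)     // assumed accessor and threshold
            State = "Flee";

        // ...while PAD dimensions drive continuous parameters such as movement speed.
        float arousal = brain.GetArousal();        // assumed accessor
        MoveSpeed = BaseSpeed * (1f + 0.5f * arousal);
    }
}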

4 Evaluation of the engine

As we explained in section 1, we wanted our engine to have three main features: be Theoretically Solid, Generic and Performant. In order to show we have achieved this we will run some experiments for each of these features and discuss the results. The following subsections describe our experimental setup and the results we obtained.

4.1 A solid foundation

We have based our work mainly on the OCC theory of emotional appraisal and on the PAD emotional space (Mehrabian, 1980). These two theories offered us the mechanics needed to elicit emotions that make sense and have been successfully used in other computational models before. There is considerably less risk for the game developer in using theories that are no longer experimental, and we think this is an additional advantage of our approach. The authors conducted several studies on human subjects to create those theories, and our engine should not only implement their premises: we also expect similar situations to produce results similar to the ones encountered by the authors of the theories during their research. To evaluate our model, we are going to discuss cases presented by Ortony, Clore and Collins in their book and show that our engine displays the expected results. The following examples were taken from the OCC book (Ortony et al., 1988) and a full list of our experiments can be found in Appendix A. We included the description from the original book and the page where it was found, as well as the code needed to recreate the example using our engine and the output of the example code when set to full debug mode.

Joy (example 5.1, page 86)

Description: The man was pleased when he realized he was to get a small inheritance from an unknown distant relative. Here is the code needed to run this experiment:

Listing 1: Code for example ”Joy”

1 EmoBrain brain = new EmoBrain();
2 brain.Goal("become rich", 0.8f);
3 brain.Belief("inheritance", 1, "distant relative");
4 brain.AffectsGoal("inheritance", "become rich", 0.5f);
5 brain.Update();

This first example is quite simple to emulate. We will need to start by adding a goal, mainly because any significant belief has to affect a goal in order to influence emotions (line 2). Then add a belief that is sure (likelihood = 1) and is desirable (line 3). In the end we will need to define the manner in which the belief affects the goal (line 4).


Listing 2: Output for example ”Joy”

adding goal: become rich, utility = 0.8, owner = self
adding belief: inheritance, likelihood = 1, agent = distant relative
Decaying...
Updating...
Recalculating...
recalculating belief: inheritance, likelihood = 1
affected goal: become rich, valence = 0.5
desirability: 0.4 <- 0.5 x 0.8
goal likelihood: 0.75 <- (0.5 x 1 + 1) / 2
delta goal likelihood: 0.75 <- 0.75
emotion intensity: 0.3 <- abs(0.4 x 0.75)
adding JOY: 0.3
adding GRATITUDE: 0.3 towards distant relative

Current Internal State: Joy: 0.30
Relationships:
    distant relative -> Gratitude: 0.30

Note that we have also mentioned the "distant relative" as the causal agent for this belief, which generated an emotion of type Gratitude towards the agent. If we omit the agent, this last emotion will not be added. This is usually done for beliefs caused by the world or if the agent doesn't matter or is unknown.

Satisfaction (example 6.3, page 118)

Description: When she realized that she was indeed being asked to go to the dance by the boy of her dreams, the girl was gratified.

Listing 3: Code for example ”Satisfaction”

1 EmoBrain brain = new EmoBrain();
2 brain.Goal("go to dance with boy", 0.9f);
3 brain.Belief("get invited by boy", 0.4f, "boy");
4 brain.AffectsGoal("get invited by boy", "go to dance with boy", 1f);
5 brain.Update();
6 brain.Belief("get invited by boy", 1f, "boy");
7 brain.Update();

This is an example that needs two steps: first the girl has Hope that she will be invited (line 3) and then the confirmation comes (line 6), bringing Satisfaction to her. You will also notice the decay between the two steps reducing the intensity of Hope. Ideally, if Hope was caused by a certain belief, it should be removed entirely when confirmation or disconfirmation comes, but at the moment the system does not keep a connection between elicited emotions and the original beliefs, mainly for memory and computational reasons.

Listing 4: Output for example ”Satisfaction”

adding goal: go to dance with boy, utility = 0.9, owner = self
adding belief: get invited by boy, likelihood = 0.4, agent = boy
Decaying...
Updating...
Recalculating...
recalculating belief: get invited by boy, likelihood = 0.4
affected goal: go to dance with boy, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 0.7 <- (1 x 0.4 + 1) / 2
delta goal likelihood: 0.7 <- 0.7
emotion intensity: 0.63 <- abs(0.9 x 0.7)
adding HOPE: 0.63
adding GRATITUDE: 0.63 towards boy
adding belief: get invited by boy, likelihood = 1, agent = boy
Decaying...
Updating...
Recalculating...
recalculating belief: get invited by boy, likelihood = 1
affected goal: go to dance with boy, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 0.3 <- 1 - 0.7
emotion intensity: 0.27 <- abs(0.9 x 0.3)
adding JOY: 0.27
adding SATISFACTION: 0.27
adding GRATITUDE: 0.27 towards boy

Current Internal State: Joy: 0.27; Hope: 0.32; Satisfaction: 0.27
Relationships:
    boy -> Gratitude: 0.39

Relief (example 6.5, page 121)

Description: The employee was relieved to learn that he was not going to be fired.

Listing 5: Code for example ”Relief”

1 EmoBrain brain = new EmoBrain();
2 brain.Goal("lose job", -0.9f);
3 brain.Belief("get fired", 0.6f);
4 brain.AffectsGoal("get fired", "lose job", 1f);
5 brain.Update();
6 brain.Belief("get fired", 0f);
7 brain.Update();

Relief is a two-step emotion as well, but a reversed one. If there is a feared event, such as being fired (line 3), the disconfirmation will automatically bring Relief, which will be proportional to the Fear (line 6).

Listing 6: Output for example ”Relief”

adding goal: lose job, utility = -0.9, owner = self
adding belief: get fired, likelihood = 0.6, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: get fired, likelihood = 0.6
affected goal: lose job, valence = 1
desirability: -0.9 <- 1 x -0.9
goal likelihood: 0.8 <- (1 x 0.6 + 1) / 2
delta goal likelihood: 0.8 <- 0.8
emotion intensity: 0.72 <- abs(-0.9 x 0.8)
adding FEAR: 0.72
adding belief: get fired, likelihood = 0, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: get fired, likelihood = 0
affected goal: lose job, valence = 1
desirability: -0.9 <- 1 x -0.9
goal likelihood: 0.5 <- (1 x 0 + 1) / 2
delta goal likelihood: -0.3 <- 0.5 - 0.8
emotion intensity: 0.27 <- abs(-0.9 x -0.3)
adding JOY: 0.27
adding RELIEF: 0.27

Current Internal State: Joy: 0.27; Fear: 0.36; Relief: 0.27

Gratitude (example 7.5, page 148)

Description: The woman was grateful to the stranger for saving the life of her child.

Listing 7: Code for example ”Gratitude”

1 EmoBrain brain = new EmoBrain();
2 brain.Goal("lose child", -1f);
3 brain.Belief("child in danger", 0.6f);
4 brain.AffectsGoal("child in danger", "lose child", 1f);
5 brain.Update();
6 brain.Belief("save child", 1f, "stranger");
7 brain.AffectsGoal("save child", "lose child", -1f);
8 brain.Update();

We have added a maintenance goal in negative form (lose child, line 2) which is threatened by the child in danger belief (line 3). In a later step, a new event is added which repairs the situation, eliminating the threat, and has an agent as cause. This creates Gratitude towards the causal agent (the stranger).

Listing 8: Output for example ”Gratitude”

adding goal: lose child, utility = -1, owner = self
adding belief: child in danger, likelihood = 0.6, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: child in danger, likelihood = 0.6
affected goal: lose child, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 0.8 <- (1 x 0.6 + 1) / 2
delta goal likelihood: 0.8 <- 0.8
emotion intensity: 0.8 <- abs(-1 x 0.8)
adding FEAR: 0.8
adding belief: save child, likelihood = 1, agent = stranger
Decaying...
Updating...
Recalculating...
recalculating belief: save child, likelihood = 1
affected goal: lose child, valence = -1
desirability: 1 <- -1 x -1
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: -0.8 <- 0 - 0.8
emotion intensity: 0.8 <- abs(1 x -0.8)
adding JOY: 0.8
adding SATISFACTION: 0.8
adding GRATITUDE: 0.8 towards stranger

Current Internal State: Joy: 0.80; Fear: 0.40; Satisfaction: 0.80
Relationships:
    stranger -> Gratitude: 0.80

Pride (Gratification) (example 7.1, page 137)

Description: The woman was proud of saving the life of a drowning child.

Listing 9: Code for example ”Pride”

 1 EmoBrain brain = new EmoBrain();
 2 brain.Goal("like kids", 0.7f);
 3 brain.Belief("is friendly", 1f, "kid");
 4 brain.AffectsGoal("is friendly", "like kids", 1f);
 5 brain.Update();
 6 brain.Goal("die", -1f, "kid");
 7 brain.Belief("kid is drowning", 0.7f);
 8 brain.AffectsGoal("kid is drowning", "die", 1f);
 9 brain.Update();
10 brain.Belief("save kid", 1f, "self");
11 brain.AffectsGoal("save kid", "die", -1);
12 brain.Update();

Here we had to create an initial relationship with the kid. Even if they don't know each other, the first step has the purpose of making the woman care about him. Thus, after the first step, Joy is generated because the woman is pleased by seeing the child being friendly, and Gratitude towards the kid is a byproduct of this situation: the woman feels grateful for the experience. The second step brings the belief that the kid is drowning (line 7), which of course threatens to block the survival goal of the kid (line 6). This step will trigger Pity, as something bad is happening to an agent the woman likes. As a consequence, when being able to help the kid (line 10), two new emotions appear: HappyFor, as a liked person has a happy event, and Gratification, as the woman feels proud of helping a friendly agent in need. As an additional remark, if we were to model a second brain for the kid at the same time, after the second step it would generate Fear and at the end Relief, Joy and Gratitude towards the woman.

Listing 10: Output for example ”Pride”

adding goal: like kids, utility = 0.7, owner = self
adding belief: is friendly, likelihood = 1, agent = kid
Decaying...
Updating...
Recalculating...
recalculating belief: is friendly, likelihood = 1
affected goal: like kids, valence = 1
desirability: 0.7 <- 1 x 0.7
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 0.7 <- abs(0.7 x 1)
adding JOY: 0.7
adding GRATITUDE: 0.7 towards kid
adding goal: die, utility = -1, owner = kid
adding belief: kid is drowning, likelihood = 0.7, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: kid is drowning, likelihood = 0.7
affected goal: die, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 0.85 <- (1 x 0.7 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.85 <- abs(-1 x 0.85)
adding PITY: 0.85 towards kid
adding belief: save kid, likelihood = 1, agent = self
Decaying...
Updating...
Recalculating...
recalculating belief: save kid, likelihood = 1
affected goal: die, valence = -1
desirability: 1 <- -1 x -1
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: -0.85 <- 0 - 0.85
emotion intensity: 0.85 <- abs(1 x -0.85)
adding GRATIFICATION: 0.85 towards self
adding HAPPY-FOR: 0.85 towards kid

Current Internal State: Joy: 0.18
Relationships:
    kid -> Gratitude: 0.18; Gratification: 0.85; HappyFor: 0.85; Pity: 0.43

Guilt (Self-Reproach) (example 7.2, page 137)

Description: The spy was ashamed of having betrayed his country.

Listing 11: Code for example ”Guilt”

1 EmoBrain brain = new EmoBrain();
2 brain.Goal("feel safe", 1f);
3 brain.Belief("is safe", 1f, "country");
4 brain.AffectsGoal("is safe", "feel safe", 0.7f);
5 brain.Update();
6 brain.Goal("be betrayed", -1f, "country");
7 brain.Belief("betray country", 1f, "self");
8 brain.AffectsGoal("betray country", "be betrayed", 1f);
9 brain.Update();

We introduced an abstract agent in this case, the country, because guilt has to exist towards someone or something. At the same time, we created a prior setup to have the spy like his country and feel some Gratitude towards it. All of this causes the betrayal, which puts the country at risk, to generate not only Guilt but also Pity as a byproduct, because something bad happened to a liked agent.

Listing 12: Output for example ”Guilt”

adding goal: feel safe, utility = 1, owner = self
adding belief: is safe, likelihood = 1, agent = country
Decaying...
Updating...
Recalculating...
recalculating belief: is safe, likelihood = 1
affected goal: feel safe, valence = 0.7
desirability: 0.7 <- 0.7 x 1
goal likelihood: 0.85 <- (0.7 x 1 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.595 <- abs(0.7 x 0.85)
adding JOY: 0.595
adding GRATITUDE: 0.595 towards country
adding goal: be betrayed, utility = -1, owner = country
adding belief: betray country, likelihood = 1, agent = self
Decaying...
Updating...
Recalculating...
recalculating belief: betray country, likelihood = 1
affected goal: be betrayed, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 1 <- abs(-1 x 1)
adding GUILT: 1 towards country
adding PITY: 1 towards country

Current Internal State: Joy: 0.30
Relationships:
    country -> Guilt: 1.00; Gratitude: 0.30; Pity: 1.00

Resentment (example 5.5, page 99)

Description: The executive resented the large pay raise awarded to a colleague whom he considered incompetent.

Listing 13: Code for example ”Resentment”

1 EmoBrain brain = new EmoBrain();
2 brain.Goal("work with incompetents", -0.8f);
3 brain.Belief("incompetent actions", 1f, "colleague");
4 brain.AffectsGoal("incompetent actions", "work with incompetents", 0.7f);
5 brain.Update();
6 brain.Goal("get more money", 0.8f, "colleague");
7 brain.Belief("pay rise", 1f);
8 brain.AffectsGoal("pay rise", "get more money", 1f);
9 brain.Update();

This is another, more complex example, where we had to create the prior setup "the executive considered the colleague incompetent"; giving the pay raise to the colleague then caused Resentment.

Listing 14: Output for example ”Resentment”

adding goal: work with incompetents, utility = -0.8, owner = self
adding belief: incompetent actions, likelihood = 1, agent = colleague
Decaying...
Updating...
Recalculating...
recalculating belief: incompetent actions, likelihood = 1
affected goal: work with incompetents, valence = 0.7
desirability: -0.56 <- 0.7 x -0.8
goal likelihood: 0.85 <- (0.7 x 1 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.476 <- abs(-0.56 x 0.85)
adding DISTRESS: 0.476
adding ANGER: 0.476 towards colleague
adding goal: get more money, utility = 0.8, owner = colleague
adding belief: pay rise, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: pay rise, likelihood = 1
affected goal: get more money, valence = 1
desirability: 0.8 <- 1 x 0.8
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 0.8 <- abs(0.8 x 1)
adding RESENTMENT: 0.8 towards colleague

Current Internal State: Distress: 0.24
Relationships:
    colleague -> Anger: 0.24; Resentment: 0.80

Gloating (example 5.6, page 100)

Description: Political opponents of Richard Nixon gloated over his ignominious departure from office.

Listing 15: Code for example ”Gloating”

 1 EmoBrain brain = new EmoBrain();
 2 brain.Goal("have a democrat president", 0.8f);
 3 brain.Belief("Nixon won elections", 1f, "Nixon");
 4 brain.AffectsGoal("Nixon won elections", "have a democrat president", -1f);
 5 brain.Update();
 6 brain.Goal("stay in office", 0.8f, "Nixon");
 7 brain.Belief("watergate scandal", 1f);
 8 brain.AffectsGoal("watergate scandal", "stay in office", -1f);
 9 brain.AffectsGoal("watergate scandal", "have a democrat president", 1f);
10 brain.Update();

In this case we model the brain of an opponent of Nixon by setting the initial goal have a democrat president (line 2). Of course, Nixon winning the elections creates Distress and Anger towards Nixon, as the goal of the opponent is blocked. In a second step, we add knowledge about the "Watergate" scandal, which forced Nixon to resign, hence Gloating towards Nixon is generated and, because we also connected "Watergate" to a high chance of getting a democrat president, we also have Joy and Satisfaction.

Listing 16: Output for example ”Gloating”

adding goal: have a democrat president, utility = 0.8, owner = self
adding belief: Nixon won elections, likelihood = 1, agent = Nixon
Decaying...
Updating...
Recalculating...
recalculating belief: Nixon won elections, likelihood = 1
affected goal: have a democrat president, valence = -1
desirability: -0.8 <- -1 x 0.8
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: 0 <- 0
special case, the goal is directly blocked, delta becomes the desirability: -0.8
emotion intensity: 0.64 <- abs(-0.8 x -0.8)
adding DISTRESS: 0.64
adding ANGER: 0.64 towards Nixon
adding goal: stay in office, utility = 0.8, owner = Nixon
adding belief: watergate scandal, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: watergate scandal, likelihood = 1
affected goal: stay in office, valence = -1
desirability: -0.8 <- -1 x 0.8
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: 0 <- 0
special case, the goal is directly blocked, delta becomes the desirability: -0.8
emotion intensity: 0.64 <- abs(-0.8 x -0.8)
adding GLOATING: 0.64 towards Nixon
affected goal: have a democrat president, valence = 1
desirability: 0.8 <- 1 x 0.8
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1 - 0
emotion intensity: 0.8 <- abs(0.8 x 1)
adding JOY: 0.8
adding SATISFACTION: 0.8

Current Internal State: Joy: 0.80; Distress: 0.32; Satisfaction: 0.80
Relationships:
    Nixon -> Anger: 0.32; Gloating: 0.64

Conclusion

These examples are just a few of the situations we are able to create using just a few lines of code, and we have demonstrated that our engine generates the expected psychological consequences. The examples used in this section are simple examples from real-life situations, as illustrated in "The Cognitive Structure of Emotions" (Ortony et al., 1988). Our results show that these situations are easy to set up using our engine. Sometimes they require extra information, such as creating a previous relationship between two characters, but these extra steps can be considered part of the scenario, and it is up to the game creator to set the story up.

4.2 Case Studies

The second feature we wanted to be reflected in our implementation is the ability to use this engine with any type of game and any architecture with minimal custom implementation. To illustrate this feature the experiment has two parts: first we discuss some classical games in terms of emotional behaviour and then we describe our own implementations of games that use the emotional engine to generate behaviour changes.

4.2.1 Classical Games

What if our engine had existed since the beginning of video game history? In this section we have chosen some of the best-known games, which have shaped the evolution of gaming, and discuss how they would have benefited from this implementation.


Pac-Man (1980)

Pac-Man enjoyed huge success when it was released and it is estimated to be the highest-grossing video game ever if all the arcade coins are included (Morris, 2005; Wikipedia, a). The game generated an endless series of sequels, spin-offs and clones, and it is a representative game for action arcade game mechanics. Pac-Man is the agent handled by the player, who has to collect all dots in a labyrinth but stay away from the enemies (ghosts) who try to block him from achieving the goal. There are also some power dots that give Pac-Man the ability to "eat" the enemies and send them back to their base location. Figure 11 shows the main screen of this game at the very start, including Pac-Man (the yellow disc) and the four enemies in central position. The purpose of an enemy is to try to block the player in normal

Figure 11: The main screen of Pac-Man

state and run away from him when Pac-Man eats a power dot.

Listing 17: Code for example ”Pac-Man”

 1 EmoBrain ghost = new EmoBrain();
 2
 3 // initial goals
 4 ghost.Goal("get eaten", -1f);
 5 ghost.Goal("catch pacman", 0.6f);
 6
 7 // PacMan is getting close
 8 ghost.Belief("pacman getting close", 0.5f);
 9 ghost.AffectsGoal("pacman getting close", "catch pacman", 1f);
10 ghost.Update();
11
12 // PacMan is getting close after eating a power pellet
13 ghost.Belief("pacman getting close with power", 0.7f);
14 ghost.AffectsGoal("pacman getting close with power", "get eaten", 1f);
15 ghost.Update();

We set the initial goals for the enemies on lines 4 and 5, mainly not to get eaten by Pac-Man and to block him at the same time. Later, during the game, we can update the belief "pacman


getting close", which is good for the enemy, and its variation "pacman getting close with power", which threatens the state of the enemy.

Listing 18: Output for example ”Pac-Man”

adding goal: get eaten, utility = -1, owner = self
adding goal: catch pacman, utility = 0.6, owner = self
adding belief: pacman getting close, likelihood = 0.5, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: pacman getting close, likelihood = 0.5
affected goal: catch pacman, valence = 1
desirability: 0.6 <- 1 x 0.6
goal likelihood: 0.75 <- (1 x 0.5 + 1) / 2
delta goal likelihood: 0.75 <- 0.75
emotion intensity: 0.45 <- abs(0.6 x 0.75)
adding HOPE: 0.45
adding belief: pacman getting close with power, likelihood = 0.7, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: pacman getting close with power, likelihood = 0.7
affected goal: get eaten, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 0.85 <- (1 x 0.7 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.85 <- abs(-1 x 0.85)
adding FEAR: 0.85

Current Internal State: Hope: 0.23; Fear: 0.85

This setup generates two simple emotions, Hope and Fear, which the game can check to modify the expression or the behaviour of the enemy. The Arousal component could help modify the speed of the agents and the Pleasure component could change the expression between happy and scared or stressed.

Populous (1989)

Populous was the first "God-game", a game where the player takes the role of a deity and manages a tiny world and the creatures in it. The game was extremely successful and created a whole new sub-genre of strategy games (Wikipedia, b). The world is populated with several characters and the player gets stronger by creating flat areas where these little people can settle and build a house. The player can modify the terrain for that, but also to destroy the land of the "enemies". Figure 12 shows the main play screen of one of the earliest versions of Populous.

Listing 19: Code for example ”Populous”

 1 EmoBrain agent = new EmoBrain();
 2
 3 // initial goals
 4 agent.Goal("have house", 1f);
 5 agent.Goal("die", -1f);
 6
 7 // the player prepares flat land for the house
 8 agent.Belief("found land", 1f, "deity");
 9 agent.AffectsGoal("found land", "have house", 0.8f);
10 agent.Update();
11
12 // the player creates a volcano nearby
13 agent.Belief("volcano", 1f, "deity");
14 agent.AffectsGoal("volcano", "die", 0.4f);
15 agent.Update();

Figure 12: The main screen of Populous

The goals of the agents are simply defined as "have house" and "not die" (lines 4 and 5). After that, any action of the player and any action of the opponent deities can generate a belief. We give just two examples, one that affects each indicated goal. Each of them also causes emotions towards the responsible deity and this could be used to play some animations to show the agents looking at the camera and being happy or angry.

Listing 20: Output for example ”Populous”

adding goal: have house, utility = 1, owner = self
adding goal: die, utility = -1, owner = self
adding belief: found land, likelihood = 1, agent = deity
Decaying...
Updating...
Recalculating...
recalculating belief: found land, likelihood = 1
affected goal: have house, valence = 0.8
desirability: 0.8 <- 0.8 x 1
goal likelihood: 0.9 <- (0.8 x 1 + 1) / 2
delta goal likelihood: 0.9 <- 0.9
emotion intensity: 0.72 <- abs(0.8 x 0.9)
adding JOY: 0.72
adding GRATITUDE: 0.72 towards deity
adding belief: volcano, likelihood = 1, agent = deity
Decaying...
Updating...
Recalculating...
recalculating belief: volcano, likelihood = 1
affected goal: die, valence = 0.4
desirability: -0.4 <- 0.4 x -1
goal likelihood: 0.7 <- (0.4 x 1 + 1) / 2
delta goal likelihood: 0.7 <- 0.7
emotion intensity: 0.28 <- abs(-0.4 x 0.7)
adding DISTRESS: 0.28
adding ANGER: 0.28 towards deity

Current Internal State: Joy: 0.36; Distress: 0.28
Relationships:
    deity -> Anger: 0.28; Gratitude: 0.36

Super Mario Kart (1992)

Super Mario Kart is a console kart racing game that became one of the best-selling games on the Nintendo consoles and was followed by numerous sequels (Wikipedia, c). The player drives a kart and competes against other agents in a race. The race is not completely fair and racers can use various items, such as throwing bananas, to slow down the opponents. They can also bump the other karts to cause them to spin and be delayed by a couple of important seconds. Figure 13 shows the main play screen of this game, with the current view of the player in the upper half of the screen and the general situation in the lower half.

Figure 13: The main screen of Super Mario Kart

Listing 21: Code for example ”Super Mario Kart”

 1 EmoBrain kart = new EmoBrain();
 2
 3 // initial goals
 4 kart.Goal("crash", -1f);
 5 kart.Goal("win", 0.8f);
 6
 7 // when the player gets ahead
 8 kart.Belief("get ahead", 0.6f);
 9 kart.AffectsGoal("get ahead", "win", 1f);
10 kart.Update();
11
12 // the agent's car is bumped
13 kart.Belief("car bumped", 1f);
14 kart.AffectsGoal("car bumped", "crash", 0.3f);
15 kart.Update();

The initial goals of the agents are to win the race and not crash (lines 4 and 5). When the agent gets ahead we introduce a belief that improves the chances of winning (lines 8 and 9) and that generates Hope. On the other hand, if the agent's kart is bumped, there is a new belief that increases the likelihood of the undesirable "crash" goal. Since this belief is certain (the agent knows his kart was bumped), it will generate Distress.


Listing 22: Output for example ”Super Mario Kart”

adding goal: crash, utility = -1, owner = self
adding goal: win, utility = 0.8, owner = self
adding belief: get ahead, likelihood = 0.6, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: get ahead, likelihood = 0.6
affected goal: win, valence = 1
desirability: 0.8 <- 1 x 0.8
goal likelihood: 0.8 <- (1 x 0.6 + 1) / 2
delta goal likelihood: 0.8 <- 0.8
emotion intensity: 0.64 <- abs(0.8 x 0.8)
adding HOPE: 0.64
adding belief: car bumped, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...
recalculating belief: car bumped, likelihood = 1
affected goal: crash, valence = 0.3
desirability: -0.3 <- 0.3 x -1
goal likelihood: 0.65 <- (0.3 x 1 + 1) / 2
delta goal likelihood: 0.65 <- 0.65
emotion intensity: 0.195 <- abs(-0.3 x 0.65)
adding DISTRESS: 0.195

Current Internal State: Distress: 0.20; Hope: 0.32

The resulting emotions could be used to modify the behaviour of the agents by modifying the speed or precision of driving; for example, if the Fear level is too high, the agent could panic and drive the kart off the road.

4.2.2 Example games

Because of the necessity to show our engine is usable with real games, we decided to implement some short games (only one level/map each) of different genres and make them use the engine in one way or another to simulate emotional agents. Our purpose here was two-fold: first, demonstrate the engine is genre independent and, second, show that standard game AI methods can easily be modulated using the emotions coming as output from the engine. The technical choice fell on Unity3D, which is a fast prototyping environment for 3D games with many advantages: it can publish for the web (using the Unity Web Player), has a huge community that provides examples and prepacked assets, and supports C# as scripting language, which was used to develop the engine. Our technical choice allows us to support the following platforms with no change in the engine: Windows, Linux, XBOX 360, Mac, iOS, Android, Windows Phone, Web Browser. The genres needed here had to be different to implement, have significant agent (NPC) interaction and would normally use various types of game AI agent implementations. After consulting information on the game market (Schwab, 2004; Byl, 2004; Mark, 2009), we decided to implement the games listed in Table 9. These examples offer a wide enough palette of options to show that our library is usable in different real-life games. As far as the emotional output goes, it could be used in two ways: emotion expression or changes in behaviour. Emotional expression can usually be done using specific animations and other visual hints, which require


Game Genre           | Agent AI             | Emotion Focus
Role-Playing         | Behaviour Trees      | NPC Dialogs
Real-time Strategy   | Finite State Machine | Enemy Unit Behaviour
First-person Shooter | Rule-based System    | Enemy Unit Behaviour

Table 9: Games implemented as examples

Figure 14: Screenshots from the RPG implementation

the assistance of a professional artist, and we didn't have that. As a consequence, in our examples we have taken into account only behaviour changes.

Role-Playing game

Classical RPGs involve building a world where the player character can interact with various friendly characters but also fight or compete against enemies. It is quite a complex type of game and has the potential for a huge variety of stories and situations. The setup we came up with, as a very simple example of interaction with both friends and enemies, was to have two areas: one was "the village", where people were neutral to our character and could grow to like him or not, and the other was "the forest", where the character would have to fight enemies. The story was about a village surrounded by monsters, whose people were scared to death about going outside.

Initial Setup To achieve the initial mood we set a goal for all people in the village (3-4 agents, not a huge population) that was called die with utility −1. This means people don't want to achieve that state, so it was an opposite maintenance state interpretable as "stay alive". Also as initial setup, we added a belief monster invasion with likelihood 0.9, because the people in the village are not completely sure about it. The belief affects goal die with valence 0.8, which generated a high level of Fear in the village but also a small amount of Hope. The initial setup of the neutral characters also included some differentiation using the default mood. By setting the default Fear of one of them to 0.3, for example, we ensure this character will generally have a higher chance of getting scared than the others.

Emotional Game Dynamics After the initial setup, belief generation was added to all important events in the game. When the player would talk to villagers, the choices he made


would generate beliefs. When he would win a battle outside of the village, beliefs would also be generated that made people in the village less afraid for their lives, thus generating Hope and also Gratitude towards the player's character. We also introduced a quest by having one of the villagers tell a story about an object he left outside of the village. This was possible by setting an initial goal for that villager, have item, with utility 0.8. The absolute value of the utility was lower than that of the die goal in order to have a higher impact for beliefs that affect the life of the agent (i.e. the agent would not die to retrieve the item). When the player accepts the quest, belief accept quest with likelihood 1 is added, generating Hope and Gratitude for that villager, and when the player actually returns the item, belief receive item with likelihood 1 is added, which affects goal have item with valence 1, meaning the villager has the item. This generates extra feelings of Joy and Satisfaction for the villager.

Emotional Feedback The feedback is given to the player only through dialog. As the game uses dialog trees, which are a special case of behaviour trees (Champandard, 2007), the dialog tree contains certain nodes where the emotion values are checked and the course of the dialog changes accordingly. One such example is the usage of extra phrases in the middle of a speech, as sketched below. By checking the value for Fear, the villager can say an extra line such as "I am afraid to go out of the village", which will disappear if Fear decreases, and when Hope appears he will start saying "I hope we will all make it out of here alive" instead. These examples, even if very simple and obvious, give an idea about the capabilities of the system, which would benefit greatly from using a natural language generator.
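As a sketch, such an emotion-aware dialog node could look like the following; GetIntensity and the 0.4 threshold are assumptions, since the exact query API and tuning values are not reproduced here.

// Sketch of a dialog-tree node that picks an extra line based on the villager's emotions.
public static class VillagerDialogSketch
{
    public static string Greeting(EmoBrain villager)
    {
        string line = "Welcome, traveller.";
        if (villager.GetIntensity("Fear") > 0.4f)        // assumed accessor and threshold
            line += " I am afraid to go out of the village.";
        else if (villager.GetIntensity("Hope") > 0.4f)
            line += " I hope we will all make it out of here alive.";
        return line;
    }
}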

Real-time Strategy game

A real-time strategy game usually contains high-level management of units and resources. The player is the commander of an army and gives orders to his troops that are usually followed without discussion, so the only interaction with other agents usually found in RTS games is battle. The simple setup we chose for this type of game was having two factions (for example Human and Ork); each faction has three types of buildings, which are pre-built at the beginning of the game, and two types of units that can be trained using one of the buildings (the Barracks). Another type of building provides the resources (gold) needed to train troops and the third one is just a central building with no active use. When a building is destroyed, it stays as ruins and becomes non-functional. Ruins can be rebuilt using a unit, but doing so costs gold and takes a few seconds.

Emotional Game Dynamics Here, the initial setup was less complicated, all enemy units receiving upon instantiation the following two goals: die with utility −1 and win with utility 1, which ensures not dying is of equal importance to them as winning the game. No initial beliefs were generated and there were no differences in the default mood. After that, different types of beliefs were introduced in the game. First of all, units would see their goal of staying alive threatened when being wounded: belief wounded, where the likelihood was 1 − health, affects goal die with valence 1. This generates feelings of Fear when units are wounded. Second, if more units start fleeing, a belief fleeing with likelihood 1 is added to all enemy units that affects goal win to an extent proportional to the number of fleeing units. Then the win goal would also be affected negatively by losing buildings and positively by having more enemy troops on the field than player troops. All of these have as effect the generation of feelings


Figure 15: Screenshots from the RTS implementation


Figure 16: The Finite State Machine of an RTS unit

Emotional Feedback All of these "feelings" have to be visible to the player, and for that we found two different ways. One of them was to modify the Finite State Machine used to control the unit. Finite State Machines (Schwab, 2004) are simple structures used to control AI and are very popular in games because of their implementation simplicity, but also because of their scalability, which can generate very complex behaviours. The initial implementation has only two states: Idle and Attack. To show the effects of emotions we added a third one, Flee. The transition is very simple: when the amount of Fear goes above a threshold, the unit enters the Flee state. Figure 16 shows a graphical representation of the states and transitions. The second way was to implement some modifications to the A* pathfinding algorithm (Hart et al., 1968) so that troops lose precision when the emotions are running high. For this we used the Arousal component of the PAD structure. See below for an explanation of how A* works and how it was modified.
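A minimal sketch of the extended transition logic, assuming an arbitrary Fear threshold of 0.5:

enum UnitState { Idle, Attack, Flee }

UnitState NextState(EmoBrain brain, bool enemyInRange)
{
    if (brain.Fear > 0.5f)
        return UnitState.Flee;   // fear overrides the other two states
    return enemyInRange ? UnitState.Attack : UnitState.Idle;
}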

First-Person Shooter game

First-person shooters are games where the player experiences the action through a first-person camera positioned at eye level, and they involve shooting weapons and direct combat. Our example for this type of game was developed to show that a wider range of game genres can benefit from our engine.


Figure 17: Screenshots from the FPS implementation

The way the agents are coded works in a similar manner to the RTS example: they have a very simple decision-making system akin to a finite state machine and they use A* for pathfinding. We implemented the same emulation of fear when their life decreases, and this affects the A* pathfinding precision. An additional feature we added was that Arousal reduces the bots' shooting precision. As a consequence, when they get hurt, they automatically start missing some shots.
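A minimal sketch of how Arousal could drive the shooting error; the maximum spread value and the assumption that Arousal lies in [0, 1] are illustrative choices, not taken from the engine:

static readonly Random Rng = new Random();

// Adds an angular error proportional to Arousal to the ideal aiming angle (in radians).
float AimAngle(float targetAngle, EmoBrain bot, float maxSpread = 0.2f)
{
    float error = (float)(Rng.NextDouble() * 2.0 - 1.0) * bot.Arousal * maxSpread;
    return targetAngle + error;
}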

A* pathfinding modification The A* algorithm (Hart et al., 1968) is a graph search algorithm based on Dijkstra's algorithm (Dijkstra, 1959), which solves the shortest-path problem in a graph. Because in games speed is more important than precision, A* introduces a heuristic function that estimates the remaining cost approximately, so that there is no need to calculate all possible paths.

Listing 23: A* algorithm pseudocode

 1  function A*(start, goal)
 2      closedset := the empty set      // The set of nodes already evaluated.
 3      openset := {start}              // The set of tentative nodes to be evaluated,
 4                                      // initially containing the start node
 5      came_from := the empty map      // The map of navigated nodes.
 6
 7      g_score[start] := 0             // Cost from start along best known path.
 8      // Estimated total cost from start to goal through y.
 9      f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)
10
11      while openset is not empty
12          current := the node in openset having the lowest f_score[] value
13          if current = goal
14              return reconstruct_path(came_from, goal)
15
16          remove current from openset
17          add current to closedset
18          for each neighbor in neighbor_nodes(current)
19              if neighbor in closedset
20                  continue
21              tentative_g_score := g_score[current] + dist_between(current, neighbor)
22
23              if neighbor not in openset or tentative_g_score < g_score[neighbor]
24                  add neighbor to openset
25                  came_from[neighbor] := current
26                  g_score[neighbor] := tentative_g_score
27                  f_score[neighbor] := g_score[neighbor]
28                                       + heuristic_cost_estimate(neighbor, goal)
29
30      return failure
31
32  function reconstruct_path(came_from, current_node)
33      if came_from[current_node] is set
34          p := reconstruct_path(came_from, came_from[current_node])
35          return (p + current_node)
36      else
37          return current_node

We modified this algorithm by adding an extra "precision" argument that is used when searching for the path. When checking the distance (line 23), we multiply one of the sides of the comparison by a random number between 0 and the given parameter. If this parameter has the value of, for example, the Arousal level of the agent, then the result is a possible deviation in the path proportional to the Arousal level. It is a very simple technique, but it can be effective in a crowded situation, when the agents start panicking.
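The exact formula is not reproduced here, but one possible reading of the modification at line 23 is sketched below: one side of the score comparison is scaled by a random factor in [0, precision], where precision would be, for example, the agent's Arousal level.

// With precision = 0 this is the standard A* test; higher values occasionally reject
// genuinely better nodes, so the resulting path drifts away from the optimal one.
bool ShouldRelax(float tentativeGScore, float neighborGScore, float precision, Random rng)
{
    float noise = (float)rng.NextDouble() * precision;   // random number in [0, precision]
    return tentativeGScore * (1f + noise) < neighborGScore;
}

In the full check, this test would still be combined with the "neighbor not in openset" condition from the pseudocode.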

4.3 Performance Tests

The third type of test measures the computational demand of the framework. For this we created two experiments: timing populations of various sizes to see how much time an update needs, and running a real-time graphical representation of a population of agents.

Timing the update

For this experiment we incrementally ran tests consisting of the following steps:

• create a population of size n

• create g goals for each agent

• create b beliefs for each agent

• compute emotions after every 500 beliefs added

• decay

The generated goals and beliefs have random parameters and the beliefs affect a goal chosen randomly. We ran three tests: with one goal and one belief per agent, two goals and two beliefs per agent, and four goals and four beliefs per agent. Figure 18 shows a comparison between the three cases and the amount of time it took to calculate the emotions for each population (see also Table 10).
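A rough sketch of the timing loop described above; the goal and belief naming, the utility and likelihood ranges, and the decision to call Update on every agent after each batch of 500 beliefs are assumptions made for illustration only.

using System;
using System.Diagnostics;

static class PerformanceTest
{
    public static TimeSpan TimePopulation(int n, int g, int b)
    {
        var rng = new Random();
        var agents = new EmoBrain[n];
        for (int i = 0; i < n; i++)
        {
            agents[i] = new EmoBrain();
            for (int j = 0; j < g; j++)
                agents[i].Goal("goal" + j, (float)(rng.NextDouble() * 2 - 1));
        }

        var watch = Stopwatch.StartNew();
        int added = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < b; j++)
            {
                agents[i].Belief("belief" + j, (float)rng.NextDouble());
                agents[i].AffectsGoal("belief" + j, "goal" + rng.Next(g),
                                      (float)(rng.NextDouble() * 2 - 1));
                if (++added % 500 == 0)
                    foreach (var a in agents) a.Update();   // compute emotions every 500 beliefs
            }
        foreach (var a in agents) a.Update();                // final update, including decay
        watch.Stop();
        return watch.Elapsed;
    }
}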

Our tests were run on a 3 GHz processor in a single thread. These values represent the total amount of time needed to calculate the corresponding emotions. Of course, a game cannot afford to spend a full second computing emotions, because a one-second blockage would be visible to the player. The technique that could be used here is a time limit on the computations per frame. For example, when calculating one belief per agent for 1000 agents, the total computing time is 29 milliseconds; by spreading that over 30 frames we need less than 1 millisecond per frame. And this assumes that all 1000 agents receive one belief at the same time, which, for a higher number of agents in the world (thousands), should not happen except for massive events such as disasters. The results are quite good, allowing up to 12,000 agents, with 2 goals and 2 beliefs per agent, to be processed in a total computing time of less than one second.


Figure 18: Timing various populations of agents

We should also keep in mind that for 12,000 agents there are 24,000 beliefs added, and there is an update every 500 beliefs. This means the update is executed 48 times, including computing the new beliefs and decaying the previous emotions. In a real game this could be set up differently, by separating the updating and decaying parts and running the update once per second while the decay runs only once every five seconds.

Sandbox test

As a second performance test we built a sandbox environment using the XNA game framework. The world is a rectangular area where agents live, each represented by a symbol. The user influences them using the mouse: left-clicking an area sends positive beliefs (beliefs that facilitate their goals) to the agents in that area, and right-clicking an area sends negative beliefs (beliefs that block their goals). As an outcome, the agents become agitated (their speed increases) if Arousal is high or calm if it is low, and their color changes between red and green to show the level of Pleasure. Figure 19 shows the test running for 5000 agents in real time, where red is the color for negative pleasure and green is the one for positive pleasure.

In this case each emotional wave only hits a part of the population, and there is no computational problem running 5000 agents while keeping 60 frames per second.
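A minimal sketch of the per-agent visual mapping used in the sandbox, assuming XNA's Color and MathHelper types and assuming that Pleasure lies in [-1, 1] and Arousal in [0, 1] (both ranges are assumptions here):

using Microsoft.Xna.Framework;

// Red for negative pleasure, green for positive pleasure.
Color AgentColor(EmoBrain agent)
{
    float t = MathHelper.Clamp((agent.Pleasure + 1f) / 2f, 0f, 1f);
    return Color.Lerp(Color.Red, Color.Green, t);
}

// Higher Arousal makes the agent more agitated, i.e. it moves faster.
float AgentSpeed(EmoBrain agent, float baseSpeed)
{
    return baseSpeed * (1f + agent.Arousal);
}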


Population size    1 goal and 1 belief    2 goals and 2 beliefs    4 goals and 4 beliefs
1000               0.029                  0.039                    0.061
2000               0.030                  0.050                    0.110
3000               0.052                  0.094                    0.210
4000               0.073                  0.150                    0.307
5000               0.115                  0.206                    0.433
6000               0.152                  0.274                    0.580
7000               0.181                  0.361                    0.744
8000               0.230                  0.431                    0.916
9000               0.288                  0.511                    1.119
10000              0.321                  0.602                    1.314
12000              0.431                  0.827                    1.797
14000              0.556                  1.064                    2.334
16000              0.699                  1.333                    2.953
18000              0.817                  1.641                    3.680
20000              0.970                  1.979                    4.521
25000              1.423                  2.851                    6.858
30000              1.939                  3.967                    9.574
35000              2.515                  5.265                    12.788
40000              3.250                  6.892                    16.353
45000              4.081                  8.582                    20.345
50000              4.834                  10.590                   24.641

Table 10: Time values (in seconds) for the performance tests

5 Conclusion

We have introduced SAGE, a simple affective game engine for multi-agent environments, in particular for commercial games. By leaving some freedom to the programmer of the game, the internal model is kept simple enough to be powerful and generic at the same time. Of course, this means the programmer has to do some modelling work for his own game in order to use the engine to generate emotions. As we have demonstrated in section 4, the goals and beliefs of the in-game agents need to be explicitly stated. This might prove difficult sometimes, but in most cases game characters are already based on a similar system. We have explained and implemented emotions for a few different genres, and it was not difficult to do so. Additions to this model are very easy to make and the system itself is powerful enough to model most situations that can arise in a game. We have tested our engine against the emotion theory by implementing real-life situations and have demonstrated that the outcome is the expected one. Performance is good, the engine being able to run a few thousand emotional agents at once with very good timing.



Figure 19: Performance test on a population of agents. Negative pleasure in red, positive pleasure in green

Using the OCC model for emotions has proven to be a good choice in our opinion, mostly because it seems to have the right amount of complexity we needed. In addition, the OCC model has been tried and tested over the last decade and is thus a safe choice for game developers. We were able to model sixteen different emotions using just a few variables: desirability, likelihood and causal agent. Another good choice was the decision to keep most of the details hidden from the programmer. In the beginning we also wanted a rule-based component that would decide whether a goal is affected by a certain belief. This component was instead removed and the responsibility for the relationship between goals and beliefs was moved to the game programmer. In retrospect, we think this was a good choice for the scope of this research, because there would have been an extra component to develop. It was also a good choice for usability, because the programmer does not have to provide all the rules needed to deduce relationships but only the relationships directly. The engine became easier to use and understand.

The hardest part of this project was to implement short, simple games where people would notice the difference when using emotions. The first problem was that short games have a simple story and few characters, and most situations can very easily be implemented using just a few variables or flags. The second problem was the difficulty of making a short game experience (around 5 to 10 minutes) while making sure the players would see the intended results. These issues made it very complicated for us to set up a usability test with human players, as there was no way to make sure all players would have a similar experience given the short time allowed for playing.

Our approach to building this engine is based on cognitive appraisal but is not entirely a BDI model. We model only the emotions, in such a way that they are based on beliefs and desires (goals), but leave the intentions out to be implemented in the game. We think the engine is usable for professional games, and we have shown that it makes sense from a psychological point of view, that it is usable in many types of games, that it is pluggable by just adding a few lines of code, and that it performs well enough to be used even for large worlds (for example massive multiplayer games).



6 Future Work

We have shown in this paper that simplicity can help build very complex scenarios and worlds, and there are a few extra features that could be added to this framework to make it better. The power and flexibility of the engine really start to show in complex games that need several hours of gameplay and in which the player interacts with tens of other agents, both friendly and hostile. Such games (for example Role-Playing or Adventure) imply a considerable amount of time to set up all possible relationships using traditional techniques, hence the current framework would greatly speed up development and offer better results at the same time.

On the performance side, we believe it is still possible to make some improvements, not specifically in speeding up the computational process, which is already quite fast, but by offering features the games could use. One of these would be managing the computation limitation internally, by having a parameter that specifies how many milliseconds per frame are allowed. This would shift the time management to the engine and make it easier for the game developer. Another one would be multi-processing, which we have not considered at all during this research and which, keeping in mind today's gaming devices, could be a very important addition.

The features that could be added to the core engine include management of norms and tastes. According to the OCC theory (Ortony et al., 1988), norms are useful for an agent to judge the acts of other agents even if they do not affect anyone's goals. For example, agent A lives in a world where it is frowned upon to shout in public, and another agent, B, does exactly that. Even if it does not have a silence goal, the first agent can still judge the actions of agent B and modify the relationship. Tastes refer to preferences the agent has for objects or concepts, which could enrich the interaction between agents regarding gifts, gestures, etc. We think these two additions would improve the palette of tools the game programmer can use to create immersive stories.

Regarding the emotion expression plugins, much more work can be done. A language module that can generate replies from the emotional state would probably be very useful. Such an engine would enable an interaction game to have agents with a huge variation of lines when interacting with the player, offering a virtually unlimited number of different paths a dialog could take. In addition to language, a procedural quest generation tool could be the next step in expanding virtual worlds. In a role-playing game, for example, the quests are usually scripted and, as such, limited in number. Generating them based on the emotions of the NPCs could bring not only variation but also realism. NPCs could "argue" among themselves, have dynamic relationships and, as a consequence, ask the player to take action against another NPC or to help an NPC.

We strongly believe this is the kind of procedural game towards which the industry is moving, but we also know these targets are not very close yet, the mentioned modules still being far from complete on any system. Nevertheless, a simpler model for the underlying engine could speed up the process in the coming years.


Bibliography

C. Bartneck. Integrating the OCC model of emotions in embodied characters. In Proceedings of the Workshop on Virtual Conversational Characters: Applications, Methods, and Research Challenges, Melbourne, 2002. http://bartneck.de/publications/2002/integratingTheOCCModel/bartneckHF2002.pdf.

E. M. Berne. Games People Play: The basic handbook of transactional analysis. Ballantine Books, Aug. 1996. URL http://www.amazon.com/exec/obidos/redirect?tag=citeulike07-20&path=ASIN/0345410033.

J. Broekens, D. DeGroot, and W. A. Kosters. Formal models of appraisal: Theory, specification, and computational model. Cognitive Systems Research, pages 173–197, 2008.

P. B.-D. Byl. Programming Believable Characters for Computer Games (Game Development Series). Charles River Media, Inc., Rockland, MA, USA, 2004. ISBN 1584503238.

P. Carlisle. A framework for emotional digital actors. In A. Lake, editor, Game Programming Gems 8, pages 312–322. Cengage Learning, 2010.

A. Champandard. Behavior trees for next-gen game AI. In Proceedings of the Game Developers Conference, 2007. URL http://aigamedev.com/insider/article/behavior-trees/.

E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959. ISSN 0029-599X. URL http://dx.doi.org/10.1007/BF01386390.

P. Ekman. An argument for basic emotions. Cognition & Emotion, 6(3-4):169–200, May 1992.

C. D. Elliott. The Affective Reasoner: A process model of emotions in a multi-agent system. PhD thesis, Northwestern University, June 1992.

E. Fox. Emotion Science: Cognitive and Neuroscientific Approaches to Understanding Human Emotions. Palgrave Macmillan, Sept. 2008. ISBN 0230005187.

N. H. Frijda. The emotions. Studies in emotion and social interaction. Editions de la Maison des sciences de l'homme, 1986. ISBN 0521316006.

N. H. Frijda and J. Swagerman. Can computers feel? Theory and design of an emotional system. Cognition & Emotion, 1(3):235–257, 1987.

J. D. Funge. Artificial Intelligence For Computer Games: An Introduction. A. K. Peters, Ltd., Natick, MA, USA, 2004. ISBN 1568812086. URL http://portal.acm.org/citation.cfm?id=1014903.

S. Grand. Creation: Life and How to Make It. Harvard University Press, 2000.

S. Grand, D. Cliff, and A. Malhotra. Creatures: artificial life autonomous software agents for home entertainment. In Proceedings of the first international conference on Autonomous agents, AGENTS '97, pages 22–29. ACM, 1997. ISBN 0-89791-877-0.

J. Gratch and S. Marsella. A domain-independent framework for modeling emotion. Journal of Cognitive Systems Research, 5:269–306, 2004.


J. Grey and J. J. Bryson. Procedural quests: A focus for agent interaction in role-playing games. In D. Romano and D. Moffat, editors, Proceedings of the AISB 2011 Symposium: AI & Games, pages 3–10, York, April 2009. SSAISB.

Y. Guoliang, W. Zhiliang, W. Guojiang, and C. Fengjun. Affective computing model based on emotional psychology. Advances in Natural Computation, pages 251–260, 2006.

P. Hart, N. Nilsson, and B. Raphael. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, Feb. 1968. URL http://dx.doi.org/10.1109/TSSC.1968.300136.

E. Hudlicka. This time with feeling: Integrated model of trait and state effects on cognition and behavior. Applied Artificial Intelligence, 16(7-8):611–641, 2002. doi: 10.1080/08339510290030417. URL http://www.tandfonline.com/doi/abs/10.1080/08339510290030417.

E. Hudlicka. Affective game engines: motivation and requirements. In Proceedings of the 4th International Conference on Foundations of Digital Games, FDG '09, pages 299–306, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-437-9.

E. Hudlicka. Guidelines for designing computational models of emotions. International Journal of Synthetic Emotions, 2(1):26–79, 2011.

E. Hudlicka and J. Broekens. Foundations for modelling emotions in game characters: Modelling emotion effects on cognition. In Affective Computing and Intelligent Interaction (ACII), 2009.

K.-L. Liu. Affective Computing for Computer Games: Bots with Emotions. VDM Verlag, Saarbrücken, Germany, 2009. ISBN 3639161645, 9783639161649.

D. Mark. Behavioral Mathematics for Game AI. Charles River Media, Inc., 2009. ISBN 1584506849.

S. Marsella, J. Gratch, and P. Petta. Computational models of emotion. In K. Scherer, T. Banziger, and E. Roesch, editors, A blueprint for affective computing: A sourcebook and manual. Oxford University Press, 2010.

J. McCoy, M. Treanor, B. Samuel, M. Mateas, and N. Wardrip-Fruin. Prom week: social physics as gameplay. In Proceedings of the 6th International Conference on Foundations of Digital Games, FDG '11, pages 319–321, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0804-5.

R. R. McCrae and P. T. Costa. Toward a new generation of personality theories: Theoretical contexts for the five-factor model, pages 51–87. Guilford, New York, 1996.

A. Mehrabian. Basic Dimensions for a General Psychological Theory. OGH Publishers, Cambridge, 1980.

A. Mehrabian. Framework for a comprehensive description and measurement of emotional states. Genetic, social, and general psychology monographs, 121(3):339–361, Aug. 1995. ISSN 8756-7547.


A. Mehrabian. Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4):261–292, Dec. 1996.

C. Morris. Pac man turns 25. http://money.cnn.com/2005/05/10/commentary/game_over/column_gaming/index.htm, May 2005. [Online; accessed 29-July-2012].

A. Ortony, G. L. Clore, and A. Collins. The Cognitive Structure of Emotions. Cambridge University Press, July 1988.

W. Reilly. Modeling What Happens Between Emotional Antecedents and Emotional Consequents. ACE, 2006. URL http://www.ofai.at/~paolo.petta/conf/ace2006/pres/Reilly-ModelingWhatHappensBetween-E0144-pres.pdf.

W. S. N. Reilly. Believable Social and Emotional Agents. PhD thesis, School of Computer Science, Carnegie Mellon University, May 1996.

K. R. Scherer. Appraisal considered as a process of multilevel sequential checking. Appraisal processes in emotion: Theory, methods, research, pages 92–120, 2001.

K. R. Scherer. Toward a dynamic theory of emotion: The component process model of affective states. Geneva Studies in Emotion and Communication, 1:1–98, 1987.

B. Schwab. AI Game Engine Programming (Game Development Series). Charles River Media, Inc., Rockland, MA, USA, 2004. ISBN 1584503440.

C. A. Smith and P. C. Ellsworth. Attitudes and social cognition. Journal of Personality and Social Psychology, 48(4):813–838, 1985.

P. Sweetser, D. Johnson, J. Sweetser, and J. Wiles. Creating engaging artificial characters for games. In Proceedings of the second international conference on Entertainment computing, ICEC '03, pages 1–8. Carnegie Mellon University, 2003.

J. Wexler. Artificial intelligence in games: A look at the smarts behind Lionhead Studio's "Black and White" and where it can go and will go in the future, 2002.

Wikipedia. Pac-man. http://en.wikipedia.org/wiki/Pacman, a. [Online; accessed 29-July-2012].

Wikipedia. Populous. http://en.wikipedia.org/wiki/Populous, b. [Online; accessed 29-July-2012].

Wikipedia. Super mario kart. http://en.wikipedia.org/wiki/Super_mario_kart, c. [Online; accessed 29-July-2012].


A Emotion generation examples

These examples are taken from the book that defined the OCC emotional model (Ortony et al., 1988).

Joy (example 5.1, page 86)

Description: The man was pleased when he realized he was to get a small inheritance from an unknown distant relative.

Listing 24: Code for example ”Joy”

EmoBrain brain = new EmoBrain();
brain.Goal("become rich", 0.8f);
brain.Belief("inheritance", 1, "distant relative");
brain.AffectsGoal("inheritance", "become rich", 0.5f);
brain.Update();

Listing 25: Output for example ”Joy”

adding goal: become rich, utility = 0.8, owner = self
adding belief: inheritance, likelihood = 1, agent = distant relative
Decaying...
Updating...
Recalculating...

recalculating belief: inheritance, likelihood = 1
affected goal: become rich, valence = 0.5
desirability: 0.4 <- 0.5 x 0.8
goal likelihood: 0.75 <- (0.5 x 1 + 1) / 2
delta goal likelihood: 0.75 <- 0.75
emotion intensity: 0.3 <- abs(0.4 x 0.75)
adding JOY: 0.3
adding GRATITUDE: 0.3 towards distant relative

Current Internal State: Joy: 0.30
Relationships:
distant relative -> Gratitude: 0.30

Distress (example 5.2, page 87)

Description: The driver was upset about running out of gas on the freeway.

Listing 26: Code for example ”Distress”

EmoBrain brain = new EmoBrain();
brain.Goal("get home", 0.8f);
brain.Belief("run out of gas", 1);
brain.AffectsGoal("run out of gas", "get home", -0.6f);
brain.Update();

Listing 27: Output for example ”Distress”

adding goal: get home, utility = 0.8, owner = self
adding belief: run out of gas, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: run out of gas, likelihood = 1
affected goal: get home, valence = -0.6
desirability: -0.48 <- -0.6 x 0.8
goal likelihood: 0.2 <- (-0.6 x 1 + 1) / 2
delta goal likelihood: 0.2 <- 0.2
emotion intensity: 0.096 <- abs(-0.48 x 0.2)
adding DISTRESS: 0.096

Current Internal State: Distress: 0.10

Hope (example 6.1, page 112)

Description: As she thought about the possibility of being asked to the dance, the girl was filled with hope.

Listing 28: Code for example ”Hope”

EmoBrain brain = new EmoBrain();
brain.Goal("go to dance", 0.9f);
brain.Belief("get invited to dance", 0.8f);
brain.AffectsGoal("get invited to dance", "go to dance", 1f);
brain.Update();

Listing 29: Output for example ”Hope”

adding goal: go to dance, utility = 0.9, owner = self
adding belief: get invited to dance, likelihood = 0.8, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: get invited to dance, likelihood = 0.8
affected goal: go to dance, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 0.9 <- (1 x 0.8 + 1) / 2
delta goal likelihood: 0.9 <- 0.9
emotion intensity: 0.8099999 <- abs(0.9 x 0.9)
adding HOPE: 0.8099999

Current Internal State: Hope: 0.81

Fear (example 6.2, page 112)

Description: The employee, suspecting he was no longer needed, feared that he would be fired.

Listing 30: Code for example ”Fear”

EmoBrain brain = new EmoBrain();
brain.Goal("lose job", -0.8f);
brain.Belief("get fired", 0.7f);
brain.AffectsGoal("get fired", "lose job", 1f);
brain.Update();

Listing 31: Output for example ”Fear”

adding goal: lose job, utility = -0.8, owner = self
adding belief: get fired, likelihood = 0.7, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: get fired, likelihood = 0.7
affected goal: lose job, valence = 1
desirability: -0.8 <- 1 x -0.8
goal likelihood: 0.85 <- (1 x 0.7 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.68 <- abs(-0.8 x 0.85)
adding FEAR: 0.68

Current Internal State: Fear: 0.68

Satisfaction (example 6.3, page 118)

Description: When she realized that she was indeed being asked to go to the dance by the boy of her dreams, the girl was gratified.

Listing 32: Code for example ”Satisfaction”

EmoBrain brain = new EmoBrain();
brain.Goal("go to dance with boy", 0.9f);
brain.Belief("get invited by boy", 0.4f, "boy");
brain.AffectsGoal("get invited by boy", "go to dance with boy", 1f);
brain.Update();
brain.Belief("get invited by boy", 1f, "boy");
brain.Update();

Listing 33: Output for example ”Satisfaction”

adding goal: go to dance with boy, utility = 0.9, owner = self
adding belief: get invited by boy, likelihood = 0.4, agent = boy
Decaying...
Updating...
Recalculating...

recalculating belief: get invited by boy, likelihood = 0.4
affected goal: go to dance with boy, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 0.7 <- (1 x 0.4 + 1) / 2
delta goal likelihood: 0.7 <- 0.7
emotion intensity: 0.63 <- abs(0.9 x 0.7)
adding HOPE: 0.63
adding GRATITUDE: 0.63 towards boy
adding belief: get invited by boy, likelihood = 1, agent = boy
Decaying...
Updating...
Recalculating...

recalculating belief: get invited by boy, likelihood = 1
affected goal: go to dance with boy, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 0.3 <- 1 - 0.7
emotion intensity: 0.27 <- abs(0.9 x 0.3)
adding JOY: 0.27
adding SATISFACTION: 0.27
adding GRATITUDE: 0.27 towards boy

Current Internal State: Joy: 0.27; Hope: 0.32; Satisfaction: 0.27
Relationships:
boy -> Gratitude: 0.39


Fears-Confirmed (example 6.4, page 119)

Description: The employee's fears were confirmed when he learned that he was indeed going to be fired.

Listing 34: Code for example ”Fears-Confirmed”

EmoBrain brain = new EmoBrain();
brain.Goal("lose job", -0.9f);
brain.Belief("get fired", 0.4f);
brain.AffectsGoal("get fired", "lose job", 1f);
brain.Update();
brain.Belief("get fired", 1f);
brain.Update();

Listing 35: Output for example ”Fears-Confirmed”

adding goal: lose job, utility = -0.9, owner = self
adding belief: get fired, likelihood = 0.4, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: get fired, likelihood = 0.4
affected goal: lose job, valence = 1
desirability: -0.9 <- 1 x -0.9
goal likelihood: 0.7 <- (1 x 0.4 + 1) / 2
delta goal likelihood: 0.7 <- 0.7
emotion intensity: 0.63 <- abs(-0.9 x 0.7)
adding FEAR: 0.63
adding belief: get fired, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: get fired, likelihood = 1
affected goal: lose job, valence = 1
desirability: -0.9 <- 1 x -0.9
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 0.3 <- 1 - 0.7
emotion intensity: 0.27 <- abs(-0.9 x 0.3)
adding DISTRESS: 0.27
adding FEARS CONFIRMED: 0.27

Current Internal State: Distress: 0.27; Fear: 0.32; FearsConfirmed: 0.27

Relief (example 6.5, page 121)

Description: The employee was relieved to learn that he was not going to be fired.

Listing 36: Code for example ”Relief”

EmoBrain brain = new EmoBrain();
brain.Goal("lose job", -0.9f);
brain.Belief("get fired", 0.6f);
brain.AffectsGoal("get fired", "lose job", 1f);
brain.Update();
brain.Belief("get fired", 0f);
brain.Update();


Listing 37: Output for example ”Relief”

adding goal: lose job, utility = -0.9, owner = self
adding belief: get fired, likelihood = 0.6, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: get fired, likelihood = 0.6
affected goal: lose job, valence = 1
desirability: -0.9 <- 1 x -0.9
goal likelihood: 0.8 <- (1 x 0.6 + 1) / 2
delta goal likelihood: 0.8 <- 0.8
emotion intensity: 0.72 <- abs(-0.9 x 0.8)
adding FEAR: 0.72
adding belief: get fired, likelihood = 0, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: get fired, likelihood = 0
affected goal: lose job, valence = 1
desirability: -0.9 <- 1 x -0.9
goal likelihood: 0.5 <- (1 x 0 + 1) / 2
delta goal likelihood: -0.3 <- 0.5 - 0.8
emotion intensity: 0.27 <- abs(-0.9 x -0.3)
adding JOY: 0.27
adding RELIEF: 0.27

Current Internal State: Joy: 0.27; Fear: 0.36; Relief: 0.27

Disappointment (example 6.6, page 122)

Description: The girl was disappointed when she realized that she would not be asked to the dance after all.

Listing 38: Code for example ”Disappointment”

EmoBrain brain = new EmoBrain();
brain.Goal("go to dance with boy", 0.9f);
brain.Belief("get invited by boy", 0.6f, "boy");
brain.AffectsGoal("get invited by boy", "go to dance with boy", 1f);
brain.Update();
brain.Belief("get invited by boy", 0f, "boy");
brain.Update();

Listing 39: Output for example ”Disappointment”

adding goal: go to dance with boy, utility = 0.9, owner = self
adding belief: get invited by boy, likelihood = 0.6, agent = boy
Decaying...
Updating...
Recalculating...

recalculating belief: get invited by boy, likelihood = 0.6
affected goal: go to dance with boy, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 0.8 <- (1 x 0.6 + 1) / 2
delta goal likelihood: 0.8 <- 0.8
emotion intensity: 0.72 <- abs(0.9 x 0.8)
adding HOPE: 0.72
adding GRATITUDE: 0.72 towards boy
adding belief: get invited by boy, likelihood = 0, agent = boy
Decaying...
Updating...
Recalculating...

recalculating belief: get invited by boy, likelihood = 0
affected goal: go to dance with boy, valence = 1
desirability: 0.9 <- 1 x 0.9
goal likelihood: 0.5 <- (1 x 0 + 1) / 2
delta goal likelihood: -0.3 <- 0.5 - 0.8
emotion intensity: 0.27 <- abs(0.9 x -0.3)
adding DISTRESS: 0.27
adding DISAPPOINTMENT: 0.27
adding ANGER: 0.27 towards boy

Current Internal State: Distress: 0.27; Hope: 0.36; Disappointment: 0.27
Relationships:
boy -> Anger: 0.27; Gratitude: 0.36

Gratitude (example 7.5, page 148)

Description: The woman was grateful to the stranger for saving the life of her child.

Listing 40: Code for example ”Gratitude”

EmoBrain brain = new EmoBrain();
brain.Goal("lose child", -1f);
brain.Belief("child in danger", 0.6f);
brain.AffectsGoal("child in danger", "lose child", 1f);
brain.Update();
brain.Belief("save child", 1f, "stranger");
brain.AffectsGoal("save child", "lose child", -1f);
brain.Update();

Listing 41: Output for example ”Gratitude”

adding goal: lose child, utility = -1, owner = self
adding belief: child in danger, likelihood = 0.6, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: child in danger, likelihood = 0.6
affected goal: lose child, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 0.8 <- (1 x 0.6 + 1) / 2
delta goal likelihood: 0.8 <- 0.8
emotion intensity: 0.8 <- abs(-1 x 0.8)
adding FEAR: 0.8
adding belief: save child, likelihood = 1, agent = stranger
Decaying...
Updating...
Recalculating...

recalculating belief: save child, likelihood = 1
affected goal: lose child, valence = -1
desirability: 1 <- -1 x -1
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: -0.8 <- 0 - 0.8
emotion intensity: 0.8 <- abs(1 x -0.8)
adding JOY: 0.8
adding SATISFACTION: 0.8
adding GRATITUDE: 0.8 towards stranger

Current Internal State: Joy: 0.80; Fear: 0.40; Satisfaction: 0.80
Relationships:
stranger -> Gratitude: 0.80

Anger (example 7.6, page 148)

Description: The woman was angry with her husband for forgetting to buy the groceries.

Listing 42: Code for example ”Anger”

EmoBrain brain = new EmoBrain();
brain.Goal("have groceries", 0.7f);
brain.Belief("forgot groceries", 1f, "husband");
brain.AffectsGoal("forgot groceries", "have groceries", -1f);
brain.Update();

Listing 43: Output for example ”Anger”

adding goal: have groceries, utility = 0.7, owner = self
adding belief: forgot groceries, likelihood = 1, agent = husband
Decaying...
Updating...
Recalculating...

recalculating belief: forgot groceries, likelihood = 1
affected goal: have groceries, valence = -1
desirability: -0.7 <- -1 x 0.7
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: 0 <- 0
special case, the goal is directly blocked, delta becomes the desirability: -0.7
emotion intensity: 0.49 <- abs(-0.7 x -0.7)
adding DISTRESS: 0.49
adding ANGER: 0.49 towards husband

Current Internal State: Distress: 0.49
Relationships:
husband -> Anger: 0.49

Pride (Gratification) (example 7.1, page 137)

Description: The woman was proud of saving the life of a drowning child.

Listing 44: Code for example ”Pride”

EmoBrain brain = new EmoBrain();
brain.Goal("like kids", 0.7f);
brain.Belief("is friendly", 1f, "kid");
brain.AffectsGoal("is friendly", "like kids", 1f);
brain.Update();
brain.Goal("die", -1f, "kid");
brain.Belief("kid is drowning", 0.7f);
brain.AffectsGoal("kid is drowning", "die", 1f);
brain.Update();
brain.Belief("save kid", 1f, "self");
brain.AffectsGoal("save kid", "die", -1);
brain.Update();

Listing 45: Output for example ”Pride”

adding goal: like kids, utility = 0.7, owner = self
adding belief: is friendly, likelihood = 1, agent = kid
Decaying...
Updating...
Recalculating...

recalculating belief: is friendly, likelihood = 1
affected goal: like kids, valence = 1
desirability: 0.7 <- 1 x 0.7
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 0.7 <- abs(0.7 x 1)
adding JOY: 0.7
adding GRATITUDE: 0.7 towards kid
adding goal: die, utility = -1, owner = kid
adding belief: kid is drowning, likelihood = 0.7, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: kid is drowning, likelihood = 0.7
affected goal: die, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 0.85 <- (1 x 0.7 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.85 <- abs(-1 x 0.85)
adding PITY: 0.85 towards kid
adding belief: save kid, likelihood = 1, agent = self
Decaying...
Updating...
Recalculating...

recalculating belief: save kid, likelihood = 1
affected goal: die, valence = -1
desirability: 1 <- -1 x -1
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: -0.85 <- 0 - 0.85
emotion intensity: 0.85 <- abs(1 x -0.85)
adding GRATIFICATION: 0.85 towards self
adding HAPPY-FOR: 0.85 towards kid

Current Internal State: Joy: 0.18
Relationships:
kid -> Gratitude: 0.18; Gratification: 0.85; HappyFor: 0.85; Pity: 0.43

Guilt (Self-Reproach) (example 7.2, page 137)

Description: The spy was ashamed of having betrayed his country.

Listing 46: Code for example ”Guilt”

EmoBrain brain = new EmoBrain();
brain.Goal("feel safe", 1f);
brain.Belief("is safe", 1f, "country");
brain.AffectsGoal("is safe", "feel safe", 0.7f);
brain.Update();
brain.Goal("be betrayed", -1f, "country");
brain.Belief("betray country", 1f, "self");
brain.AffectsGoal("betray country", "be betrayed", 1f);
brain.Update();

Listing 47: Output for example ”Guilt”

adding goal: feel safe, utility = 1, owner = self
adding belief: is safe, likelihood = 1, agent = country
Decaying...
Updating...
Recalculating...

recalculating belief: is safe, likelihood = 1
affected goal: feel safe, valence = 0.7
desirability: 0.7 <- 0.7 x 1
goal likelihood: 0.85 <- (0.7 x 1 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.595 <- abs(0.7 x 0.85)
adding JOY: 0.595
adding GRATITUDE: 0.595 towards country
adding goal: be betrayed, utility = -1, owner = country
adding belief: betray country, likelihood = 1, agent = self
Decaying...
Updating...
Recalculating...

recalculating belief: betray country, likelihood = 1
affected goal: be betrayed, valence = 1
desirability: -1 <- 1 x -1
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 1 <- abs(-1 x 1)
adding GUILT: 1 towards country
adding PITY: 1 towards country

Current Internal State: Joy: 0.30
Relationships:
country -> Guilt: 1.00; Gratitude: 0.30; Pity: 1.00

Happy-For (example 5.3, page 93)

Description: Fred was happy for his friend Mary because she won a thousand dollars.

Listing 48: Code for example ”Happy-For”

EmoBrain brain = new EmoBrain();
brain.Goal("have friends", 0.9f);
brain.Belief("mary is friend", 1f, "mary");
brain.AffectsGoal("mary is friend", "have friends", 0.7f);
brain.Update();
brain.Goal("have money", 0.7f, "mary");
brain.Belief("mary won money", 1f);
brain.AffectsGoal("mary won money", "have money", 1f);
brain.Update();

Listing 49: Output for example ”Happy-For”

adding goal: have friends, utility = 0.9, owner = self
adding belief: mary is friend, likelihood = 1, agent = mary
Decaying...
Updating...
Recalculating...

recalculating belief: mary is friend, likelihood = 1
affected goal: have friends, valence = 0.7
desirability: 0.63 <- 0.7 x 0.9
goal likelihood: 0.85 <- (0.7 x 1 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.5355 <- abs(0.63 x 0.85)
adding JOY: 0.5355
adding GRATITUDE: 0.5355 towards mary
adding goal: have money, utility = 0.7, owner = mary
adding belief: mary won money, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: mary won money, likelihood = 1
affected goal: have money, valence = 1
desirability: 0.7 <- 1 x 0.7
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 0.7 <- abs(0.7 x 1)
adding HAPPY-FOR: 0.7 towards mary

Current Internal State: Joy: 0.27
Relationships:
mary -> Gratitude: 0.27; HappyFor: 0.70

Pity (Sorry-For) (example 5.4, page 93)

Description: Fred was sorry for his friend Mary because her husband was killed in a car crash.

Listing 50: Code for example ”Pity”

EmoBrain brain = new EmoBrain();
brain.Goal("have friends", 0.9f);
brain.Belief("mary is friend", 1f, "mary");
brain.AffectsGoal("mary is friend", "have friends", 0.7f);
brain.Update();
brain.Goal("live with husband", 1f, "mary");
brain.Belief("car crash", 1f);
brain.AffectsGoal("car crash", "live with husband", -1f);
brain.Update();

Listing 51: Output for example ”Pity”

adding goal: have friends, utility = 0.9, owner = self
adding belief: mary is friend, likelihood = 1, agent = mary
Decaying...
Updating...
Recalculating...

recalculating belief: mary is friend, likelihood = 1
affected goal: have friends, valence = 0.7
desirability: 0.63 <- 0.7 x 0.9
goal likelihood: 0.85 <- (0.7 x 1 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.5355 <- abs(0.63 x 0.85)
adding JOY: 0.5355
adding GRATITUDE: 0.5355 towards mary
adding goal: live with husband, utility = 1, owner = mary
adding belief: car crash, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: car crash, likelihood = 1
affected goal: live with husband, valence = -1
desirability: -1 <- -1 x 1
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: 0 <- 0
special case, the goal is directly blocked, delta becomes the desirability: -1
emotion intensity: 1 <- abs(-1 x -1)
adding PITY: 1 towards mary

Current Internal State: Joy: 0.27
Relationships:
mary -> Gratitude: 0.27; Pity: 1.00

Resentment (example 5.5, page 99)

Description: The executive resented the large pay raise awarded to a colleague whom he considered incompetent.

Listing 52: Code for example ”Resentment”

EmoBrain brain = new EmoBrain();
brain.Goal("work with incompetents", -0.8f);
brain.Belief("incompetent actions", 1f, "colleague");
brain.AffectsGoal("incompetent actions", "work with incompetents", 0.7f);
brain.Update();
brain.Goal("get more money", 0.8f, "colleague");
brain.Belief("pay rise", 1f);
brain.AffectsGoal("pay rise", "get more money", 1f);
brain.Update();

Listing 53: Output for example ”Resentment”

adding goal: work with incompetents, utility = -0.8, owner = self
adding belief: incompetent actions, likelihood = 1, agent = colleague
Decaying...
Updating...
Recalculating...

recalculating belief: incompetent actions, likelihood = 1
affected goal: work with incompetents, valence = 0.7
desirability: -0.56 <- 0.7 x -0.8
goal likelihood: 0.85 <- (0.7 x 1 + 1) / 2
delta goal likelihood: 0.85 <- 0.85
emotion intensity: 0.476 <- abs(-0.56 x 0.85)
adding DISTRESS: 0.476
adding ANGER: 0.476 towards colleague
adding goal: get more money, utility = 0.8, owner = colleague
adding belief: pay rise, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: pay rise, likelihood = 1
affected goal: get more money, valence = 1
desirability: 0.8 <- 1 x 0.8
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1
emotion intensity: 0.8 <- abs(0.8 x 1)
adding RESENTMENT: 0.8 towards colleague

Current Internal State: Distress: 0.24
Relationships:
colleague -> Anger: 0.24; Resentment: 0.80

Gloating (example 5.6, page 100)

Description: Political opponents of Richard Nixon gloated over his ignominious departure from office.

Listing 54: Code for example ”Gloating”

EmoBrain brain = new EmoBrain();
brain.Goal("have a democrat president", 0.8f);
brain.Belief("Nixon won elections", 1f, "Nixon");
brain.AffectsGoal("Nixon won elections", "have a democrat president", -1f);
brain.Update();
brain.Goal("stay in office", 0.8f, "Nixon");
brain.Belief("watergate scandal", 1f);
brain.AffectsGoal("watergate scandal", "stay in office", -1f);
brain.AffectsGoal("watergate scandal", "have a democrat president", 1f);
brain.Update();

Listing 55: Output for example ”Gloating”

adding goal: have a democrat president, utility = 0.8, owner = self
adding belief: Nixon won elections, likelihood = 1, agent = Nixon
Decaying...
Updating...
Recalculating...

recalculating belief: Nixon won elections, likelihood = 1
affected goal: have a democrat president, valence = -1
desirability: -0.8 <- -1 x 0.8
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: 0 <- 0
special case, the goal is directly blocked, delta becomes the desirability: -0.8
emotion intensity: 0.64 <- abs(-0.8 x -0.8)
adding DISTRESS: 0.64
adding ANGER: 0.64 towards Nixon
adding goal: stay in office, utility = 0.8, owner = Nixon
adding belief: watergate scandal, likelihood = 1, agent =
Decaying...
Updating...
Recalculating...

recalculating belief: watergate scandal, likelihood = 1
affected goal: stay in office, valence = -1
desirability: -0.8 <- -1 x 0.8
goal likelihood: 0 <- (-1 x 1 + 1) / 2
delta goal likelihood: 0 <- 0
special case, the goal is directly blocked, delta becomes the desirability: -0.8
emotion intensity: 0.64 <- abs(-0.8 x -0.8)
adding GLOATING: 0.64 towards Nixon
affected goal: have a democrat president, valence = 1
desirability: 0.8 <- 1 x 0.8
goal likelihood: 1 <- (1 x 1 + 1) / 2
delta goal likelihood: 1 <- 1 - 0
emotion intensity: 0.8 <- abs(0.8 x 1)
adding JOY: 0.8
adding SATISFACTION: 0.8

Current Internal State: Joy: 0.80; Distress: 0.32; Satisfaction: 0.80
Relationships:
Nixon -> Anger: 0.32; Gloating: 0.64

B EmoBrain class reference

Represents the emotional part of the brain of an agent. This is the only public class in the library and it contains the only members accessible to the client (the game).

B.1 Constructor

EmoBrain(...)
Constructor of the class. Initializes the emotional brain with a default mood and the decay functions.

Parameters:

• joy: The default level of joy (optional, default value = 0)
• distress: The default level of distress (optional, default value = 0)
• hope: The default level of hope (optional, default value = 0)
• fear: The default level of fear (optional, default value = 0)
• satisfaction: The default level of satisfaction (optional, default value = 0)
• fearsConfirmed: The default level of fearsConfirmed (optional, default value = 0)
• disappointment: The default level of disappointment (optional, default value = 0)
• relief: The default level of relief (optional, default value = 0)
• anger: The default level of anger (optional, default value = 0)
• guilt: The default level of guilt (optional, default value = 0)
• gratitude: The default level of gratitude (optional, default value = 0)
• gratification: The default level of gratification (optional, default value = 0)
• happyFor: The default level of happyFor (optional, default value = 0)
• pity: The default level of pity (optional, default value = 0)
• gloating: The default level of gloating (optional, default value = 0)
• resentment: The default level of resentment (optional, default value = 0)

B.2 Public Member Functions

float Anger(string name)
The anger level towards another agent


float Guilt(string name)
The guilt level towards another agent

float Gratitude(string name)
The gratitude level towards another agent

float Gratification(string name)
The gratification level towards another agent

float HappyFor(string name)
The "happy-for" level towards another agent

float Pity(string name)
The pity level towards another agent

float Gloating(string name)
The gloating level towards another agent

float Resentment(string name)
The resentment level towards another agent

void Goal(string name, float utility, string owner=null)
Add or modify a goal in the goal set

void Belief(string name, float likelihood, string agent=null)
Add or modify a belief in the belief set

void AffectsGoal(string belief, string goal, float valence)
Updates the influence of a belief on a goal

• belief: The identifier of the belief

• goal: The identifier of the goal

• valence: The way it affects the goal (-1 = goal is blocked, 1 = goal is succeeding, everything in between = variations)

void Update(float gameTime=0f)
Updates the intensities of emotions over time

override string ToString()
Returns a string representation of the current state

B.3 Static Public Attributes

static Logger Logger = null
The function used to log messages, if any

B.4 Properties

float Pleasure [get]
The PAD pleasure level corresponding to the current emotional state

float Arousal [get]
The PAD arousal level corresponding to the current emotional state


float Dominance [get]
The PAD dominance level corresponding to the current emotional state

float Exuberance [get]
The PAD exuberance temperament axis

float Dependency [get]
The PAD dependency temperament axis

float Relaxation [get]
The PAD relaxation temperament axis

float Docility [get]
The PAD docility temperament axis

float Joy [get]
The joy component

float Distress [get]
The distress component

float Hope [get]
The hope component

float Fear [get]
The fear component

float Satisfaction [get]
The satisfaction component

float FearsConfirmed [get]
The fears confirmed component

float Disappointment [get]
The disappointment component

float Relief [get]
The relief component
