
Deliverables Report

IST-2001-33310 VICTEC

<May 2004>

First Prototype of the bullying Demonstrator

AUTHORS: Ana Paiva, Daniel Sobral, João Dias, Jonathan Pickering, Marco Vala, Ruth Aylett, Sandy Louchart

STATUS: Final

CHECKERS: Sarah Woods, Malcolm Padmore

-1-Deliverable 6.4.1/Report no.1/Final Version


PROJECT MANAGER

Name: Ruth AylettAddress: CVE, Business House, University of Salford, University Road,, Salford, M5 4WTPhone Number: +44 161 295 2922Fax Number:+44 161 295 2925E-mail: [email protected]

TABLE OF CONTENTS

PURPOSE OF DOCUMENT
EXECUTIVE OVERVIEW
1. INTRODUCTION
2. AN AGENT-BASED APPROACH FOR FEARNOT!
   ION-Framework
   Implementation of domain knowledge
   Implementation of agents
   Building an application with the framework
3. IMPLEMENTATION OF THE BULLYING DEMONSTRATOR
   Using the framework to implement the demonstrator
      The bullying domain
      Characters
      Visualization System
      Narrative Control
   The FearNot! demonstrator
      Description from the user's point of view
      Description from an implementation point of view
4. EVALUATION
5. CONCLUSIONS AND FURTHER WORK
   Work done so far
   Further Work
6. REFERENCES


PURPOSE OF DOCUMENT

This document constitutes Deliverable 6.4.1 of the IST Project VICTEC. It describes a first prototype of the bullying demonstrator, following the requirements assessed in the previous Deliverable 6.3.1. The document presents a first approach towards the implementation of the architecture proposed in that deliverable. Further, it details the solutions adopted while developing the bullying demonstrator. Finally, some issues and resolutions regarding the final software are discussed.


EXECUTIVE OVERVIEW

This Deliverable is the second deliverable from WP6 (First prototype of the bullying demonstrator) of the IST-sponsored VICTEC project. The main goal of this workpackage is to demonstrate the use of the technology developed in the other workpackages through the construction of a virtual improvisational drama environment dealing with violence and bullying in schools. This document's purpose is the thorough description of a first prototype of the bullying demonstrator, which is the central delivery of this workpackage.


1. INTRODUCTION

The anti-bullying demonstrator is one of the main products of this project, and therefore relates to all the project aims. The FearNot! demonstrator takes the form of an episodic virtual drama. A single child user interacts with the demonstrator by acting as the ‘invisible friend’ of one of the virtual characters and advising them between episodes. One of the main aspects of the demonstrator is that it should raise awareness and evoke empathy in the children interacting with the system. An introductory segment precedes the episodes, and a reflective interactive segment follows them. The reflective interactive segment allows children to give their input to the system and influence events in the subsequent episodes.

This document focuses on the implementation of a first prototype of the FearNot! bullying demonstrator towards the achievement of the proposed goals. Section 2 briefly discusses the agent-based approach used to develop the application. Section 3 elaborates on the implementation of the demonstrator. Finally, we present some preliminary results and draw some conclusions.


2. AN AGENT-BASED APPROACH FOR FEARNOT!

The bullying demonstrator, named FearNot!, is an interactive system that generates bullying episodes based on the behaviour of synthetic characters implemented as autonomous agents.

As described in previous deliverables (D3.2.1, D6.3.1), we adopted an agent-based approach during our development cycle. This adoption led to the development of a generic framework that enabled a separation of the system components, along with the creation of interfaces between them. In the following sections, we briefly describe the framework and how it was used to develop our system.

ION-Framework

This generic agent-based framework (described more thoroughly in the aforementioned deliverables) was the basis for the development of the run-time environment. It served as a platform for a knowledge-sharing environment, where multiple agents communicate within a virtual space.

Implementation of domain knowledge

The application information is stored in the form of Entities. Each entity can have a set of properties that describe it. The set of stored entities forms the World Model. Entities can be preloaded at the beginning of the application or dynamically created and destroyed in real time. For example, one of the entities that may exist in the world is a “book” whose location is on another entity, a “table”.

<WorldObject>
  <Property id='Name'>Book</Property>
  <Property id='Owner'>John</Property>
  <Property id='Location'>Table</Property>
  <Property id='On'>Table</Property>
</WorldObject>

Figure 1 – Example of an entity’s description

Entities form a hierarchy that represents all types of objects (Figure 2). This includes the notion of an object (World Object), a physical location (a Locale, within which other entities can be contained – expressing a special relation between them), and the agent’s representation in the virtual environment (Agent Proxy). For example, a Locale can be a “classroom” which contains another entity, the “table”. Locales may also contain Agents, which are represented in this symbolic representation of the world by an “Agent Proxy” (see Figure 2).

Entities are typically created to hold information that is relevant to some/all the agents present in the virtual space. Information that is required only by one agent is usually not stored within the world model, as this would put an extra burden on the system. This way, only the common concepts are effectively stored and shared.
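As a concrete illustration, an entity can be sketched as little more than a typed property map. The class below is our own minimal sketch under that reading; the names are illustrative and not the actual ION-Framework API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a shared entity: a node holding named string properties.
// Illustrative only -- not the real framework class.
public class Entity {
    private final Map<String, String> properties = new HashMap<>();

    public void setProperty(String id, String value) {
        properties.put(id, value);
    }

    public String getProperty(String id) {
        return properties.get(id);
    }
}
```

Under this sketch, the book of Figure 1 would be an Entity carrying Name, Owner, Location and On properties, stored in the World Model where all agents can read it.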

Implementation of agents

Agents are represented in the virtual world through the agent proxy (Figure 3). This entity forms the interface between the agent’s decision-making and the information flowing in the environment. The agent proxy holds a set of Effectors and Sensors through which the agent can participate in the virtual space.

Figure 3 – Agent Proxy

Figure 2 – The Entity Hierarchy (Entity → WorldObject, Group, AgentProxy, Locale)


Effectors are the mechanism through which the agent can make changes in the shared information. These changes generate events, which sensors can capture to become aware of them. The activation of an effector, by itself, also generates an event that can be captured. This can be used, for example, to generate speech acts, as described in the following chapter and in deliverable D5.3.1.

Figure 4 – An Agent (signalled with the blue circle) within the virtual space

Effectors constitute, in brief, the action language. It is relevant to note that an agent need not know, a priori, all the effectors and entities that may exist, since the agent may infer the meaning of an effector by observing its effect on the world model. This offers an extra degree of flexibility, since entities can be added or removed, and even effectors and sensors can be activated or deactivated, dynamically at run-time.
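The event mechanism described above can be sketched as a simple publish/subscribe loop: activating an effector is itself an event, distributed to every registered sensor even when no property changes. All class and method names below are illustrative, not the framework’s.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch: effector activation generates an event that is
// distributed to registered sensors. This is how a speech act -- which
// changes no property -- is still perceived by other agents.
public class EventBus {
    public static class Event {
        public final String effectorName;
        public Event(String effectorName) { this.effectorName = effectorName; }
    }

    private final List<Consumer<Event>> sensors = new ArrayList<>();

    public void registerSensor(Consumer<Event> sensor) {
        sensors.add(sensor);
    }

    // Activating an effector creates an Event and pushes it to all sensors.
    public void activateEffector(String name) {
        Event e = new Event(name);
        for (Consumer<Event> s : sensors) {
            s.accept(e);
        }
    }
}
```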

Figure 5 – The execution cycle of an agent and its impact in the virtual space

Building an application with the framework

The agent-based framework is currently implemented in Java, so building an application with it requires the framework’s Java library. Within the application it is necessary to create an instance of the World Model. The world is started by a Timer, which simulates the passage of time. The world can be created either empty or with some pre-built content, described in an XML file.
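As a hedged sketch of the XML loading step, the helper below parses a world file in the shape of Figure 6 with the standard Java DOM API and collects its Property elements into a map. The loader itself is ours, written for illustration; it is not the framework’s actual bootstrap code.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Illustrative loader for pre-built world content in the XML form of
// Figure 6: every <Property id='...'> element becomes a map entry.
public class WorldLoader {
    public static Map<String, String> loadProperties(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            Map<String, String> props = new HashMap<>();
            NodeList list = doc.getElementsByTagName("Property");
            for (int i = 0; i < list.getLength(); i++) {
                Element p = (Element) list.item(i);
                props.put(p.getAttribute("id"), p.getTextContent());
            }
            return props;
        } catch (Exception e) {
            throw new RuntimeException("Malformed world file", e);
        }
    }
}
```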

Figure 5 (steps): the Control requests the execution of an Effector; the Effector executes the changes in the model; the Locale (holding a set of entities sharing a space) creates and distributes Events to the Sensors; the Sensors process the events and generate Perceptions; the Sensors send the Perceptions to the Control module.

Figure 6 – Example of an XML file describing a world with a WorldObject and an Agent

The world is then ready to accept the connection of Agents to inhabit it. Some can be directly encoded in the application as local agents, as the XML example shows. Nonetheless, the typical implementation of an agent is as a Remote Agent, which dynamically enters the world through a network connection, as a separate application. This is done to keep the development of the different agents as independent of each other as possible, and to keep agents as cleanly separated from the particular implementation of the framework as possible. Since remote agents communicate with the world through XML message passing (whereas local agents make explicit Java calls), these agents continue to work even if the framework implementation changes. It also means that, as long as the message language is respected, agents can be built in any programming language. Although the agents are connected through a network, they can obviously run on the same machine, through a local connection.

For the agents to be able to interact with the virtual world, it is then necessary to define the agent proxy’s sensors and effectors; in brief, to define the agent’s action language and sensing machinery. The framework provides a set of predefined sensors and effectors, which allow changing properties (PropertyChange effector), sensing such changes (PropertyChanged sensor), adding and removing world entities, detecting new entities, and other common actions with their corresponding sensors.

<World>
  <WorldObject>
    <Property id='Name'>Book</Property>
    <Property id='Owner'>John</Property>
    <Property id='Location'>Table</Property>
    <Property id='On'>Table</Property>
  </WorldObject>
  <Agent>
    <Property id='Name'>PropAgent</Property>
    <Effector class='inescid.gaips.ionagents.effectors.ChangePropertyEffector'/>
    <Sensor class='inescid.gaips.ionagents.sensors.PropertySensor'/>
    <Mind class='myFirstMindXmlDef.PropMind'/>
  </Agent>
</World>


Nonetheless, each domain will have its particular action language. This leads to the need to implement the necessary effectors and sensors (in Java, dependent on the current framework implementation). A possible solution to this problem is to provide in the framework a generic effector that can be parameterized to instantiate the appropriate action. This approach is straightforward for actions that do not affect the world, as is the case of speech acts; for effectors that alter world content (either by adding/removing entities or changing properties) it may become problematic. However, all actions in the world ultimately correspond to the simple actions already provided in the framework. Therefore, although this approach may increase the complexity of the agents’ actions, it may prove successful.


3. IMPLEMENTATION OF THE BULLYING DEMONSTRATOR

The bullying demonstrator (FearNot!), as a final product of the VICTEC project, is intended to be an educational tool, aware of location and user specificities, as described in previous deliverables. It depicts a specifically designed bullying scenario comprising a sequence of episodes, during which the child user has the opportunity to try out a number of different coping responses to deal with the bullying situations that arise. Due to the non-scripted nature of the demonstrator and the use of emergent narrative, no two scenarios will be the same, and different episodes will be shown with some randomness.

Using the framework to implement the demonstrator

In the following sections we will describe how the demonstrator has been implemented using the agent-based framework.

The bullying domain

All relevant entities have a “Name” property that identifies them to the agents. All the relevant objects and props, like John’s Book or Luke’s Ball are world object entities. All physical locations are locale entities, like the Classroom, the Corridor, the Library or the Playground. All characters are agents, represented in the world through their agent proxy. All entities contain properties that are relevant to the action.

Finally, the action language created includes a set of domain-specific effectors. Among these is the SpeechAct, and physical actions like Mock or Push. The rationale for their use is explained in later sections and in more detail in deliverable D5.3.1.

Characters

Characters are implemented as agents within the framework. Further details on their implementation are described in the characters deliverable D5.3.1 and previous deliverables. It is relevant to note that the notion of a character only makes sense when we are in simulation mode (see the section on narrative control), in which the agent is effectively controlling the character. In scripted mode, the character is only a visual manifestation of a script, although this is transparent for the user.


Visualization System

As discussed in previous deliverables, we wanted to keep all development independent of a particular visualization system. This is achieved within the framework through the View Manager, a special agent whose task is to interface with a particular visualization system.

Figure 7 a) Local C# viewer window application b) Web-based viewer running in a browser


Figure 8 – The View Manager in the run-time application

The root locale is the base location in which all objects in the virtual world are contained (Figure 9). Locales can be organized into a tree hierarchy, containing sub-locales. This spatial hierarchy creates an awareness hierarchy: agents can only receive events from the locale where they are and its sub-locales. By placing the view manager in the root locale, it has access to all the events happening in the world.
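The awareness rule can be sketched as a simple parent-pointer walk: an agent perceives an event if the event’s locale is the agent’s locale or one of its sub-locales. The class name LocaleNode and its methods are ours, for illustration only.

```java
// Illustrative sketch of the awareness hierarchy, not the real framework
// class: each locale knows its parent, and containment is checked by
// walking parent pointers upward from the event's locale.
public class LocaleNode {
    public final String name;
    private final LocaleNode parent;

    public LocaleNode(String name, LocaleNode parent) {
        this.name = name;
        this.parent = parent;
    }

    // True if 'other' is this locale itself or nested somewhere below it;
    // an agent sitting here would receive events raised in 'other'.
    public boolean contains(LocaleNode other) {
        for (LocaleNode l = other; l != null; l = l.parent) {
            if (l == this) return true;
        }
        return false;
    }
}
```

Under this sketch, placing the view manager in the root locale makes every event locale satisfy the containment test, which is exactly why it sees everything.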

Basically, the task of the view manager is to listen to world events expressed in the agents’ action language and translate them for a particular visualization system. Note that many view managers can coexist, each one interpreting the events and enacting a particular presentation of them, in multiple different media.


Figure 9 – The Locale Hierarchy

Each agent communicates with the virtual world independently of the particular visualization of its actions (expressed as the activation of effectors). During each execution step of an agent’s effector (Figure 10), its properties or those of other entities can be changed, generating events that the view manager perceives. A particularly important event is the Effector Start event, generated when the effector is first activated. When a speech act effector is started, for example, although no property is affected, the view manager detects its activation and sends a sequence of view commands (playing of animations, display of text and/or sound, camera movement, etc.). The effector must contain sufficient information for the view manager to know how to display it.
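The translation step can be sketched as follows. The VAction strings reuse the script command format shown in Figure 14, but the mapping itself (which commands a speech act produces) is invented here purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the view manager's translation step: an Effector
// Start event for a speech act is turned into a sequence of view commands
// in the Figure 14 script format. The particular mapping is ours.
public class ViewTranslator {
    public List<String> onEffectorStart(String effectorName, String character, String utterance) {
        List<String> commands = new ArrayList<>();
        if (effectorName.equals("SpeechAct")) {
            // Display the utterance and an accompanying facial expression.
            commands.add("WriteTextOnWebPage*0*Texto*" + utterance);
            commands.add("ChangeFace*0*" + character + "*Neutral");
        }
        return commands;
    }
}
```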


An important issue relates to the difference between virtual and real time. Atomic effectors, like the speech act effector, which do not change properties, can be executed in a single step of virtual time. Furthermore, even when further steps are necessary, the virtual time an effector takes to execute is very different from the required visualization time. To overcome this, the agent framework supports an additional feature: effector synchronization. Agents can require the synchronization of particular effectors. When such an effector is activated, it is only finished when all the agents that required its synchronization have declared it finished. Even the atomic speech act effector only finishes after all the visualization commands have been executed.
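Effector synchronization can be sketched as a pending set of agent names: the effector only counts as finished once every agent that required synchronization has declared completion. This is an illustrative sketch under that reading, not the framework’s code.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of effector synchronization: agents register interest,
// and the effector finishes only when the pending set empties -- e.g. the
// view manager declares completion after all animations have played.
public class SyncEffector {
    private final Set<String> pendingAgents = new HashSet<>();

    public void requireSync(String agentName) {
        pendingAgents.add(agentName);
    }

    public void declareFinished(String agentName) {
        pendingAgents.remove(agentName);
    }

    public boolean isFinished() {
        return pendingAgents.isEmpty();
    }
}
```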

One important aspect of the speech act effector is the display of the corresponding utterance (e.g., the Help Offer speech act can generate the utterance “Can I help you?” – see deliverable D5.3.1 for more details). This is the task of the language system, which must keep the history of all the agents’ dialogue and produce an adequate utterance to display. The language system is currently a module of the view manager, but since its functionality is the same for all views, it could be turned into another agent, thus becoming a global service.

Another essential task of the view manager is to interface with the user. Not only does it make the visual presentation of the action, it also deals with user input. Although this could, or arguably should, be handled by an agent of its own, most of the user intervention is done directly through the visualization system, and the specificities of each system make it reasonable to keep it as a module of the view manager. User input patterns are captured and sent transparently to the other agents through the use of effectors. Currently, only the coping responses are considered relevant user input.

Figure 10 – Effector execution cycle (activation procedures; execution conditions; execution step; termination, interruption and “cannot perform” conditions; on success, on failure and on interruption procedures)

Narrative Control

As we have seen, the Demonstrator will present a series of episodes to the user. Narrative controls the sequencing of these episodes and their appropriate execution. The Stage Manager (Figure 11), another non-character agent, handles these issues. All episode-related content is only relevant to this agent, and is therefore not stored explicitly in the virtual world, but within the agent control module.

The Narrative, generally speaking, is a sequence of events in time. We structured narrative information (Figure 13) as a list (i.e., a unique linear sequence) of acts. An act is the narrative’s most abstract structure, representing significantly distinct sections of a narrative. Each act can be seen as a group of scenes, connected through a hierarchic structure of dependencies (directed graphs). Scenes within an act hold related content, but still represent distinct narrative moments. Scenes represent the notion of an episode within the application. Finally, each scene contains a pattern of beats, the most basic elements within the narrative. A beat describes event patterns that are relevant for the narrative within the scene where it is active. An event signals some change in the world, caused by the activation of effectors. Events are very low-level and are not considered at the narrative level, although they can satisfy a pattern that constitutes a beat. A beat can detect either a simple event (e.g., the activation of an effector) or a complex pattern of events (e.g., a particular sequence of property changes).
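The containers above can be sketched as plain data structures. For brevity this sketch flattens the directed graph of scene dependencies into a simple list, and all class names are ours rather than the system’s.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the narrative containers: a narrative is a linear
// list of acts, an act groups scenes (dependency graph flattened here),
// and a scene holds beats -- the event patterns watched for while active.
public class Narrative {
    public static class Beat {
        public final String eventPattern; // e.g. an effector-start pattern
        public Beat(String eventPattern) { this.eventPattern = eventPattern; }
    }

    public static class Scene {
        public final List<Beat> beats = new ArrayList<>();
    }

    public static class Act {
        public final List<Scene> scenes = new ArrayList<>();
    }

    public final List<Act> acts = new ArrayList<>();
}
```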


Figure 11 – An overview of the system emphasizing the Stage Manager

The authoring of narrative content structure can be seen as a task of capturing expert knowledge. Therefore, the use of knowledge elicitation tools arises as a natural means to perform this task. We use Jess (the Java Expert System Shell) as a tool to capture and manipulate narrative knowledge. Jess applies a series of rules to a collection of facts through a forward-chaining mechanism. This simple representation is easily understandable by non-computer experts and is relatively fast to manipulate, even with a considerable amount of information. Facts consist of information from the virtual world, including existing entities, their properties, and the status of effector execution (e.g., whether a particular agent action has succeeded). Acts contain global facts that will be used in the act’s scenes. Each scene contains a set of facts relevant to that scene and a collection of beats that detect specific event patterns. In Jess, rules are activated upon the detection of patterns of facts; therefore, beats are implemented as rules in the Jess language.

To separate narrative moments (acts and scenes) we use namespaces; a namespace is a set of knowledge (a space) associated with a name. Each scene can form a different namespace where the facts and rules specific to that scene are active, thus avoiding conflicting rules and facts. Global facts and rules defined by the acts are equally available within all scenes.


A particular user experience consists of a traversal of the narrative content structure. The stage manager reads the narrative content as a Jess file containing the necessary information. A first rule with no preconditions fires to enable a first act to execute (Figure 12). Further rules interchange namespaces until no more rules can be fired. Rules must be authored so that the narrative space can be properly traversed. Nevertheless, as we will see, this does not mean that the experience is tightly bounded by the content of that structure. Such content includes control mechanisms that act at different levels to enact not only a pedagogically valid, but also a unique and satisfying, experience. To provide a balance between author-induced content and the user’s free play, narrative control can be carried out at the levels of simulation and presentation (Daniel Sobral et al., 2003).


(defmodule INTRO-ACT-PHYSICAL) ; Introductory Act
(defrule start-on-user
  (user-info (name ?nm) (sex ?sx) (age ?ag))
  =>
  (focus INTRO-SCRIPT))

(defmodule INTRO-SCRIPT) ; Introduction Script Scene
(defrule start-script-beat
  =>
  (execute-effector (name run-script)
                    (params (file "intro-physical.script"))))
(defrule end-intro-beat
  (end-effector (name run-script)
                (params (file "intro-physical.script")))
  =>
  (focus BULLY-ACT-PHYSICAL)) ; To the second Act

Figure 12 – Example of narrative information relative to the Introductory Act


Figure 13 – Generic structure for narrative information

At the simulation level, the narrative only controls the episode (scene) settings by placing the objects and characters in appropriate locales and sending a play command, letting the characters play their roles within that episode. This is clearly the solution that enables the most interactivity and variability, because its results depend on the character agent architecture and even on the user’s input (if the user can intervene). To control the execution of the episode, the episode’s beats are used to detect expected event patterns. When a beat’s event pattern is detected, its associated rules are executed and/or other beats are activated to listen for further event patterns. Associated rules may include the episode’s (successful) termination or indicate a counter-measure to avoid unintended pathways. These counter-measures usually involve refining the level of control.

The stage manager can also directly control the characters by sending them orders (for example, to force an action that is necessary for a specific event pattern to succeed). This level is risky: the view actions are still not finely described (the same lack of presentation control that exists at the simulation level), and it requires a strong knowledge of the character roles to avoid unbelievable behaviour. It also makes interactivity difficult, although not as hard as at the presentation level. Finally, at the presentation level, the narrative can be described as a linear sequence (a script) of view actions. This is the level that produces the best results, but it requires much more work and is completely inflexible in terms of enabling interactivity. The presentation level is domain-free, including view actions like “play animation”, “play sound”, “move object”, “zoom camera”. This level works as if the stage manager were directly communicating with the view manager while the virtual space is not being used (the character agents are actually paused). In this case, the stage manager orders (through the execution of an effector) the view manager to execute a script in the appropriate view action language (Figure 14).

Figure 14 – Partial example of a script XML for the web-based view system

In brief, each session with a user will be a scenario comprised of 3 acts (Figure 16). A first introductory act is composed of a single scripted scene, depicting the introduction of the school environment, the characters and the situation. Similarly, a final message act displays an educational message. In this case, though, the particular script presented depends on what happened in the previous act. This second act constitutes the bullying scenario itself. It starts with an initiating bullying episode that introduces the child to the problem that is occurring. In this episode, a bullying incident is absolutely essential for the rest of the interaction.


<?xml version='1.0' encoding="windows-1252"?>
<Episode>
  <VAction>WriteTextOnWebPage*0*Texto*The Encounter</VAction>
  <VAction>AddObjectToStage*0*Victim*0*85</VAction>
  <VAction>AddObjectToStage*0*Bully*-20*140</VAction>
  <VAction>AddObjectToStage*0*Sala*0*0</VAction>
  <VAction>RotateObject*0*Bully*-25*0</VAction>
  <VAction>ChangeFace*0*Bully*Neutral</VAction>
  <VAction>ChangeFace*0*Victim*Neutral</VAction>
  <VAction>MoveCamera*0*Camera*-10*30*0</VAction>
  <VAction>MoveObject*22*Bully*-6*80*10</VAction>
  <VAction>RotateObject*13*Bully*-45*7</VAction>
  <VAction>PlayAnimation*5*Bully*WalkBTrl2*1*true*true</VAction>
  <WaitAction>22</WaitAction>
  <VAction>FinishAction*13</VAction>
  <VAction>StopAnimation*5*Bully*WalkBTrl2</VAction>
  <WaitAction>5</WaitAction>
  ...
</Episode>


In summary, each user witnesses a scenario, which includes an introduction act with one scripted scene, a bullying scenario act, and a final message act with a single scripted episode. Each episode within the bullying scenario act defines a set of encounters that enact (rather than dictate) bullying situations. Each encounter is emergent and is defined so that autonomous characters, if designed according to the roles they play, should bring about the expected situation, although always in a different way, according to a multitude of factors. Nevertheless, control mechanisms monitor the emergent behaviour and can take counter-measures to force a pedagogically acceptable turn of events.

(defmodule BULLY-INCIDENT) ; Bullying Incident Scene

(defrule start-scene-beat "script to initialize scene display"
  =>
  (execute-effector (name run-script)
                    (params (file "init-classroom.script")))
  (execute-effector (name move-entity)
                    (params (entities john luke book) (location classroom))))

(defrule end-init-beat
  (end-effector (name run-script)
                (params (file "init-classroom.script")))
  (end-effector (name move-entity)
                (params (entities john luke book) (location classroom)))
  =>
  (execute-effector (name tell)
                    (params (entities john luke) (message "Start")))
  (execute-effector (name start-timer)
                    (params (identification "incident") (seconds 300))))

(defrule end-scene-beat
  (end-effector (name ?x))
  (bullying-act-effector ?x)
  =>
  (focus DIALOGUE-SCENE))

(defrule timeout-scene-beat
  (end-effector (name start-timer)
                (params (identification "incident")))
  =>
  (execute-effector (name run-script)
                    (params (file "classroom-incident.script")))
  (execute-effector (name tell)
                    (params (entities john luke) (message "Pause"))))

(defrule end-scene-scripted-beat
  (end-effector (name run-script)
                (params (file "classroom-incident.script")))
  =>
  (focus DIALOGUE-SCENE))

Figure 15 – Narrative information relative to the Bullying Incident Scene

This notion of encounters (Louchart 2003) led to the creation of the meta-scene, an abstract scene that must be instantiated with appropriate data. This concept was used in the (non)-bullying scene (Figure 16), which, depending on the initial facts, can become an episode where no bullying incidents happen or one where such incidents do happen. It is implemented as a namespace with fixed rules, whose initiating facts are asserted only at run-time. For example, to create a non-bullying episode, we can start without the bully and/or bully helpers.
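The meta-scene idea, fixed rules plus run-time facts, can be sketched as follows. This is a hypothetical illustration, not project code; the role names and fact shapes are assumptions:

```python
# Hypothetical sketch of the meta-scene: the scene's rules are fixed, and
# the facts asserted at run-time decide whether a bullying or a
# non-bullying episode unfolds.

def instantiate_meta_scene(cast):
    """Instantiate the abstract (non)-bullying scene from run-time facts."""
    facts = {("on-stage", role) for role in cast}
    # With no bully on stage, no bullying incident can be enacted.
    can_bully = ("on-stage", "bully") in facts
    return {"facts": facts,
            "episode-type": "bullying" if can_bully else "non-bullying"}

print(instantiate_meta_scene(["victim", "neutral"])["episode-type"])          # non-bullying
print(instantiate_meta_scene(["victim", "bully", "helper"])["episode-type"])  # bullying
```

The same fixed rule set thus produces qualitatively different episodes purely from the initial facts, which is what lets one abstract scene cover both cases.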

Figure 16 – High-level narrative description of a session

By doing this, each episode gives the story a “new start”, while remaining consistent with the overall story displayed so far. This is the same technique used in theatre to separate different scenes.

We must note that a scenario does not end in a movie-like fashion. An encounter needs an end in the sense that one of the characters decides it is time to end it, because they have fulfilled their intention or failed it. Even an unfulfilled encounter (where the potential situation has not happened) finishes, due to rules such as “after some idle time, the encounter ends”.
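The encounter-ending rules just described can be sketched as a simple predicate. The threshold and state names are assumptions for illustration, not the project's actual values:

```python
# Illustrative sketch of the encounter-ending rules: an encounter ends
# when a character fulfils or fails its intention, or when the idle-time
# rule fires. IDLE_LIMIT is an assumed threshold.

IDLE_LIMIT = 30  # seconds of inactivity before the encounter ends

def encounter_ended(intention_state, idle_seconds):
    """intention_state: 'fulfilled', 'failed' or 'pending'."""
    if intention_state in ("fulfilled", "failed"):
        return True
    # Even an unfulfilled encounter finishes after some idle time.
    return idle_seconds >= IDLE_LIMIT
```

A control mechanism polling this predicate each tick would guarantee that every encounter terminates, fulfilled or not.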

The ability to inspect the characters’ state is an important part of understanding the events (Machado01). While freezing the action might be an option, it would be fatal to the credibility of the interaction (after all, bullying events cannot be frozen). This is the rationale for creating a special place, the Bullying Dialogue scene, which implements the in-between-episodes phase where coping responses are given. This extra possibility can help the user decide on the best suggestion to give his/her friend.

[Figure 16 diagram labels: Introduction Act; Bullying Scenario Act – Bullying Scene, (Non)Bullying Scene, Dialogue w/ Victim; Final Message Act]

The FearNot! demonstrator

The greater emphasis on stability and evaluation requirements channelled the limited resources into the camera-ready version of the FearNot! demonstrator, ready in June 2004. In this application, each child, independently of their age, sex or school of origin, witnesses one physical bullying scenario and one relational scenario (Figure 17).

Figure 17 – Schematic overview of one scenario (physical or relational)

Description from the user’s point of view

Each child starts by entering their personal information: name, gender and age. A personal code is used to match off-line questionnaires with the responses given during the interaction.




Figure 18 – Initial screen

First, the child is introduced to the characters, the school and the situation.

Figure 19 – Characters for the physical scenario: a) John – victim; b) Luke – bully; c) Paul – neutral


Figure 20 – Characters for the relational scenario: a) Sarah – bully; b) Janet – bully helper

Figure 21 – Characters for the relational scenario: a) Frances – victim; b) Martina – neutral

Child users act as advisors to the victim character in both scenarios. After a bullying incident has occurred, the victim character asks the child user “What should I do to try and stop the bullying?”. The application then passes to an “off-stage” phase, where the user selects a piece of advice to give to the victim.


Figure 22 – Off stage coping response dialogue a) physical scenario b) relational scenario

After all the episodes, an educational message (“Tell someone you trust”) is displayed, reminding the child that, although there is no single solution to bullying, passiveness is never a valid choice. The message is delivered in a positive tone if the user selected that option, or in an encouraging tone if the user did not.

After the scenario, the child fills in a set of Theory of Mind (ToM) questions related to the scenario they witnessed.

Description from an implementation point of view

The visualization system is a 3D web-based viewer, implemented using the WildTangent game engine, as described in previous deliverables. Although this approach limits our audience to Windows systems, it is the platform that reaches the most users. We have tested several computer configurations and concluded that relatively modest hardware suffices, although the graphics card is the most important component for the system.

Interactivity with the user is achieved through a simple application-guided dialogue with a drop-down selection of coping answers and some open (not automatically interpreted) questions for later analysis. Current limitations of the language system led us to adopt a simple drop-down menu of choices for the coping responses. The Wizard of Oz tests enabled us to detect some recurrent patterns in the user responses that helped in building the possibilities. Nonetheless, some open questions leave the child free to give their true opinion. Such questions are treated off-line (a log is created for each child) to examine the rationale and justifications that children provide for different coping responses (e.g. do children who are bullies select different coping strategies and rationales than users who are victims?).

We demonstrate adaptation to the location through the use of different characters, some in uniform (in the physical scenario) and others without (in the relational scenario). The school environment is the same for both scenarios; in our experiments, we noticed that it has little impact on the children’s perception of the story. Nevertheless, more experiments and studies need to be conducted towards a greater understanding of the importance of the proximity factor.

We support the three required languages. As we use a simple phrase-based language system, adding support for further languages is straightforward: one just needs to translate the appropriate sentences into each language.
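A phrase-based language system of this kind reduces to a lookup table keyed by phrase identifier and language. The sketch below is hypothetical: the identifiers and translations are illustrative, and the assumption that the three languages are English, Portuguese and German is ours, not stated in this section:

```python
# Hypothetical sketch of a phrase-based language system: every utterance
# is drawn from a fixed phrase set, so supporting a new language only
# requires translating each phrase. Keys and phrases are illustrative.

PHRASES = {
    "ask-advice": {
        "en": "What should I do to try and stop the bullying?",
        "pt": "O que devo fazer para tentar parar o bullying?",
        "de": "Was soll ich tun, um das Mobben zu stoppen?",
    },
}

def say(phrase_id, language):
    """Look up the phrase for the given language."""
    return PHRASES[phrase_id][language]

print(say("ask-advice", "en"))
```

Adding a fourth language would mean adding one more key per phrase, with no changes to the dialogue logic.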

Narrative is very simplified. All episodes are scripted (they correspond to the execution of a script of view actions, as shown in Figure 14). A single rule waits for the user response in the second dialogue: if that response is in the “Tell someone” category, a “good” third episode is shown; otherwise, a “bad” third episode is shown.
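That single branching rule can be sketched in a few lines. The category label comes from the text above; the script file names are hypothetical placeholders:

```python
# Sketch of the single narrative branching rule: the third episode depends
# only on whether the coping response falls in the "Tell someone" category.
# The script file names are assumed for illustration.

def third_episode(coping_response_category):
    """Select which scripted third episode to show."""
    if coping_response_category == "Tell someone":
        return "good-episode.script"
    return "bad-episode.script"

print(third_episode("Tell someone"))
```

This makes explicit how little run-time narrative logic the current prototype needs: one comparison.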

Following the rationale described in D6.3.1 and the reviewers’ recommendation, this prototype makes no use of the SenToy affective interface. However, in the next version we will try to include SenToy for evaluation purposes, to assess the impact that a physical interface has on empathy.

Furthermore, we know that small-scale studies are invaluable for further progress, and the evaluations planned for next autumn will provide us with adequate feedback.


4. EVALUATION

We can identify two different steps in evaluation. The first, and most important, is the off-line evaluation process, which is thoroughly analysed and discussed in D2.2.2. The second, relevant to shaping the interaction as events unfold but much harder and clearly experimental, is real-time evaluation, which is useful for conducting the experiment in the most effective way.

The latter type of evaluation is non-trivial. Possible indicators of (un)pleasantness include: the user does (not) interact with the environment; the user always chooses the “wrong” or senseless answers. During the interaction the user is never asked directly whether he/she likes it. Nevertheless, users who do not enjoy the application may behave in a destructive way, which would in turn render the off-line evaluation useless. Real-time estimation of the user’s pleasantness, even if only with rules of thumb, may help in understanding the user’s behaviour throughout the interaction and in guiding the user towards a fruitful interaction.
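A rule-of-thumb estimator over the two signals mentioned above might look like the following. This is a hedged sketch, not a proposed design: the weights, the threshold and the function names are all assumptions:

```python
# Hypothetical rule-of-thumb estimator of real-time (un)pleasantness,
# combining the two signals named in the text: lack of interaction and
# consistently senseless answers. Weights and threshold are assumed.

def estimate_unpleasantness(idle_ratio, senseless_answer_ratio):
    """Both inputs in [0, 1]; returns a crude unpleasantness score in [0, 1]."""
    return min(1.0, 0.5 * idle_ratio + 0.5 * senseless_answer_ratio)

def needs_intervention(idle_ratio, senseless_answer_ratio, threshold=0.7):
    """Flag users whose behaviour suggests a destructive interaction."""
    return estimate_unpleasantness(idle_ratio, senseless_answer_ratio) >= threshold
```

Even a crude score like this could let the application intervene (for example, by varying the episode) before a disengaged user spoils the off-line data.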


5. CONCLUSIONS AND FURTHER WORK

So far the demonstrator is up and running, and it will be evaluated in June with 400 UK children. However, we do see several limitations, in particular:

- The interface between a generic behaviour and its visual display is a serious issue, involving believability concerns such as synchronization, animation blending and path planning. The characters are also an enormous consumer of design resources. The lack of visual quality in certain situations led us to a partially scripted presentation of episodes, which has more cinematographic quality. However, we believe that with a cleverer “camera” (implemented as an agent in the framework) and more control over the dramatic aspects, the results will be better.

- Language issues arise from two different perspectives: language generation and language understanding. The latter also falls under interactive capabilities and will be discussed further in the section on interaction.

Work done so far

Some progress has been made in the production of the framework (Annex 7), and some first prototypes have been used in the execution of initial tests.

Some first studies using small prototypes have been carried out, focusing on both the psychological/pedagogical and the usability/technological components of evaluation (see D2.2.2; the forthcoming D7.1.1 will include further details on the evaluation).

Ongoing discussions with the team’s specialists in the bullying domain take place to guarantee a shared understanding throughout the software development process (see D2.1.1).

Further Work

All progress related to the different components of the demonstrator is to be reported in several forthcoming deliverables (character, evaluation, SenToy). An iterative evaluation process and user-centred development are essential to the successful development of any software. Given the complex nature of the topic, the creation of prototypes of increasing complexity will be essential.


The framework will allow each of the components that compose the application to be validated independently. A simulation of each component is necessary for running tests swiftly. Furthermore, building successful autonomous characters is very difficult, so some scripting may become necessary for the successful execution of psychological/pedagogical tests.


6. REFERENCES

Aylett, R., Jin, L., Sobral, D. and Paiva, A. (2003). VICTEC, IST Project IST-2001-33310, Deliverable 3.2.1: High-level Functional Architecture for the Toolkit.

Louchart, S. and Aylett, R. (2003). Solving the Narrative Paradox in VEs – Lessons from RPGs. In Proceedings of IVA’03.

Maulsby, D., Greenberg, S. and Mander, R. (1993). Prototyping an Intelligent Agent through Wizard of Oz. In Proceedings of CHI’93, The Netherlands, ACM Press.

Paiva, A., Alexandre, I., Sobral, D. and Aylett, R. (2003). VICTEC, IST Project IST-2001-33310, Deliverable 5.1.1: Specification of Empathic Synthetic Characters.

Sobral, D., Alexandre, I. and Paiva, A. (2003). Managing Authorship in Plot Conduction. In Proceedings of ICVS 2003, Toulouse, France.

Woods, S. et al. (2003). VICTEC, IST Project IST-2001-33310, Deliverable 2.1.1: Learner Scenarios: Scenarios Requirements Definition.

Woods, S. et al. (2003). What’s Going On? Investigating Bullying Using Animated Characters. In Proceedings of IVA’03 (to appear).

Zoll, C., Falcao, R., Silva, N., Hall, L., Sobreperez, P., Louchart, S., Woods, S., Enz, Schaub, H. VICTEC, IST Project IST-2001-33310, Deliverable 2.2.2: Evaluation Methodology.
