

An Extendable Multiagent Model for Behavioural Animation

Oliver Häger
German Aerospace Center (DLR)

Braunschweig site, Institute Automotive
[email protected]
http://www.dlr.de/fs/

Soraia Raupp Musse
CROMOS Laboratory

PIPCA - Universidade do Vale do Rio dos Sinos (UNISINOS)
São Leopoldo, Brazil
[email protected]

http://www.inf.unisinos.br/~cromoslab/

Abstract

This paper presents a framework for visually simulating the behaviour of actors in virtual environments. In principle, the environmental interaction follows a cyclic processing of perception, decision, and action. As natural life-forms perceive their environment by active sensing, our approach likewise lets the artificial actor actively sense the virtual world. This allows us to place characters in non-preprocessed, dynamic virtual environments, which we call generic environments. A central aspect of our framework is the strict distinction between a behaviour pattern, which we term a model, and its instances, named characters, which use the pattern. This allows characters to share one or more behaviour models. Low-level tasks like sensing or acting are taken over by so-called sub-agents, subordinated modules that are plugged into the character as extensions. In a demonstration we exemplify the application of our framework: we place the same character in different environments and let it climb and descend stairs, ramps, and hills autonomously. Additionally, its reactiveness to moving objects is tested. In the future, this approach shall be employed in a simulation of an urban environment.

Keywords: behavioural animation, computer animation, multiagent system, framework


1 Introduction

Nowadays, Computer Graphics makes it possible to generate worlds with an astonishing degree of realism. The entertainment industry in particular takes advantage of these technologies and tries to captivate consumers with its more or less believable virtual worlds. But even the most detailed and realistic virtual world remains lifeless without interacting inhabitants. The visual representation of such inhabitants, like humans, is already quite advanced and still advancing with technological progress. But visual representation alone is not enough to create credible entities as long as they do not act in a believable manner.

The discipline of Behavioural Animation deals with visually simulating believable behaviour of such entities, which is still a challenge. Putatively simple actions such as grasping objects or climbing stairs in virtual worlds are not that simple to realize. Besides solving substantial problems like graphical representation, accurate collision detection, and the simulation of other physics, the (autonomous) interaction process for the entity also needs to be realized. Mallot [1] explicitly named such a process the perception-action cycle, in which perceiving environmental information, extracting the needed data, deciding on proper actions, and executing them are the main aspects. A vital characteristic is the manipulation of the environment's state through actions and the subsequent (re-)actions this triggers. Following this principle, we present a framework that can be used to simulate an entity's behaviour in such dynamic, manipulable environments, and which shall also be

• Adaptive to different non-preprocessed environments

• Reusable and easily extendable

• Usable for various types of virtual actors (humanoids, animals, vehicles, etc.)

• Applicable to a larger number of entities

With a small case study we also demonstrate the active senses and the resulting reactive behaviour of humanoid entities in virtual environments. These actors are placed in a non-preprocessed environment, follow predefined paths, and autonomously react to exemplary encounters, like stairs, ramps, hills, and moving objects.

2 Related Work

Within the discipline of Behavioural Animation, a large community deals with the visual simulation of life-forms' behaviour. Since the first remarkable milestone, Boids, in 1987 [2], continuous progress has led to today's impressive results. Between Reynolds' Boids and the awarded thesis of Tu [3] on artificial fishes with physically realistic locomotion, perception, and believable behaviour in 1996, mainly fundamental research was done, for instance on perception, reasoning, and animation. Later, several works merged and refined those results.

An interesting approach in this area, on non-verbal interpersonal communication among virtual actors by postures, was shown by Bécheiraz and Thalmann [4]. Blumberg presented an autonomously acting dog with cognitive learning capabilities [5]. Noser and Thalmann demonstrated perception-based behaviour in a tennis game simulation [6]. A way of planned navigation in dynamic environments using a vision sensor was shown by Kuffner [7]. He extended his approach in a recently presented work focussing additionally on flawless motion of characters [8]. While some researchers are interested in imitating individual believable behaviour with realistic physics-based models, as has already been impressively demonstrated [9, 10], others focus on efficient solutions applicable to the simulation of larger numbers of individuals [8, 11, 12, 13, 14].

Amongst them we found only a few who underlined the usage of specifically developed frameworks for their purposes [9, 11]. Besides those, other frameworks were presented in the past. The library AGENTLib offers the integration of differently generated motion types of virtual humans, like keyframed animation or inverse kinematics [15]. It was extended by a flexible perception module, which allows information filtering of sensor-based input signals in virtual environments [16]. Millar proposed a common way to model Behavioural Animation systems [17], which was also examined and generally acknowledged [18]. Another framework, designed for agent-object interaction based on smart objects, was developed by Peters et al. [19]. Furthermore, Monzani [20] worked out a framework for “creating autonomous agents, capable of taking decisions based on their simulated perceptions and internal motivations”.1

3 Model

Here we present our approach of a flexible and efficient framework for autonomous characters in virtual environments. Its clear structure allows a high extendability of the character's behaviour. To show this capability we used our framework for a simple demonstration. A virtual human is placed within a non-preprocessed dynamic environment, which we call a generic environment. When approaching stairs, hills, or ramps, the character autonomously starts to climb or descend these objects using a believable animation. The reactive behaviour towards dynamic objects will also be demonstrated.

As others [1, 7, 12] have already emphasized as the basic principle for spatial cognition, our system is based on the cyclic repetition of perception, decision, and action illustrated in figure 1.

Figure 1: The basic principle of our framework is a Perception-Decision-Action loop.

For sensing the environment, a virtual character shall be equipped with a set of active sensors. The sensors extract environmental information, which is processed to determine a proper action to be executed. These actions can possibly affect the environmental state. Due to

1 See [20], page 3, second paragraph.

the character's sensorial activity, these changes can be noticed.
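The cycle described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation; all type and function names (Percept, Environment, Character, step) are our own assumptions.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A Percept carries what an active sensor noticed about the environment.
struct Percept {
    std::string object;   // name of the perceived scene node
    double distance;      // distance to the character
};

// The environment state that actions may manipulate.
struct Environment {
    double groundHeight = 0.0;
};

class Character {
public:
    // One iteration of the cycle: sense (percepts arrive from the
    // sensors), decide on an action, then execute it.
    std::string step(const Environment& env, const std::vector<Percept>& percepts) {
        (void)env;  // unused in this tiny sketch
        std::string action = decide(percepts);
        act(action);
        return action;
    }
    const std::string& currentAction() const { return current_; }

private:
    std::string decide(const std::vector<Percept>& percepts) {
        // Illustrative rule: a close stair triggers a climb behaviour.
        for (const Percept& p : percepts)
            if (p.object == "stair" && p.distance < 1.0)
                return "climb";
        return "walk";
    }
    void act(const std::string& action) { current_ = action; }
    std::string current_ = "idle";
};
```

In the full framework the percepts would be produced by the sub-agents and the decision step delegated to the model, but the control flow per frame follows this loop.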

The aim was to offer a flexible, extendable, and efficient framework for applying behavioural animation in generic virtual environments. It shall allow plugging in various modules for different low-level tasks. To this end, we first organized the framework into three operational layers, which are connected to each other by easy-to-use interfaces:

• Application

• Model

• Character

The application layer contains the whole application and is – from a programmer's point of view – the global scope. Herein we have the global objects, like the message handler, which is instanced only once, or the data structure of the world.

The distinction between a model and its instances, which we call characters, is vital for the concept of the framework. A character uses a set of predefined animation and behaviour data offered by the model. This feature allows a centralized handling of reusable data, so we can bundle those data, as we also want to use this framework for group and crowd simulation. Because each instance's current state is held by the character itself, but is also necessary for reasoning, we need a communication structure between the separated layers. Hence we use the global message handler, which passes message objects between related modules.

The character not only maintains its own current state, but also has a set of subordinated modules, which we call sub-agents. An interface connects these extensions with their instance. The sub-agents are responsible for the character's low-level tasks, like perceiving the environment or executing an animation.
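A global message handler of this kind could be sketched as below. This is our own illustrative design, not the paper's actual class; the names MessageHandler, Message, subscribe, and send are assumptions.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A message object exchanged between characters and models.
struct Message {
    std::string sender;
    std::string payload;
};

class MessageHandler {
public:
    using Callback = std::function<void(const Message&)>;

    // Modules (characters, models) register under a unique name.
    void subscribe(const std::string& receiver, Callback cb) {
        subscribers_[receiver] = std::move(cb);
    }

    // Deliver a message object to a named receiver, if registered.
    // Returns false when the receiver is unknown.
    bool send(const std::string& receiver, const Message& msg) {
        auto it = subscribers_.find(receiver);
        if (it == subscribers_.end()) return false;
        it->second(msg);
        return true;
    }

private:
    std::map<std::string, Callback> subscribers_;
};
```

Because sender and receiver only know each other through the handler, models and characters stay encapsulated, which is the property the text emphasizes.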

A schematic overview of the described architecture is presented in figure 2. The stippled boxes indicate the different layers: the system layer is a superset of the model layer, which in turn is a superset of the character layer. This architecture allows data exchange not only between characters and


Figure 2: Our framework in a schematic overview. The connection between the model and its characters is done via the global message handler. The agents are the character's subordinated modules for low-level tasks, which – if necessary – perceive information from the environment.

their model, but also between any character and any model.

For our work we rely on the free graphics toolkit OpenSceneGraph2 to keep the virtual environment and the animated virtual humans in an organized structure. The scene graph dramatically simplifies access to the virtual world's objects. The animation data themselves are processed by the Character Animation Library 3D (Cal3D).3

As mentioned before, the main aspects are perception, decision, and action. Perception is, like others, a low-level task, which we handle by a specific interface connected to the character. The interface allows the exchange of data amongst the sub-agents for those low-level tasks. Figure 5 shows the realized character concept; the subordinated modules in this illustration are part of our exemplary demonstration. Through the interface named ExtensionHandler it is easy to integrate other modules, which may process any low-level task.

2 http://www.openscenegraph.org/ (2006-03-14)

3 http://cal3d.sourceforge.net/ (2006-03-14)

Figure 3: The decision processor for reasoning is based on a finite state machine. It is supported by an animation handler, which keeps the animation data for different behaviours.

It shall be noted that we simulate visual perception using a geometric field of view. Potentially visible objects – identified by their nodes' names – within this volume are checked for their distance to the character and possibly trigger a perception event. Furthermore, we use another module that can be interpreted as a tactile sensor: contact bones of the animation skeleton inform the character about ground contact and keep it at the right height relative to the ground level. For the feedback between a character and an attached sub-agent we follow the same principle that we use for the communication between the decision processor (as part of the model) and the characters: a requesting event from a sub-agent makes the character ask the decision processor for a behaviour change. That means we forward an initialized event to the decision processor, which then tries to resolve the problem. For that, the character passes its current state, together with the data from the requesting sub-agent, to the decision unit.
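The combined field-of-view and distance test can be sketched with a simple viewing cone in 2D. The cone geometry and all parameter names (maxDist, halfAngleRad) are our own illustrative assumptions; the paper only states that a geometric volume is tested for distance.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Returns true if 'target' is perceived by a character standing at
// 'pos' and facing along the unit vector 'forward': the target must
// lie within 'maxDist' and inside a cone of half-angle 'halfAngleRad'.
bool perceives(const Vec2& pos, const Vec2& forward,
               const Vec2& target, double maxDist, double halfAngleRad) {
    const double dx = target.x - pos.x;
    const double dy = target.y - pos.y;
    const double dist = std::sqrt(dx * dx + dy * dy);
    if (dist > maxDist || dist == 0.0) return false;
    // Cosine of the angle between viewing direction and the direction
    // to the target; compare against the cone's half-angle.
    const double cosAngle = (dx * forward.x + dy * forward.y) / dist;
    return cosAngle >= std::cos(halfAngleRad);
}
```

Objects passing this test would then raise the perception event described in the text.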

The decision unit uses a finite state machine (FSM) to decide which behaviour shall be executed. The input data from the sub-agent (here: the visual perception unit) of the requesting character lead to a specific action. Combined with the character's current state, the FSM possibly returns a new state. Depending on this state, a predefined behaviour in the form of a so-called high-level animation (HLA) is going


Figure 4: Several core animations can be blended into a more complex high-level animation.

to be returned to the character. For this, a separate handler for animation data supports the decision processor; all necessary information about Cal3D's core animations and the HLAs is maintained there. Both units together represent a model, as depicted in figure 3. For the exchange of data between the characters and the models, the global message handler is used, as we have already described in the previous subsection. This method guarantees a true encapsulation between models and characters.
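Such an FSM-based decision unit could look as follows. The concrete states and events are our own assumptions chosen to match the case study; the paper does not list its actual state set.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Sketch of the decision processor's finite state machine.
class DecisionFSM {
public:
    DecisionFSM() {
        // (current state, perceived event) -> new state
        transitions_[{"walking",  "stair_ahead"}] = "climbing";
        transitions_[{"climbing", "stairs_end"}]  = "walking";
        transitions_[{"walking",  "obstacle"}]    = "stopped";
        transitions_[{"stopped",  "path_clear"}]  = "walking";
    }

    // Returns the new state; an unknown (state, event) pair leaves the
    // state unchanged, i.e. no behaviour change is requested.
    std::string update(const std::string& state, const std::string& event) const {
        auto it = transitions_.find({state, event});
        return it != transitions_.end() ? it->second : state;
    }

private:
    std::map<std::pair<std::string, std::string>, std::string> transitions_;
};
```

The returned state would then select the high-level animation to send back to the character.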

Basically, each character needs a module that is responsible for the action to be executed after a decision for a behaviour has been made. A behaviour here means the execution of an animation suitable for the current situation. The sub-agent Animator takes up this task. The Animator receives a package of parameters from the decision processor, which we call a high-level animation (HLA), and extracts its data to let Cal3D blend into a new animation. The animation to be executed is a compound of differently graded core animations based on keyframing, as figure 4 illustrates. In our example we let the decision process calculate the grades for each core animation. These grades depend on the received input data from the perceived objects, like the height of the next step to be climbed. It shall be noted that the visual results highly depend on the quality of the used (core) animations. Because we use keyframed animation we cannot control the limbs directly, but only know – by extrapolation – where they could be in the next frame(s).
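As a minimal sketch of such grading, the perceived step height could be mapped linearly to the blend weights of two core animations. The 0.4 m maximum step and the linear mapping are our own assumptions; the paper does not give the actual grading formula.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Blend weights for two core animations of a high-level animation.
struct Blend {
    double walkWeight;
    double climbWeight;
};

// Maps stepHeight in [0, maxStep] to weights that always sum to 1,
// so the blended pose stays normalized.
Blend gradeForStep(double stepHeight, double maxStep) {
    double t = std::clamp(stepHeight / maxStep, 0.0, 1.0);
    return {1.0 - t, t};
}
```

A Cal3D-style mixer would then play both core animations simultaneously with these weights.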

To build up a model and to produce its instances we developed a kind of factory. Once a factory is constructed, the model has also been built. Due to the model-character principle, an instance can be inserted into and deleted from the scene at

runtime. Each factory creates one model. Using, for example, the factory pattern of Gamma et al. [21], it is easily possible to control the creation of different models and instances with a simple interface.
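The model-character split behind the factory can be sketched as below: constructing the factory builds the shared model once, and each created instance only references it. Class names are our own, not the paper's.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

// Stands in for the shared animation and behaviour data of a model.
struct Model {
    std::string name;
};

// A lightweight instance: it holds only its own state (omitted here)
// plus a reference to the shared, reusable model data.
struct CharacterInstance {
    std::shared_ptr<const Model> model;
};

class CharacterFactory {
public:
    // Constructing the factory builds the model exactly once.
    explicit CharacterFactory(std::string modelName)
        : model_(std::make_shared<Model>(Model{std::move(modelName)})) {}

    // Each call yields a new instance sharing the same model.
    CharacterInstance create() const { return {model_}; }

private:
    std::shared_ptr<const Model> model_;
};
```

Because all instances point at one model, adding or removing characters at runtime touches no shared data, which is what makes the approach attractive for crowds.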

We point out that this framework can be used for various forms of interaction between virtual environments and their inhabitants. Even reduced to the elements presented here, the architecture offers a large range of functionality. Since it builds on already quite efficient toolkits for animation and visualization, the user only has to take care of his own implementation.

4 Applying the framework: A case study

For demonstration purposes we chose to simulate the behaviour for climbing and descending continuous and discrete height-level variations. Representative objects within a virtual environment are stairs, hills, and ramps. As this is a classical, but not unresolved, problem in robotics, we could not find much about this subject for virtual characters. Certainly, successful approaches to physically realistic animation, for example [9], offer the capability of climbing stairs. Some approaches [19, 22, 23, 24] cover this subject by using so-called smart objects, for which a scene needs to be preprocessed. Primarily, our aim is that our framework can also be used for any kind of (nearly) non-preprocessed virtual environment, so the usage of smart objects is not suitable for our approach. Using physically realistic animation models may also not be suitable, as we plan to apply the framework to a larger number of virtual characters; the high computational cost of such models precludes their usage for our future plans.

To notice environmental changes, it is necessary to let the characters know about possible events. For reasons of true autonomy, but also for the use in generic environments, we equipped the virtual human with active sensors, which simulate vision and touch. The active sensors are subordinated modules of a model's instance. These sensors collect the necessary environmental data of potentially visible and touchable objects, like stairs, ramps, and moving scene participants.

Figure 5: The applied framework for an exemplary demonstration. The four sub-agents connected to the extension handler take over the different low-level tasks.

The received information is sent to the separate decision unit. For reasoning we rely on a finite state machine that decides whether a new action shall be executed. A character communicates with the decoupled decision processor via message objects that keep the data needed for further processing. If it is necessary to change the behaviour, the decision unit responds to the character's request and returns the animation data needed for execution. Each character can be treated as an autonomous unit that explores its environment by itself. The demonstration shows only reactive behaviour of a character, but the extensibility offered by our framework allows a rapid integration of further modules, which can take over more advanced functions like memorizing, learning, and goal-oriented planning. Figure 5 illustrates the scheme of the exemplarily applied framework. The four integrated sub-agents are separately connected to the character; each is responsible for its own low-level task. The Animator receives the animation data and, via the linkage to Cal3D, blends into a new animation. The Perceptor simulates the synthetic vision of the character, while the module Stepagent is responsible for the calculations regarding the vertical and horizontal translation. The Navigator

lets the character follow a predefined path of navigation points. All these modules are bound to the character via the interface named Extensionhandler, which facilitates the access to each of these modules. As shown, the visual representation of the character is completely decoupled from the sub-agents. Data calculated by sub-agents that needs to be shown during the development process can optionally be visualized and therefore also remains independent of the character's visualization.

We placed the same character in different environments and let it act within these scenes. In our demonstration, a character for example arrives at a stair, checks the first step's height and – if it is within a passable range – changes to a proper animation to climb or descend the stairs. On reaching the end of the stairs, it returns to its walking animation. It is also capable of recognizing ramps and hills and changes its behaviour in a proper manner. When getting close to unknown or non-passable objects, the character smoothly stops in front of the obstacle. In the case of moving hazards, it continues on its way after the object has passed. Figure 6 shows some excerpts from our example. Among others, it is illustrated that the visual perception is based on a collision-detection method with a geometric volume, which represents the field of view. It can be


Figure 6: Excerpts from the case study.

seen that our framework also works with many instances of a model. In our demonstration we use a humanoid model with nearly 3500 polygons in a simple environment. We can simultaneously draw about 60 instances in real time (> 24 frames per second). It shall be noted that we have not really optimized the drawing and calculation routines yet. By lowering the level of detail of the models, we can surely draw more than 200 characters in real time with the current framework. Theoretically there is no limit on the number of instances in a non-real-time application, as we already checked with about 600 characters.4
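The reactive choice at a height change described in this case study can be condensed into one function. The 0.4 m passable limit and the action names are our own assumptions for illustration; the paper does not state its thresholds.

```cpp
#include <cassert>
#include <string>

// Decide how to react to a perceived height difference to the next
// surface: steps within a passable range are climbed or descended,
// anything larger makes the character stop in front of the obstacle.
std::string reactToHeightChange(double heightDelta, double maxPassable = 0.4) {
    if (heightDelta > maxPassable)   return "stop";
    if (heightDelta > 0.0)           return "climb";
    if (heightDelta < -maxPassable)  return "stop";
    if (heightDelta < 0.0)           return "descend";
    return "walk";
}
```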

5 Conclusions and Future Work

We presented a framework for behavioural animation based on a Perception-Decision-Action loop. A main aspect is the decoupling between a model and its instances, the so-called characters. The model maintains all reusable data, such as the animation data and the decision mechanism, which together can be seen as a behavioural pattern. These parts are connected through a global messaging system. This feature allows not only the data exchange between a character and its model, but also the exchange amongst characters of different models. Furthermore, the character's capabilities can easily be extended through an easy-to-use plug-in interface. Those extensions are subordinated, but still encapsulated, modules, which we call sub-agents and which are responsible for low-level tasks, like sensing or executing animations.

4 The demo runs on a standard consumer notebook with an Intel® Pentium® M 730 CPU at 1.6 GHz, 512 MB RAM, and an ATI® Mobility™ Radeon™ X600 display controller with 64 MB VRAM.

Hence it is easy to integrate a sub-agent developed once, which, for example, takes over the tasks of hearing or smelling, or even simulates the character's behaviour in waiting lines at service points.

As the aim was to create a reusable system for virtual life-forms in non-preprocessed virtual environments, active sensing is a vital component. In a case study we presented the abilities of an active visual and a tactile sensor by letting a character climb and descend stairs, ramps, and hills. Even moving objects – embedded in the used scene graph – are recognizable by the humanoids and cause a proper behaviour.

We want to apply this framework to different aspects of a simulated urban environment. The extension of the model towards more complex behaviour, like using emotional and intentional states, shall also be part of future work. Using the messaging system, an “interpersonal” communication can easily be integrated. Additionally, the integration of advanced mechanisms for navigation, like path planning and autonomous collision avoidance, is planned. Furthermore, a character will be equipped with more active senses, like smell and hearing. The flexibility of the framework also allows us to use it for the simulation of vehicles in the urban environment. Although the current state of our framework needs some improvements regarding optimization and usability, we are confident that our prospective aims can be realized with the presented approach.

Acknowledgements

We would like to thank the German AcademicExchange Service for the granted scholarship.


References

[1] H. A. Mallot, S. D. Steck, and J. M. Loomis. Mechanisms of spatial cognition: behavioral experiments in virtual environments. Künstliche Intelligenz, (4):24–28, 2002.

[2] C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. In SIGGRAPH '87: Proceedings of the 14th annual conference on Computer graphics and interactive techniques, volume 21, pages 25–34, New York, NY, USA, 1987. ACM Press.

[3] X. Tu. Artificial Animals for Computer Animation. Springer, Berlin, Germany, 1999.

[4] P. Bécheiraz and D. Thalmann. A model of nonverbal communication and interpersonal relationship between actors. In Proceedings of Computer Animation '96, pages 58–67. IEEE Computer Society Press, June 1996.

[5] B. Blumberg. Old Tricks, New Dogs: Ethology and Interactive Creatures. PhD thesis, Massachusetts Institute of Technology, 1997.

[6] H. Noser and D. Thalmann. Sensor based synthetic actors in a tennis game simulation. The Visual Computer, 14(4):193–205, 1998.

[7] J. J. Kuffner. Autonomous Agents for Real-Time Animation. PhD thesis, Stanford University, Stanford, CA, USA, December 1999.

[8] M. Lau and J. J. Kuffner. Behavior planning for character animation. In ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA '05). ACM Press, 2005.

[9] P. Faloutsos, M. van de Panne, and D. Terzopoulos. Composable controllers for physics-based character animation. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH '01), pages 251–260. ACM Press, 2001.

[10] J. K. Hodgins, W. L. Wooten, D. C. Brogan, and J. F. O'Brien. Animating human athletics. In Proceedings of the 22nd International Conference on Computer Graphics and Interactive Techniques, pages 71–78. ACM Press, 1995.

[11] W. Shao and D. Terzopoulos. Autonomous pedestrians. In SCA '05: Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, pages 19–28, New York, NY, USA, 2005. ACM Press.

[12] S. Donikian. HPTS: A behaviour modelling language for autonomous agents. In AGENTS '01: Proceedings of the fifth international conference on Autonomous agents, pages 401–408, New York, NY, USA, 2001. ACM Press.

[13] S. R. Musse. Human Crowd Modeling with Various Levels of Behavior Control. PhD thesis, École Polytechnique Fédérale de Lausanne, 2000.

[14] S. R. Musse and D. Thalmann. Hierarchical model for real time simulation of virtual human crowds. IEEE Transactions on Visualization and Computer Graphics, 7:152–164, 2001.

[15] R. Boulic, P. Bécheiraz, L. Emering, and D. Thalmann. Integration of motion control techniques for virtual human and avatar real-time animation. In VRST '97: Proceedings of the ACM symposium on Virtual reality software and technology, pages 111–118, New York, NY, USA, 1997. ACM Press.

[16] C. Bordeux, R. Boulic, and D. Thalmann. An efficient and flexible perception pipeline for autonomous agents. Proceedings of EUROGRAPHICS '99, 18(3):23–30, 1999.

[17] R. J. Millar, J. R. P. Hanna, and S. M. Kealy. A review of behavioural animation. Computers and Graphics, 23:127–143, 1999.

[18] J. R. P. Hanna, R. J. Millar, and W. M. Johnston. Examining the generality of a behavioural animation framework. In WSCG '2001: 9th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, volume 9, 2001.

[19] C. Peters, S. Dobbyn, B. MacNamee, and C. O'Sullivan. Smart objects for attentive agents. In Short Paper Proceedings of the Winter Conference on Computer Graphics 2003. UNION Agency Science Press, February 2003.

[20] J.-S. Monzani. An Architecture for the Behavioural Animation of Virtual Humans. PhD thesis, École Polytechnique Fédérale de Lausanne, July 2002.

[21] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional, July 1997.

[22] D. Thalmann, N. Farenc, and R. Boulic. Virtual human life simulation and database: Why and how. In International Symposium on Database Applications in Non-Traditional Environments (DANTE '99), pages 471–480. IEEE CS Press, 1999.

[23] N. Farenc, R. Boulic, and D. Thalmann. An informed environment dedicated to the simulation of virtual humans in urban context. In Proceedings of EUROGRAPHICS '99, volume 18, pages 309–318, 1999.

[24] M. Kallmann and D. Thalmann. Modeling objects for interaction tasks. In Proceedings of EGCAS '98, Eurographics Workshop on Animation and Simulation, pages 73–76, 1998.