
Bio-Inspired Animated Characters
A Mechanistic & Cognitive View

Ben Kenwright
School of Media Arts and Technology

Southampton Solent University
United Kingdom

Abstract—Unlike traditional animation techniques, which attempt to copy human movement, ‘cognitive’ animation solutions mimic the brain’s approach to problem solving, i.e., a logical (intelligent) thinking structure. This procedural animation solution uses bio-inspired insights (modelling nature and the workings of the brain) to unveil a new generation of intelligent agents. As with any promising new approach, it raises hopes and questions; it is an extremely challenging task that offers a revolutionary solution, not just in animation but in a variety of fields, from intelligent robotics and physics to nanotechnology and electrical engineering. We ask questions such as: how does the brain coordinate muscle signals? How does the brain know which body parts to move? With all these activities happening in our brain, we examine how the brain ‘sees’ the body and how this affects our movements. Through this understanding of the human brain and the cognitive process, models can be created that mimic our abilities, such as synthesizing actions that solve and react to unforeseen problems in a humanistic manner. We present an introduction to the concept of cognitive skills as an aid in finding and designing a viable solution. This helps us address principal challenges, such as: How do characters perceive the outside world (input), and how does this input influence their motions? What is required to emulate adaptive learning skills as seen in higher life-forms (e.g., a child’s cognitive learning process)? How can we control and ‘direct’ these autonomous procedural character motions? Finally, drawing from experimentation and the literature, we suggest hypotheses for solving these questions and more. In summary, this article analyses the biological and cognitive workings of the human mind, specifically motor skills, and reviews cognitive psychology research related to movement in an attempt to produce more attentive behavioural characteristics. We conclude with a discussion of the significance of cognitive methods for creating virtual character animations, their limitations, and future applications.

Keywords—animation, life-like, movement, cognitive, bio-mechanics, human, reactive, responsive, instinctual, learning, adapting, biological, optimisation, modular, scalable

I. INTRODUCTION

Movement is Life Animated films and video games are pushing the limits of what is possible.

In today’s virtual environments, animation tends to be data-driven [1], [2]. It is common to see animated characters using pre-recorded motion capture data, but it is rare to see characters driven by purely procedural solutions. With the dawn of Virtual Reality (VR) and Augmented Reality (AR) there is an ever-growing need for content: to create indistinguishably realistic virtual worlds quickly and cost-effectively. While rendered scenes may appear highly realistic, the ‘movement’ of actively driven systems (e.g., biological creatures) is an open area of research [2]: specifically, the question of how to ‘automatically’ create realistic actions that mimic the real world, including the ability to learn and adapt to unforeseen circumstances in a life-like manner. While we are able to ‘record’ and ‘play back’ highly realistic animations in virtual environments, these have limitations. The motions are constrained to specific skeleton topologies, and it is time-consuming and challenging to create motions for non-humans (creatures and aliens). What is more, recording animations for dangerous situations is impossible using motion capture (so they must be created manually through artistic intervention). In dynamically changing environments (video games), pre-recorded animations are also unable to adapt automatically to changing situations.

This article attempts to solve these problems using biologically inspired concepts. We investigate neurological, cognitive, and behavioural methods, which provide inspirational solutions for creating adaptable models that synthesize life-like character characteristics. We examine how the human brain ‘thinks’ to accomplish tasks, and how it solves unforeseen problems. Exploiting this knowledge of how the brain functions, we formulate a system of conditions that attempts to replicate humanistic properties. We discuss novel approaches to solving these problems by questioning, analysing, and formulating a system based on human cognitive processes.

Cognitive vs Machine Learning Essentially, cognitive computing has the ability to reason creatively about data, patterns, situations, and extended models (dynamically). However, most statistics-based machine learning algorithms cannot handle problems much beyond what they have seen and learned (matched). A machine learning algorithm has to be paired with cognitive capabilities to deal with truly ‘new’ situations. Cognitive science therefore raises challenges for, and draws inspiration from, machine learning, and insights about the human mind help inspire new directions for animation. Hence, cognitive computing, along with many other disciplines within the field of artificial intelligence, is gaining popularity, especially in character systems, and in the not-so-distant future it will have a colossal impact on the animation industry.

Automation The ability to ‘automatically’ generate physically correct humanistic animations is revolutionary. Such a system could remove and add behavioural components (happy and sad), create animations for different physical skeletons using a single set of training data, perform a diverse range of actions (for instance getting up, jumping, dancing, and walking), and react to external interventions while completing an assigned task (i.e., combining motions with priorities). These problem-solving skills are highly valued. We want character agents to learn and adapt to the situation. This includes:

• physically based models (e.g., rigid bodies) that are controlled through internal joint torques (muscle forces)

• controllable, adjustable joint signals to accomplish specific actions (trained)

• learn and retain knowledge from past experiences
• embed personal traits (personality)
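The first of these requirements, a physically based model driven through internal joint torques, can be sketched as a simple proportional-derivative (PD) servo per joint. The following is a minimal illustrative sketch, not the article’s actual controller; the gains, inertia, and single-joint dynamics are all assumed values:

```python
# Minimal illustrative sketch: a single physically simulated joint driven by an
# internal torque. A PD servo converts a target angle (the 'trained' joint
# signal) into a muscle-like torque; gains and inertia are assumed values.
def pd_torque(theta, omega, theta_target, kp=120.0, kd=8.0):
    """Proportional-derivative servo: torque steering the joint to a target angle."""
    return kp * (theta_target - theta) - kd * omega

def step_joint(theta, omega, theta_target, inertia=1.0, dt=0.01):
    """Advance one step with semi-implicit Euler integration."""
    tau = pd_torque(theta, omega, theta_target)
    omega += (tau / inertia) * dt
    theta += omega * dt
    return theta, omega

theta, omega = 0.0, 0.0
for _ in range(500):                      # 5 simulated seconds
    theta, omega = step_joint(theta, omega, theta_target=1.0)
```

Raising `kp` stiffens the joint (a stronger ‘muscle’), while `kd` damps oscillation; retuning these per joint is how such a model would be adjusted for, e.g., different limb strengths.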

Problems We want the method to be automatic (i.e., not depend too heavily on pre-canned libraries). We avoid simply playing back captured animations, instead parameterizing and re-using animations for different contexts (providing stylistic advice to the training algorithm). We want the solution to have the ability to adapt on-the-fly to unforeseen situations in a natural, life-like manner. Having said that, we also want to accommodate a diverse range of complex motions: not just balanced walking, but getting-up, climbing, and dancing actions. With a physics-based model at the heart of the system (i.e., not just a kinematic skeleton but joint torques/muscles), we are able to ensure a physically correct solution. While a real-world human skeleton has a huge number of degrees of freedom, we accept that a lower-fidelity model is able to represent the necessary visual characteristics (enabling reasonable computational overheads). Of course, even a simplified model possesses a large amount of ambiguity, with singularities. All things considered, we do not want to focus on the ‘actions’, but to embrace the autonomous emotion, behaviour, and cognitive properties that sit on top of the motion (the intelligent learning component).

Figure 1. Homunculus Body Map - The somato-sensory homunculus is a kind of map of the body [3], [4]. The distorted model/view of a person (see Figure 2) represents the amount of sensory information a body part sends to the central nervous system (CNS).

Geometric to Cognitive Synthesizing animated characters for virtual environments addresses the challenges of automating a variety of difficult development tasks. Early research combined geometric and inverse kinematic models to simplify key-framing. Physical models for animating particles, rigid bodies, deformable solids, fluids, and gases have offered the means to generate copious quantities of realistic motion through dynamic simulation. Bio-mechanical models employ simulated physics to automate the lifelike animation of animals with internal muscle actuators. In recent years, research in behavioural modelling has made progress towards ‘self-animating’ characters that react appropriately to perceived environmental stimuli [5], [6], [7], [8]. It has remained difficult, however, to instruct these autonomous characters so that they satisfy the programmer’s goals. As pointed out by Funge et al. [9], the computer graphics solution has evolved from geometric solutions to more logical mathematical approaches, and ultimately to cognitive models, as shown in Figure 3.

A large amount of work has been done on motion re-targeting (i.e., taking existing pre-recorded animations and modifying them for different situations) [10], [11], [12], and on targeted solutions that generate animations for specific situations, such as locomotion [13] and climbing [14]. Kinematic models do not take into account the physical properties of the model and, in addition, are only able to solve local problems (e.g., reaching and stepping, not complex rhythmic actions) [15], [16], [17]. Procedural models may not converge to natural-looking motions [18], [19], [20]. Cognitive models go beyond behavioural models, in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. Cognitive models are applicable in instructing a new breed of highly autonomous, quasi-intelligent characters that are beginning to find use in interactive virtual environments. We decompose cognitive modelling into two related sub-tasks: (1) domain knowledge specification and (2) character instruction. This is reminiscent of the classic dictum from the field of artificial intelligence (AI) that tries to promote modularity of design by separating out knowledge from control.

knowledge + instruction = intelligent behavior (1)

Domain (knowledge) specification involves administering knowledge to the character about its world and how that world can change. Character instructions tell the character to try to behave in a certain way within its world in order to achieve specific goals. Like other advanced modeling tasks, both of these steps can be fraught with difficulty unless developers are given the right tools for the job.

Components We wanted to avoid a ‘single’ amalgamated algorithm (e.g., neural networks or connectionist models [21]). Instead, we investigate modular or dissectable learning models for adapting joint signals to accomplish tasks: for example, genetic algorithms [18] in combination with Fourier methods to subdivide complex actions into components (i.e., to extract and identify behavioural characteristics [22]). Joint motions are essentially signals, while the physics-based model ensures the generated motions are physically correct [23]. To say nothing of the advancements in parallel hardware: we envision the exploitation of massively parallel architectures as essential.
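As a rough illustration of this pairing (a genetic algorithm evolving the coefficients of a Fourier-series joint signal), consider the following sketch. The encoding, mutation scheme, population size, and target signal are all our own assumptions, not the published method:

```python
import math
import random

# Illustrative sketch (assumed detail, not the published algorithm): a joint
# signal encoded as a truncated Fourier series whose coefficients are evolved
# by a simple elitist genetic algorithm toward a target rhythmic motion.
random.seed(1)

def signal(coeffs, t):
    """Evaluate a truncated Fourier series: a_k*sin(kt) + b_k*cos(kt) harmonics."""
    return sum(a * math.sin((k + 1) * t) + b * math.cos((k + 1) * t)
               for k, (a, b) in enumerate(coeffs))

def fitness(coeffs, target, samples=64):
    """Negative squared error against a target joint trajectory over one cycle."""
    err = 0.0
    for i in range(samples):
        t = 2.0 * math.pi * i / samples
        err += (signal(coeffs, t) - target(t)) ** 2
    return -err

def evolve(target, harmonics=3, pop_size=40, generations=120):
    """Keep the fittest quarter each generation; refill with mutated copies."""
    pop = [[(random.uniform(-1, 1), random.uniform(-1, 1))
            for _ in range(harmonics)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, target), reverse=True)
        elite = pop[: pop_size // 4]
        pop = elite + [[(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                        for a, b in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda c: fitness(c, target))

best = evolve(lambda t: math.sin(t))  # target: a simple rhythmic swing
```

In a full system the fitness function would score the resulting physics simulation (e.g., centre-of-mass tracking) rather than a known target curve.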

Figure 2. Homunculus Body Map - Reinert et al. [4] presented a graphical paper on mesh deformation to visualize the somato-sensory information of the brain-body. The figure conveys the importance of the neuronal homunculus, i.e., the relation of human body part size to neural density and the brain.

Contribution The novel contribution of this technical article is the amalgamation of numerous methods, for instance from bio-mechanics, psychology, robotics, and computer animation, to address the question of ‘how can we make virtual characters solve unforeseen problems automatically and in a realistic manner?’ (i.e., mimicking the human cognitive learning process).


Figure 3. Timeline - Computer Graphics Cognitive Development Model (Geometric, Kinematic, Physical, Behavioural, and Cognitive) [9]. Simplified illustration of milestones over the years that have contributed novel animation solutions; it emphasises the gradual transition from kinematic and physical techniques to intelligent behavioural models. [A] [24]; [B] [20]; [C] [19]; [D] [25]; [E] [26]; [F] [27]; [G] [28]; [H] [18]; [I] [29]; [J] [30]; [K] [31]; [L] [32]; [M] [33]; [N] [34]; [O] [35]; [P] [8]; [Q] [36]; [R] [7]; [S] [5]; [T] [6]; [U] [37]; [V] [38]

II. BACKGROUND & RELATED WORK

Literature Gap The research in this article brings together numerous diverse concepts, and while in their individual fields they are well studied, taken as a whole and applied to virtual character animations there is a serious gap in the literature. Hence, we begin by exploring branches of research from cognitive psychology and bio-mechanics before combining them with computer animation and robotics concepts.

Autonomous Animation Solutions Formal approaches to animation, such as genetic algorithms [18], [19], [20], may not converge to natural-looking motions without additional work, such as artist intervention or constrained/complex fitness functions. This limits and constrains the ‘automation’ factor. We see autonomy as the emergence of salient, novel action discovery through the self-organisation of high-level goal-directed orders. The behavioural aspect emerges from the physical (or virtual) constraints and fundamental low-level mechanisms. We adapt bodily motor controls (joint signals) from randomness to purposeful actions based on cognitive development (Lee [39] referred to this process as evolving from babbling to play). Interestingly, this intrinsic method of behavioural learning has also been demonstrated in biological models (known as action discovery) [40].

Navigation/Controllers/Mechanical Synthesizing human movement that mimics real-world behaviours ‘automatically’ is a challenging and important topic. Typically, reactive approaches for navigation and pursuit [24], [41], [42], [27] may not readily accommodate task objectives, sensing costs, and cognitive principles. A cognitive solution adapts and learns (finds answers to unforeseen problems).

Expression/Emotion Humans exhibit a wide variety of expressive actions, which reflect their personalities, emotions, and communicative needs [25], [26], [28]. These variations often influence the performance of simpler gestural or facial movements.

Components The essential components are:

• Fourier - subdivide actions into components, extract and identify behavioural characteristics [22]

• Heuristic Optimisation [18] - adapting non-linear signals (with purpose)

• Physics-Based [43], [23] - torques and forces to control the model

• Parallel Architecture - exploit massively parallel processor architectures, such as the graphical processing unit (GPU)

• Randomness - inject awareness and randomness (blood flow, respiratory signals, background noise) [44], [45]
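The ‘Randomness’ component above might be realised, for example, by overlaying low-amplitude physiological noise on each joint signal; the sketch below is our own illustrative assumption (the amplitudes and frequencies are arbitrary):

```python
import math
import random

# Illustrative sketch (our own assumption of the 'Randomness' component):
# overlay low-amplitude noise on a joint signal to mimic physiological
# background activity (blood flow, respiratory rhythm), so that a held
# pose never looks perfectly frozen.
random.seed(0)

def noisy_signal(base_value, t, amplitude=0.01, freq=0.3):
    """Base joint angle plus a slow breathing-like drift and white jitter."""
    drift = amplitude * math.sin(2.0 * math.pi * freq * t)  # slow rhythmic sway
    jitter = random.gauss(0.0, amplitude * 0.2)             # background noise
    return base_value + drift + jitter

# sample a held pose (base angle 0.5 rad) over 100 seconds
samples = [noisy_signal(0.5, i * 0.1) for i in range(1000)]
```

The noise stays small enough not to disturb balance control, but keeps idle characters subtly ‘alive’.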

Brain Body Map As shown in Figure 1, we are able to map the mind’s awareness of different body parts. This is known as the homunculus body map. So why is it important for movement? It helps us understand the neural mechanisms of human sensorimotor coordination and the cognitive connection. While we are a complex biological organism, we need feedback and information (input) to be able to move and thus live (i.e., movement is life). The motor part of the brain relies on information from the sensory systems. The control signals are dynamically changing depending on our state. Simply put, the better the central representation, the better the motor output will be, and the more life-like and realistic the final animations will be. Our motor systems need to know the state of our body. If the situation is not known or not very clear, the movements will not be good, because the motor systems will be ‘afraid’ to go all out. This is very similar to driving a car on an unknown road in misty conditions with only an old, worn, worm-eaten map: we drive slowly and tensely, to avoid hitting something or getting off the road. This is safety behaviour: safe, but taxing on the system.

Cognitive Science The cognitive science of motion is an interdisciplinary scientific study of the mind and its processes. We examine what cognitive motion is, what it does, and how it works. This includes research into intelligence and behaviour, especially focusing on how information is represented, processed,


Figure 4. Brain and Actions - The phases (left-to-right) the human brain goes through, from thinking about doing a task to accomplishing it (e.g., walking to the kitchen to get a drink from the cupboard).

and transformed (in faculties such as perception, language, memory, attention, reasoning, and emotion) within nervous systems (humans or other animals) and machines (e.g., computers). Cognitive motion science consists of multiple research disciplines, including robotics, psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology. The subject spans multiple levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning; from neural circuitry to modular brain organization. However, the fundamental concept of cognitive motion is the understanding of instinctual thinking in terms of the structural mind and the computational procedures that operate on those structures. Importantly, cognitive solutions are not only adaptive but also anticipatory and prospective; that is, they need to have (by virtue of their phylogeny) or develop (by virtue of their ontogeny) some mechanism to rehearse hypothetical scenarios.

Neural Networks and Cognitive Simulators Computational neuroscience [46], [29], [47] offers biologically inspired neural models for simulating information processing, cognition, and behaviour. The majority of the research has focused on modelling ‘isolated components’. Cognitive architectures [48] use biologically based models for goal-driven learning and behaviours. Publicly available neural network simulators exist [49].

Motor Skills Our brain sees the world in ‘maps’. The maps are distorted, depending on how we use each sense, but they are still maps. Almost every sense has a map; most senses have multiple maps. We have a ‘tonotopic’ map, a map of sound frequency from high pitched to low pitched, which is how our brain processes sound. We have a ‘retinotopic’ map, a reproduction of what we are seeing, which is how the brain processes sight. Our brain loves maps. Most importantly, we have maps of our muscles. The mapping from sensory information to motor movement is shown in Figure 1. For muscle movements, the finer and more detailed the movements are, the more brain space those muscles have. Hence, we can address which muscles take priority and under what circumstances (i.e., sensory input). This also opens the door to lots of interesting and exciting questions, such as: what happens to the maps if we lose a body part, such as a finger?

Psychology Aspect A number of interesting facts are hidden in the psychology of movement that are often taken for granted or overlooked. Incorporating them in a dynamic system allows us to solve a number of problems; for example, when we observe movements which are slightly different from each other but possess similar characteristics. The work by Armstrong [50] showed that when a movement sequence is speeded up as a unit, the overall relative movement or ‘phasing’ remains constant. This led to the discovery of relative forces, i.e., the relationship among the forces in the muscles participating in the action.

How the Brain Controls Muscles Let us pretend that we want to go to the kitchen, because we are hungry. First, an area in our brain called the parietal lobe comes up with lots of possible plans. We could get to the kitchen by skipping, sprinting, uncoordinated somersaulting, or walking. The parietal lobe sends these plans to another brain area called the basal ganglia. The basal ganglia picks ‘walking’ as the best plan (with uncoordinated somersaulting a close second). It tells the parietal lobe the plan. The parietal lobe confirms it, and sends the ‘walk to kitchen’ plan down the spinal cord to the muscles. The muscles move. As they move, our cerebellum kicks into high gear, making sure we turn right before we crash into the kitchen counter, and that we jump over the dog. Part of the cerebellum’s job is to make quick changes to muscle movements while they are happening (see Figure 4).

Visualizing the Solution (Offline) We visualize a goal. In our mind, over and over and over again. We picture the movements. We see ourselves catching that ball. Dancing that toe touch. Swimming that breaststroke. We watch it in the movie of our mind whenever we can. Scrutinize it. Is our wrist turning properly? Is our kick high enough? If not, we change the picture. We see ourselves doing the movement perfectly. As far as our parietal lobe and basal ganglia are concerned, this is exactly the same as doing the movement. When we visualize the movement, we activate all those planning pathways. Those neurons fire, over and over again, which is what needs to happen for our synapses to strengthen. In other words, by picturing the movements, we are actually learning them. This makes it easier for the parietal lobe to send the right message to the muscles. So when we actually try to perform a movement, we will get better, faster; we will need less physical practice to be good at sports. This does not work for general fitness (i.e., increased strength); we still need to train our muscles, heart, and lungs to become strong. However, it is good for skilled movements. Basketball lay-ups. Gymnastics routines. For improved technique, visualization works. We train our brain, which makes it easier to control our muscles. What does this have to do with character simulations? We are able to mimic the ‘visualization’ approach by having our system constantly run simulations in the background, exploiting all that parallel processing power. We run large numbers of simulations one or two seconds in advance and see how the result plays out. If the character’s foot is a few centimetres forward, or if we use more torque on the knee muscle, how does this compare with the ideal animation we are aiming for? As we find solutions, we store them and improve upon them each time a similar situation arises.
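The background-simulation idea can be sketched as follows; the toy one-dimensional dynamics, candidate count, and scoring here are illustrative assumptions, not the article’s system:

```python
import random

# Illustrative sketch of 'visualizing' ahead of time: rehearse many short
# forward simulations with perturbed control torques and commit to the one
# whose outcome lands closest to the goal. The toy one-dimensional dynamics
# and all parameters are assumptions.
random.seed(2)

def rollout(torque, steps=20, dt=0.05):
    """Toy forward simulation: a unit point mass accelerated by a constant torque."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        vel += torque * dt
        pos += vel * dt
    return pos

def plan(target_pos, base_torque=0.0, candidates=100):
    """Mentally rehearse perturbed plans; keep the one ending closest to the goal."""
    best_torque = base_torque
    best_err = abs(rollout(base_torque) - target_pos)
    for _ in range(candidates):
        cand = base_torque + random.gauss(0.0, 2.0)   # a perturbed 'imagined' plan
        err = abs(rollout(cand) - target_pos)
        if err < best_err:
            best_torque, best_err = cand, err
    return best_torque

chosen = plan(target_pos=1.0)
```

Stored winners could seed `base_torque` the next time a similar situation arises, mirroring the ‘store and improve’ step described above; the independent rollouts are also trivially parallelisable.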


Figure 5. Overview - High-level view of interconnected components and their justifications. (a) We have a current (starting) state and a final state. The unknown middle transitioning states are what we are searching for. The transition state is a dynamic problem that is specific to the situation; for instance, the terrain may vary (slopes or crawling under obstacles). (b) A heuristic model would be able to train a set of trigonometric functions (e.g., a Fourier series) to create rhythmic motions that accomplish the task, the low-level task (fitness function) being a simple ‘overall centre-of-mass trajectory’. (c) With (b) on its own, the solution is plagued with issues, such as how to steer or control the type of motion and whether the final motion is ‘humanistic’ or ‘life-like’. Hence, we have a ‘pre-defined’ library of motions that are chosen based on the type of animation we are leaning towards (standard walk or hopping). The information from the animation is fed back into the fitness function in (b), providing a multi-objective problem: centre of mass, end-effectors, and frequency components for ‘style’. (d) The solution from each problem is ‘stored’ in a sub-bank of the animation and used for future problems. This builds upon previous knowledge to help solve new problems faster in a coherent manner (e.g., previous experiences will cause different characters to create slightly different solutions over time).
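The multi-objective fitness in (b) and (c) could be written, for instance, as a weighted sum of centre-of-mass, end-effector, and frequency-domain ‘style’ errors. The weights and term structure below are assumptions for illustration:

```python
# Illustrative multi-objective fitness (lower is better): weighted squared errors
# for centre-of-mass tracking, end-effector placement, and frequency-domain
# 'style'. The weights and flat-list representations are assumptions.
def fitness(com, com_ref, effectors, effectors_ref, style, style_ref,
            w_com=1.0, w_eff=0.5, w_style=0.25):
    """Weighted sum of squared errors across the three objectives."""
    com_err = sum((a - b) ** 2 for a, b in zip(com, com_ref))
    eff_err = sum((a - b) ** 2 for a, b in zip(effectors, effectors_ref))
    sty_err = sum((a - b) ** 2 for a, b in zip(style, style_ref))
    return w_com * com_err + w_eff * eff_err + w_style * sty_err
```

Re-weighting the terms trades physical accuracy against stylistic fidelity, which is one way the library feedback in (c) could steer the heuristic search in (b).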

Physically Correct Model Our solution controls a physics-based model using joint torques, as in the real world. This mimics the real world more closely: not only do we require the model to move in a realistic manner, but it also has to control joint muscles in sufficient ratios to achieve the final motion (e.g., balance control). Adjusting the physical model, for instance muscle strength or leg lengths, allows the model to retrain to achieve the action.

(Get Up) Rise Animations Animation is a diverse and complex area, so rather than try to create solutions for every possible situation, we focus on a particular set of actions, that is, rising movements. Rise animations require a suitably diverse range of motor skills. We formulate a set of tasks to evaluate our algorithm, such as: get up from the front, get up from the back, get up on uneven ground, and so on. The model also encapsulates underlying properties, such as visual attention, expressive qualities (tired, unsure, eager), and human expressiveness. We consider a number of factors, such as inner and outer information, emotion, personality, and primary and secondary goals.

III. OVERVIEW

High Level Elements The system is driven by four key sources of information:

1) the internal information (e.g., logistics of the brain, experience, mood);

2) the aim or action;

3) external input (e.g., environmental, contacts, comfort, lighting);


4) memory and information retrieval (e.g., parallel models and associative memory).
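These information sources could be grouped into a single input structure passed to the decision layer; the class and field names below are illustrative assumptions, not part of the described system:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterInput:
    """Bundles the system's driving information sources (hypothetical layout)."""
    internal: dict = field(default_factory=dict)   # mood, experience, 'logistics of the brain'
    goal: str = ""                                 # the aim or action
    external: dict = field(default_factory=dict)   # contacts, environment, comfort, lighting
    memory: list = field(default_factory=list)     # retrieved past solutions (associative memory)
```

Keeping the sources separated like this makes it explicit which channel (internal state, goal, senses, or memory) influenced a given motor decision.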

Motion Capture Data (Control) We have a library of actions as reference material for look-up and comparison. Some form of 'control' and 'input' is needed to steer the characters to perform actions in a particular way; instead of the artist creating a large look-up array of animations for every single possible solution, we provide fundamental poses and simple pre-recorded animations to 'guide' the learning algorithm. Search models are able to explore their diverse search-space to reach the goal (e.g., heuristically adjusting joint muscles); however, a reference 'library' allows us to steer the solution towards what is 'natural-looking', since there are a wide number of ways of accomplishing a task - but what is 'normal' and what is 'strange' and uncomfortable? The key points we concentrate on are:

1) the animation requires basic empirical information (e.g., reference key-poses) from human movement and cognitive properties;

2) the movement should not simply replay pre-recorded motions, but adapt and modify them to different contexts;

3) the solution must react to disturbances and changes in the world while completing the given task;

4) the senses provide unique pieces of information, which should be combined with internal personality and emotion mechanisms to create the desired actions and/or reactions.
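The reference library in the first key point can act as a 'naturalness' prior: candidate poses produced by the search are scored against the nearest key-pose. A minimal sketch, assuming poses are joint-angle vectors and using a squared-difference distance (an assumed metric):

```python
def pose_distance(pose, reference):
    """Sum of squared joint-angle differences - a simple 'naturalness' proxy."""
    return sum((p - r) ** 2 for p, r in zip(pose, reference))

def nearest_reference(pose, library):
    """Pick the closest key-pose from the reference library."""
    return min(library, key=lambda ref: pose_distance(pose, ref))
```

A search candidate far from every library pose can then be penalised as 'strange', steering the heuristic toward normal-looking solutions without replaying clips verbatim.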

Blending/Adapting Animation Libraries During motor skill acquisition, the brain learns to map between 'intended' limb motion and requisite muscular forces. We propose that regions (i.e., particular body segments) in the animation library are blended together to find a solution that is aesthetically pleasing (i.e., based upon pre-recorded motions instead of randomly searching).
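A simple instance of this blending is a per-joint weighted average over library clips; the convention that the weights sum to one is an assumption for this sketch:

```python
def blend_segments(clips, weights):
    """Linearly blend joint angles from several library clips.
    clips: list of pose vectors (equal length); weights: one per clip,
    assumed to sum to 1 so the result stays in the clips' range."""
    n = len(clips[0])
    return [sum(w * clip[j] for clip, w in zip(clips, weights)) for j in range(n)]
```

Restricting the blend to a particular body segment (e.g., only the arm joints) gives the region-wise blending described above, with the rest of the pose taken from the search solution.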

Virtual Infant (or Baby) Imagine a baby with no knowledge or understanding. As we explained, this is a bottom-up view, starting with nothing and educating the system to mimic humanistic (organic) qualities; learning algorithms tune skeletal motor signals to accomplish high-level tasks. As with a child, a 'trial-and-error' approach to learning - exploring what is possible and impossible - eventually reaches a solution. This requires continuously integrating corrective guidance (as with a child - without knowing what is right and wrong, the child will never learn). This guidance is through fitness criteria and example motion clips (as children do - see and copy - or try to), performing multiple training exercises over and over again to learn skills, and having the algorithm actively improve (e.g., proprioception - how the brain understands the body). As we learn to perform motions, there are thousands of small adjustments that our body as a whole is making every millisecond to ensure optimal movement (quickest, most energy efficient, closest to the desired style), constantly monitoring the body by sending and receiving sensory information (e.g., to and from every joint, limb, and contact). Over time, the experience strengthens the model's ability to accomplish tasks quicker and more efficiently.
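The trial-and-error loop with corrective guidance can be sketched as stochastic hill climbing over motor parameters, where the fitness function plays the role of the guidance; the function names, step size, and iteration budget are illustrative assumptions:

```python
import random

def trial_and_error(initial, fitness, step=0.1, iters=1000, seed=0):
    """Child-like exploration: random perturbations of the motor parameters,
    keeping only changes the fitness criteria judge as improvements."""
    rng = random.Random(seed)
    best, best_f = list(initial), fitness(initial)
    for _ in range(iters):
        cand = [x + rng.uniform(-step, step) for x in best]
        f = fitness(cand)
        if f < best_f:              # corrective guidance: keep what works
            best, best_f = cand, f
    return best, best_f
```

Repeating such 'training exercises' from different starting points, and seeding them from memory, mirrors the repeated practice and gradual strengthening described above.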

Stability Autonomous systems have 'stability' issues (i.e., they are far from equilibrium stability) [51]. Due to the dynamic nature of a character's actions, they depend on their environment (external factors) and require interaction; they are open processes that exhibit self-organization. However, we can measure stability in relation to reference poses, energy, and balance to draw conclusions about the effectiveness of the learned solution.
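One way to combine these stability measures into a single score is a weighted sum of pose deviation, centre-of-mass offset from the support centre, and energy use; the weights here are illustrative assumptions, not values from the paper:

```python
def stability_score(pose, reference_pose, com, support_center, energy):
    """Lower is better: deviation from a reference pose, balance error
    (CoM offset from the support polygon's centre), and energy spent.
    The 1.0 / 2.0 / 0.1 weights are assumed for illustration."""
    pose_err = sum((p - r) ** 2 for p, r in zip(pose, reference_pose))
    balance_err = sum((c - s) ** 2 for c, s in zip(com, support_center))
    return 1.0 * pose_err + 2.0 * balance_err + 0.1 * energy
```

Tracking this score over a motion gives a practical proxy for the open-system stability discussed above, without needing a formal equilibrium analysis.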

Memory The model learns through explorative searching (i.e., with quantitative measures for comfort, security, and satisfaction). While a character may find an 'optimal' solution that meets the specified criteria, it will continue to expand its memory repertoire of actions. This is a powerful component, increasing the efficiency in achieving a goal (e.g., the development of walking and retention of balanced motion in different circumstances would be more effective). The view that exploration and retention (memory) is crucial to ontogenetic development is supported by research findings in developmental psychology [52]. Hofsten [53] explains that it is not necessarily success at achieving task-specific goals that drives development but the discovery of new ways of doing something (through exploration). This forms a solution that builds upon 'prior knowledge' with an increased reliance on machine learning and statistical evaluation (i.e., for tuning the system parameters), leading to a model that constantly acquires new knowledge both for the current and future tasks.
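The memory component could be sketched as a task-indexed bank that keeps the best-scoring solution seen so far and serves it up to seed future searches; the class below is a hypothetical illustration of that idea:

```python
class MotionMemory:
    """Task-indexed bank of learned solutions (hypothetical sketch).
    Stores the best (lowest-score) solution per task for reuse."""
    def __init__(self):
        self._bank = {}

    def store(self, task, solution, score):
        # Keep only the best solution seen so far for this task.
        best = self._bank.get(task)
        if best is None or score < best[1]:
            self._bank[task] = (solution, score)

    def recall(self, task):
        # Seed a new search from prior knowledge, or None if unseen.
        entry = self._bank.get(task)
        return entry[0] if entry else None
```

Seeding exploration from `recall()` rather than from scratch is what makes previously solved tasks, and near-variants of them, progressively faster to solve.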

IV. COMPLEXITY

Future directions include: experimenting with optimisation algorithms (i.e., different fitness criteria for specific situations); highly dynamic animations (jumping or flying through the air); close-proximity simulations (dancing, wrestling, getting in/out of a vehicle); exploring 'beyond' human to creative creatures (multiple legs and arms); investigating 'interesting' behaviours instead of aesthetic qualities; and, as the system and training evolve, using a 'control language' to give orders. The approach is not limited to generic motions (i.e., walking and jumping), but offers the ability to learn and search for solutions (whatever the method). We introduce risk, harm, and comfort to 'limit' the solutions to be more 'human' and organic, and avoid unsupervised learning since it leads to random, unnatural, and uncontrollable motions; simple examples (i.e., training data) steer the learning. The system gathers knowledge and extends its memory of experiences to help solve future problems (learn from past problems). This method is very promising for building organic real-life systems that handle unpredictable situations in a logical, natural manner. The technique is scalable and generalizes across topologies, and learned solutions can be shared and transferred between characters (i.e., accelerated learning through sharing).

Figure 6. Complexity - As animation and behavioural character models become increasingly complex, it becomes more challenging and time consuming to customize and create solutions for specific environments/situations.

A physically correct, self-adapting, learning animation system to mimic human cognitive mechanics is a complex task


that embodies a wide range of biologically based concepts. A bottom-up approach (i.e., starting with nothing) forms a foundation from which greater details can be added. As the model grows in complexity and detail, more expressive and autonomous animations appear, leading on to collaborative agents, i.e., social learning and interaction (behaviour in groups). The enormous complexity of the human brain and its ability to problem solve cannot be underestimated - however, through simple approximations we are able to develop autonomous animation models that embody and possess humanistic qualities, such as cognitive and behavioural learning abilities.

We tackle a complex problem - our movement allows us to express a vast array of behaviours in addition to solving physical problems, such as balance and locomotion. We have only scraped the surface of what is possible - constructing and explaining a simple solution (for a relatively complex neuro-behavioural model) - to investigate a modular, extendible framework to synthesize human movement (i.e., mapping functionality, problem solving, mapping of brain to anatomy, and learning/experience).

Body Language The way we 'move' says a lot. How we stand and how we walk conveys 'emotional' details, and we humans are very good at spotting these underlying characteristics. These fundamental physiological motions are important in animation if we want to synthesize life-like characters. While these subtle underlying motions are aesthetic (i.e., sitting on top of the physical action or goal), they are nonetheless equally important. Emotional synthesis is often classified as a low-level biological process [54]; chemical reactions in the brain for stress and pain correlate with and modulate various behaviours (including motor control), producing a vast array of effects - influencing sensitivity, mood, and emotional responses. We have taken the view that the motion and learning is driven by a high-level cognitive model (avoiding the various underlying physiological and chemical parameters).

Input (Sensory Data) The brain has a vast array of sensory data, such as the eyes, sound, temperature, smell, and feelings, that feed in to make the final decision. Technically, our simple assumption is analogous to a blind person taking lots of short exploratory motions to discover how to accomplish the task. We reduce the skeleton complexity compared to a full human model (numerical complexity). The input is physical information from the environment, like contacts, centre of mass, and end-effector locations; the output is motor control signals - shaped by behavioural selection, the example learning motion library, emotion, and fitness evaluation.

V. CONCLUSION

We have specified a set of simple constraints to steer and control the animation (e.g., get-up poses). We developed a model based on biology, cognitive psychology, and adaptive heuristics to create animations that control a physics-based skeleton that adapts and re-trains parameters to meet changing situations (e.g., different physical and environmental information). We inject personality and behavioural components to create animations that capture life-like qualities (e.g., mood, tired, and scared).

This article suggests several possibilities for future work. It would be valuable to test specific hypotheses and assumptions further by constructing more focused and rigorous experiments. However, these hypotheses are hard to state precisely - since we are trying to model humanistic cognitive abilities - so we approach them with mixed feelings. A practical approach might be to directly compare and contrast real-world and synthesized situations; for instance, an experiment of an actor dealing with difficult situations, such as stepping over objects and walking under bridges. Younger children approach the problem in a different way - similar to our computer agent - learning through trial and error, behaving less mechanically and more consciously. Further, communication with a director (e.g., example animations and poses for control) might lead to more formal languages of commands. This would help us learn precisely what sorts of commands are needed and when they should be issued. Finally, we could go further by developing richer cognitive models and control languages for describing motion and style to solve questions not even imagined.

We have taken a simplified view of cognitive modelling. We will continue to see cognitive architectures develop over the coming years that are capable of adapting and self-modifying, both in terms of parameter adjustment and phylogenetic skills. This will be through learning and, more importantly, through the modification of the very structure and organization of the system itself (memory and algorithm), so that it is capable of altering its system dynamics based on experience, expanding its repertoire of actions, and thereby adapting to new circumstances [52]. A variety of learning paradigms will need to be developed to accomplish these goals, including, but not necessarily limited to, unsupervised, reinforcement, and supervised learning.

Learning through Watching Providing the ability to translate 2D video images into 3D animation sequences would allow cognitive learning algorithms to constantly 'watch' and learn from people - watching people in the street walking and avoiding one another, climbing over obstacles, and interacting - to reproduce similar characteristics virtually.

REFERENCES

[1] D. Vogt, S. Grehl, E. Berger, H. B. Amor, and B. Jung, "A data-driven method for real-time character animation in human-agent interaction," in Intelligent Virtual Agents. Springer, 2014, pp. 463–476.

[2] T. Geijtenbeek and N. Pronost, "Interactive character animation using simulated physics: A state-of-the-art review," in Computer Graphics Forum, vol. 31, no. 8. Wiley Online Library, 2012, pp. 2492–2515.

[3] E. N. Marieb and K. Hoehn, Human anatomy & physiology. Pearson Education, 2007.

[4] B. Reinert, T. Ritschel, and H.-P. Seidel, "Homunculus warping: Conveying importance using self-intersection-free non-homogeneous mesh deformation," Computer Graphics Forum (Proc. Pacific Graphics 2012), vol. 5, no. 31, 2012.

[5] T. Conde and D. Thalmann, "Learnable behavioural model for autonomous virtual agents: low-level learning," in Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems. ACM, 2006, pp. 89–96.

[6] F. Amadieu, C. Marine, and C. Laimay, "The attention-guiding effect and cognitive load in the comprehension of animations," Computers in Human Behavior, vol. 27, no. 1, 2011, pp. 36–40.

[7] E. Lach, "fact-animation framework for generation of virtual characters behaviours," in Information Technology, 2008. IT 2008. 1st International Conference on. IEEE, 2008, pp. 1–4.

[8] J.-S. Monzani, A. Caicedo, and D. Thalmann, "Integrating behavioural animation techniques," in Computer Graphics Forum, vol. 20, no. 3. Wiley Online Library, 2001, pp. 309–318.

[9] J. Funge, X. Tu, and D. Terzopoulos, "Cognitive modeling: knowledge, reasoning and planning for intelligent characters," in Proceedings of the 26th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co., 1999, pp. 29–38.

[10] S. Tak and H.-S. Ko, "A physically-based motion retargeting filter," ACM Transactions on Graphics (TOG), vol. 24, no. 1, 2005, pp. 98–117.

[11] S. Baek, S. Lee, and G. J. Kim, "Motion retargeting and evaluation for VR-based training of free motions," The Visual Computer, vol. 19, no. 4, 2003, pp. 222–242.

[12] J.-S. Monzani, P. Baerlocher, R. Boulic, and D. Thalmann, "Using an intermediate skeleton and inverse kinematics for motion retargeting," in Computer Graphics Forum, vol. 19, no. 3. Wiley Online Library, 2000, pp. 11–19.


[13] B. Kenwright, R. Davison, and G. Morgan, "Dynamic balancing and walking for real-time 3d characters," in Motion in Games. Springer, 2011, pp. 63–73.

[14] C. Balaguer, A. Gimenez, J. M. Pastor, V. Padron, and M. Abderrahim, "A climbing autonomous robot for inspection applications in 3d complex environments," Robotica, vol. 18, no. 03, 2000, pp. 287–297.

[15] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popovic, "Style-based inverse kinematics," in ACM Transactions on Graphics (TOG), vol. 23, no. 3. ACM, 2004, pp. 522–531.

[16] D. Tolani, A. Goswami, and N. I. Badler, "Real-time inverse kinematics techniques for anthropomorphic limbs," Graphical models, vol. 62, no. 5, 2000, pp. 353–388.

[17] T. B. Moeslund, A. Hilton, and V. Kruger, "A survey of advances in vision-based human motion capture and analysis," Computer vision and image understanding, vol. 104, no. 2, 2006, pp. 90–126.

[18] B. Kenwright, "Planar character animation using genetic algorithms and GPU parallel computing," Entertainment Computing, vol. 5, no. 4, 2014, pp. 285–294.

[19] K. Sims, "Evolving virtual creatures," in Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994, pp. 15–22.

[20] J. T. Ngo and J. Marks, "Spacetime constraints revisited," in Proceedings of the 20th annual conference on Computer graphics and interactive techniques. ACM, 1993, pp. 343–350.

[21] J. A. Feldman and D. H. Ballard, "Connectionist models and their properties," Cognitive science, vol. 6, no. 3, 1982, pp. 205–254.

[22] M. Unuma, K. Anjyo, and R. Takeuchi, "Fourier principles for emotion-based human figure animation," in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques. ACM, 1995, pp. 91–96.

[23] P. Faloutsos, M. Van de Panne, and D. Terzopoulos, "Composable controllers for physics-based character animation," in Proceedings of the 28th annual conference on Computer graphics and interactive techniques. ACM, 2001, pp. 251–260.

[24] H. Noser, O. Renault, D. Thalmann, and N. M. Thalmann, "Navigation for digital actors based on synthetic vision, memory, and learning," Computers and graphics, vol. 19, no. 1, 1995, pp. 7–19.

[25] H. H. Vilhjalmsson, "Autonomous communicative behaviors in avatars," Ph.D. dissertation, Massachusetts Institute of Technology, 1997.

[26] J. Cassell, H. H. Vilhjalmsson, and T. Bickmore, "Beat: the behavior expression animation toolkit," in Life-Like Characters. Springer, 2004, pp. 163–185.

[27] X. Tu and D. Terzopoulos, "Artificial fishes: physics, locomotion, perception, behavior," in Proceedings of the 21st annual conference on computer graphics and interactive techniques. ACM, 1994, pp. 43–50.

[28] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Becket, B. Douville, S. Prevost, and M. Stone, "Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents," in Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994, pp. 413–420.

[29] X. Yao, "Evolving artificial neural networks," Proceedings of the IEEE, vol. 87, no. 9, 1999, pp. 1423–1447.

[30] H. A. ElMaraghy, "Kinematic and geometric modelling and animation of robots," in Proc. of Graphics Interface '86 Conference. ACM, 1986, pp. 15–19.

[31] C. W. Reynolds, "Computer animation with scripts and actors," in ACM SIGGRAPH Computer Graphics, vol. 16, no. 3. ACM, 1982, pp. 289–296.

[32] N. Burtnyk and M. Wein, "Interactive skeleton techniques for enhancing motion dynamics in key frame animation," Communications of the ACM, vol. 19, no. 10, 1976, pp. 564–569.

[33] C. Csuri, R. Hackathorn, R. Parent, W. Carlson, and M. Howard, "Towards an interactive high visual complexity animation system," in ACM SIGGRAPH Computer Graphics, vol. 13, no. 2. ACM, 1979, pp. 289–299.

[34] R. A. Goldstein and R. Nagel, "3-d visual simulation," Simulation, vol. 16, no. 1, 1971, pp. 25–31.

[35] A. Bruderlin and T. W. Calvert, "Goal-directed, dynamic animation of human walking," ACM SIGGRAPH Computer Graphics, vol. 23, no. 3, 1989, pp. 233–242.

[36] I. Mlakar and M. Rojc, "Towards ECAs animation of expressive complex behaviour," in Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues. Springer, 2011, pp. 185–198.

[37] M. Soliman and C. Guetl, "Implementing intelligent pedagogical agents in virtual worlds: Tutoring natural science experiments in OpenWonderland," in Global Engineering Education Conference (EDUCON), 2013 IEEE. IEEE, 2013, pp. 782–789.

[38] J. Song, X.-w. Zheng, and G.-j. Zhang, "Method of generating intelligent group animation by fusing motion capture data," in Ubiquitous Computing Application and Wireless Sensor. Springer, 2015, pp. 553–560.

[39] M. H. Lee, "Intrinsic activity: from motor babbling to play," in Development and Learning (ICDL), 2011 IEEE International Conference on, vol. 2. IEEE, 2011, pp. 1–6.

[40] K. Gurney, N. Lepora, A. Shah, A. Koene, and P. Redgrave, "Action discovery and intrinsic motivation: a biologically constrained formalisation," in Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2013, pp. 151–181.

[41] W.-Y. Lo, C. Knaus, and M. Zwicker, "Learning motion controllers with adaptive depth perception," in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Eurographics Association, 2012, pp. 145–154.

[42] C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model," in ACM Siggraph Computer Graphics, vol. 21, no. 4. ACM, 1987, pp. 25–34.

[43] K. Erleben, J. Sporring, K. Henriksen, and H. Dohlmann, Physics-based animation. Charles River Media, Hingham, 2005.

[44] K. Perlin, "Real time responsive animation with personality," Visualization and Computer Graphics, IEEE Transactions on, vol. 1, no. 1, 1995, pp. 5–15.

[45] B. Kenwright, "Generating responsive life-like biped characters," in Proceedings of the third workshop on Procedural Content Generation in Games. ACM, 2012, p. 1.

[46] T. Trappenberg, Fundamentals of computational neuroscience. OUP Oxford, 2009.

[47] P. Dayan and L. Abbott, "Theoretical neuroscience: computational and mathematical modeling of neural systems," Journal of Cognitive Neuroscience, vol. 15, no. 1, 2003, pp. 154–155.

[48] A. V. Samsonovich, "Toward a unified catalog of implemented cognitive architectures." BICA, vol. 221, 2010, pp. 195–244.

[49] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr et al., "Simulation of networks of spiking neurons: a review of tools and strategies," Journal of computational neuroscience, vol. 23, no. 3, 2007, pp. 349–398.

[50] T. R. Armstrong, "Training for the production of memorized movement patterns," Ph.D. dissertation, The University of Michigan, 1970.

[51] M. H. Bickhard, "Autonomy, function, and representation," Communication and Cognition-Artificial Intelligence, vol. 17, no. 3-4, 2000, pp. 111–131.

[52] D. Vernon, G. Metta, and G. Sandini, "A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents," IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, 2007, p. 151.

[53] C. von Hofsten, On the development of perception and action. London: Sage, 2003.

[54] M. Sagar, P. Robertson, D. Bullivant, O. Efimov, K. Jawed, R. Kalarot, and T. Wu, "A visual computing framework for interactive neural system models of embodied cognition and face to face social learning," in Unconventional Computation and Natural Computation. Springer, 2015, pp. 71–88.