
Believable and Challenging Video Game AI

How Yan Jhong, Lim Chow Siang, Lau Yong Chern and Dr Ting Tin Tin

July 31, 2015

Faculty of Applied Sciences and Computing, Tunku Abdul Rahman University College, Kuala Lumpur, Malaysia

[email protected], [email protected],

[email protected], [email protected]

Abstract

One of the most challenging problems in video games is AI. A good AI can make a game enjoyable for players while at the same time providing an immersive experience, and AI programmers use a variety of techniques to accomplish this. This paper contains a categorization of recent research papers in which the researchers evaluated their AI mainly by its winning rate against the default built-in game AI, a categorization of believable AI, and finally a categorization of the technical problems encountered in creating believable and challenging AI. The paper aims to serve as a reference point for anyone interested in believable and challenging AI by providing a complete and structured literature review.

Keywords

Artificial Intelligence, Video Game, Believable and Challenging, Categorization, Technical Challenge

1 Introduction

AI (artificial intelligence) is a crucial component in most video games. It is one of the core ingredients that make a game immersive. It brings life to games, making players believe that they are facing a true, living person behind every AI-driven character. AI does a lot in a game: some AI plays the role of non-player characters (NPCs) to provide interaction with players, while other AI runs the matchmaking system that groups players together (Delalleau et al. 2012). As the market for digital entertainment products such as video games grows, users' expectations of these products rise. In the context of video games, users expect more complex AI (Felipe & Karlsson 2003, Pirovano et al. 2012). Many other factors affect an AI's ability to deliver an enjoyable experience to the player. Challenge is an important reason players play games (Poels et al. 2012). For a player to enjoy a game, the challenge must suit his or her skill level (Tremblay & Verbrugge 2013, Yeruva 2012). Typical bots are boring to play against because of their repetitive and predictable behaviour (Patel et al. 2012). In this paper, we are interested in new AI that can outperform the old default game AI.


However, a super-intelligent AI does not necessarily fulfill the role of making the game enjoyable. A smart, "optimal" AI might fulfill its goals quickly, but in doing so may leave the player without any chance to win. The AI has to entertain the players and audience; if the AI does not make the game a better experience, any intelligence it is said to have is irrelevant (Mozgovoy & Umarov 2011). An effective AI has to be believable to the point where the AI itself is believed to be an actual living being, or where the person controlling it is believed to be human (Shaker et al. 2013).

The most tedious part of developing AI is the implementation. Not only is the AI architecture complex, the architecture being developed also has to suit the hardware specification (Rabin 2015, Lucas et al. 2013). In this paper, we investigate the challenges behind creating an AI for a better game experience (Cui & Shi 2011, Dill 2011).

This paper is organized as follows: the methodology; a categorization of papers in which the researchers evaluated their AI mainly by winning rate against the default built-in game AI; a categorization of the technical challenges found in making believable and competitive video game AI; and a discussion of the results of existing papers under the categorization of AI techniques used to make challenging AI, before ending with a conclusion.

2 Categorization of AI technique used to outperform default game AI

The papers categorized in this section are those in which the authors evaluated their AI mainly by winning rate against the default built-in game AI. Papers such as (Guan et al. 2013, Young & Hawes 2012, McPartland & Gallagher 2011, Patel et al. 2011) compared the proposed AIs' performance among themselves; they are not listed here because the authors did not compare the suggested AI with the default AI. One could argue that AI performance should be evaluated by playing against real human players. There are only a few papers where the evaluation is based on the proposed AI playing against real players with a certain benchmark (Barata et al. 2011, Weber & Mateas 2008). This approach to evaluation is rare and difficult to accomplish: many proposed AIs are prone to crashes, only a few AIs can play an entire game, which normally consists of many phases, and it is difficult to obtain a large player sample for the AI to play against for evaluation purposes.

2.1 Evolutionary algorithm

Rodrigo de Freitas Pereira, Claudio Fabiano Motta Toledo, Marcio Kassouf Crocomo and Eduardo do Valle Simões developed an evolutionary algorithm (EA) for their AI in the real-time strategy (RTS) game Bos Wars (Pereira et al. 2012). The initial population was taken from the 4 best individuals of an RTS game they had created in previous work. Each individual is made up of genes, which represent a sequence of actions. The authors configured their individuals to have 100 actions and a mutation rate of 5%. The EA was tested against 4 default scripts of the game and achieved a win percentage of 72%.
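The evolutionary loop the authors describe can be sketched as follows. Only the genome length (100 actions) and the mutation rate (5%) follow the paper; the action names, fitness function and selection details are illustrative placeholders, not the authors' Bos Wars implementation.

```python
import random

# Illustrative action set; the authors' actual Bos Wars actions are not reproduced here.
ACTIONS = ["build_worker", "build_soldier", "gather", "attack", "defend"]
GENOME_LEN = 100      # each individual encodes 100 actions, as in the paper
MUTATION_RATE = 0.05  # 5% mutation rate, as in the paper

def random_individual():
    return [random.choice(ACTIONS) for _ in range(GENOME_LEN)]

def mutate(individual):
    """Replace each gene with a random action with probability MUTATION_RATE."""
    return [random.choice(ACTIONS) if random.random() < MUTATION_RATE else gene
            for gene in individual]

def crossover(a, b):
    """Single-point crossover of two action sequences."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve(population, fitness, generations=50):
    """Keep the 4 best individuals each generation and refill with their offspring."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:4]
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(len(population) - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)
```

In the paper, fitness would come from playing the encoded action sequence against a game script; a stand-in such as `lambda ind: ind.count("attack")` is enough to exercise the loop.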

2.2 Evolutionary Multi-Agent Potential Field

Thomas Willer Sandberg implemented an Evolutionary Multi-Agent Potential Field (EMAPF) based AI for small scale combat (SSC) scenarios in the RTS game StarCraft: Brood War. The author used an evolutionary algorithm on a Multi-Agent Potential Field to find an optimal or near-optimal strategy for destroying all enemy units on an SSC map in StarCraft: Brood War, surviving with as many units as possible and with as many hit points and shields left as possible (Sandberg 2011). To evaluate EMAPF, the author created 11 small scale combat scenarios. The results showed that the EMAPF-based AI won convincingly against the built-in AI, even in situations where it was outnumbered or at a disadvantage in hit points.
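A potential field assigns every map cell a value summed from attracting charges (enemy units to destroy) and repelling charges (threats or obstacles), and each unit greedily steps to the neighbouring cell with the highest value. A minimal sketch, with illustrative charge strengths and falloffs rather than Sandberg's evolved parameters:

```python
# Illustrative charge strengths; in EMAPF these are tuned by the evolutionary algorithm.
ATTRACT, REPEL = 10.0, 20.0

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def field_value(cell, enemies, obstacles):
    """Sum attractive charges from enemy units and repulsive charges from obstacles."""
    attraction = sum(ATTRACT / (1 + manhattan(cell, e)) for e in enemies)
    repulsion = sum(-REPEL / (1 + manhattan(cell, o)) for o in obstacles)
    return attraction + repulsion

def next_move(unit, enemies, obstacles):
    """Greedy hill climb: step to the neighbouring cell (or stay put)
    with the highest field value."""
    x, y = unit
    neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(neighbours, key=lambda c: field_value(c, enemies, obstacles))
```

With an enemy at (5, 0) and no obstacles, a unit at (0, 0) steps to (1, 0); the evolutionary part of EMAPF tunes the charge strengths and ranges instead of hand-picking them as here.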

2.3 Imitation Learning

In-Seok Oh, Ho-Chui Cho and Kyung-Joong Kim created an imitation learning AI for the RTS game StarCraft. The authors reported that imitation usually performs poorly because of the mismatch between human perception and the low-level sensory inputs available to game AI bots. Their method uses replay analysis software to extract the relevant events from each frame and to create an influence map using potential fields before the game is played (Oh 2014). The imitation learning AI had an average winning percentage of 73% across the 12 vs 12 and 24 vs 24 Dragoon battles against the default game AI. This is a very promising result, as bots can perform well just by imitating behaviour from expert human replays.

2.4 Multi Agent System

Alberto Uriarte Pérez created an AI which utilizes a multi-agent system for the RTS game StarCraft. The multi-agent system combines many AI techniques, such as steering behaviours, potential fields, decision trees, finite state machines, threat maps and the blackboard architecture (Pérez 2011). The AI achieved approximately a 95% win rate against Protoss, a 90% win rate against Terran and a 49% win rate against Zerg, the last due to bugs and crashes that the author was unable to fix.

2.5 Neural Network

Stelios Petrakis and Anastasios Tefas proposed using neural network training for weapon selection by AI in the FPS game Unreal Tournament. The neural network (NN) learns from the user's actions (input) during gameplay; after passing through several layers of processing, the output is created (Petrakis & Tefas 2010). The bots competed against the Unreal Tournament bots at different difficulty levels. The competition showed that the bot achieved higher performance than the average dummy bot, and furthermore its performance increased when facing more difficult enemies. Just by applying the weapon selection method in a bot which does nothing else other than moving randomly through the map, the bot's score showed an immediate major improvement. It is unclear how many games were played in the experiment. The authors used frag ratio (kill ratio) to measure the performance of their AI.

2.6 Q-learning

Stefan Wender and Ian Watson applied reinforcement learning (RL) algorithms in the commercial RTS game StarCraft: Brood War (SC:BW). Q-learning is an off-policy algorithm. (Fang & Ting 2010) explain that Q-learning computes the Q-value, a value that varies according to the actions and states. An AI agent using Q-learning was able to learn a strategy that beats the built-in game AI in the chosen small scale combat scenario approximately 95% of the time after 1000 episodes played and approximately 40% after 500 episodes played.
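The Q-value Fang & Ting refer to is updated with the standard one-step rule Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',·) − Q(s,a)]. A minimal tabular sketch; the states, actions and rewards are illustrative placeholders, not the SC:BW combat encoding:

```python
import random
from collections import defaultdict

class QLearner:
    """Tabular Q-learning: off-policy, the update bootstraps from the best next action."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy action selection."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

In the combat scenario, a state would encode unit positions and health, the actions would be move and attack orders, and the reward would reflect damage dealt and units kept alive.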


2.7 Real-time Neuroevolution of Augmenting Topologies

Iuhasz Gabriel, Viorel Negru and Daniela Zaharie created a multi-agent system (MAS) which utilizes real-time Neuroevolution of Augmenting Topologies (rtNEAT) to train their AI to fight against other AI. Neuroevolution is a technique that uses an evolutionary algorithm to create or train an artificial neural network (Gabriel et al. 2012). Overall results of the rtNEAT AI versus the built-in AI were positive: Range vs Range achieved a 77% winning percentage, Melee vs Melee 74%, Melee vs Range 65%, and Range vs Melee 89%.

2.8 Robust On-line Case-Based Planning-based architecture

Abdelrahman Elogeel, Andrey Kolobov, Matthew Alden and Ankur Teredesai presented concurrent plan augmentation (CPA), a technique that helps pick robust strategic and tactical plans. They implemented CPA in a Robust On-line Case-Based Planning-based architecture called ROLCBP and extensively tested it on the complex commercial RTS games StarCraft and Wargus. ROLCBP identifies promising strategies and tactical realizations for them from game logs without using a conventional planner (Elogeel et al. 2015). The experiment was conducted for 3 races on 3 different maps using 2 different strategies; in other words, for each race, 3 different maps were used, and for each map, 2 different strategies were deployed. The average results of Terran ROLCBP on the 3 maps against the 3 races (including against its own race) using the 2 strategies were 90%, 48% and 5%. Due to the poor result on the 3rd map, the authors used another strategy for their AI, which produced a win rate of 60%. ROLCBP's win rates against the built-in Wargus AI were also measured: the AI controlled the orcs and played 20 games on 3 different maps, and on all Wargus maps ROLCBP won over half of the games against the built-in AI.

2.9 SARSA

Aleksandar Micić, Davíð Arnarsson and Vignir Jónsson used the SARSA reinforcement learning algorithm on units called Dragoons and Zealots in the RTS game StarCraft (Micić et al. 2011). One experiment involved a fight between one RL Dragoon unit and one built-in AI Zealot unit; another involved 4 RL Dragoons fighting against 4 game AI Dragoons. The results showed that the RL Dragoon obtained a 72% winning percentage against a Zealot, whereas the default game AI Dragoon could never achieve a victory against a Zealot in the same scenario. In the 4 Dragoons vs 4 Dragoons fight, the RL agents could only achieve a 32% win percentage. The results showed good performance in small scale battles, but limitations appeared as complexity increased.

Stefan Wender and Ian Watson used reinforcement learning (RL) algorithms in StarCraft: Brood War (SC:BW). The Sarsa algorithm is an on-policy algorithm. The results showed that the SARSA AI was able to win approximately 90% of the time after 1000 episodes played, and had approximately a 25% win rate after 500 episodes played, when pitted against the default game AI (Wender & Watson 2012).
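The on-policy distinction is visible in the update itself: SARSA's target uses the action the agent actually takes next, where Q-learning would take the greedy maximum. A sketch of the update on a dict-based Q-table (the state and action encodings are illustrative):

```python
def sarsa_update(q, state, action, reward, next_state, next_action,
                 alpha=0.1, gamma=0.9):
    """One SARSA step: Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a)),
    where a' is the action actually chosen by the current policy."""
    old = q.get((state, action), 0.0)
    target = reward + gamma * q.get((next_state, next_action), 0.0)
    q[(state, action)] = old + alpha * (target - old)
    return q
```

Because the target follows the behaviour policy rather than the greedy one, SARSA tends to learn more conservative strategies than Q-learning in the same scenario.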

2.10 SARSA(λ)

Stefan Wender and Ian Watson applied reinforcement learning (RL) algorithms in StarCraft: Brood War (SC:BW). When eligibility traces are added to the Sarsa algorithm, it becomes the Sarsa(λ) algorithm (Gupta 2002). The results showed that the SARSA(λ) AI achieved approximately an 85% win rate after 1000 episodes played and approximately a 50% win rate after 500 episodes played against the default game AI (Wender & Watson 2012).

Table 1: Categorization of AI technique used to outperform default game AI
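Eligibility traces let each reward update not only the current state-action pair but also recently visited ones, with credit decaying by γλ per step. A sketch of one step with accumulating traces on dict-based tables (the decay constant λ=0.8 is illustrative):

```python
def sarsa_lambda_step(q, trace, state, action, reward, next_state, next_action,
                      alpha=0.1, gamma=0.9, lam=0.8):
    """One SARSA(lambda) step with accumulating eligibility traces."""
    delta = (reward + gamma * q.get((next_state, next_action), 0.0)
             - q.get((state, action), 0.0))
    trace[(state, action)] = trace.get((state, action), 0.0) + 1.0  # mark current pair
    for sa in list(trace):
        q[sa] = q.get(sa, 0.0) + alpha * delta * trace[sa]  # credit recent pairs too
        trace[sa] *= gamma * lam                            # decay eligibility
```

The extra bookkeeping is what lets sparse rewards (such as winning a skirmish) propagate back over the whole action sequence that led to them.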

2.11 Watkins's Q(λ)

Stefan Wender and Ian Watson applied reinforcement learning (RL) algorithms in StarCraft: Brood War (SC:BW). The Q(λ) algorithm is similar to the Q-learning algorithm except that it uses eligibility traces, and learning for an episode stops at the first non-greedy action taken. The results showed that the Q(λ) AI obtained approximately an 80% win rate after 1000 episodes played and approximately a 50% win rate after 500 episodes played against the built-in AI (Wender & Watson 2012).

3 Believable AI

AI is a core component of video games, whether it plays the role of the enemy, the player's allies, or the game world and its events. A well designed AI can enhance the gaming experience as much as good graphics or gameplay-related technical aspects (Charles 2003). However, a strong and effective AI does not always work in games: maximizing the AI's capability to reach its own goals does not necessarily make the game more enjoyable for players. Instead, the most important factors in making a game good and achieving the highest user satisfaction are fun and realism. As such, AIs that are deemed "fun" are expected to be believable, realistic and fun to interact with (Mozgovoy & Umarov 2010).

One of the core components of believable AI is adaptability. An adaptive AI is able to react to the player's actions and status, making the game easier for the player, and by doing so it acts life-like, giving the AI overall depth and providing immersion.

3.1 Believability in First Person Shooters (FPS)

3.1.1 Adaptive Companion AI

One AI designed to be realistic is the Adaptive Companion AI, designed by Tremblay and Verbrugge and modeled after the DDA (Dynamic Difficulty Adjustment) system found in Valve's FPS game Left 4 Dead. Like the DDA system, the Adaptive Companion AI uses a game intensity metric to adjust the game, making it enjoyable to the player. Game intensity measures how challenging the game might be to the player, taking into consideration details such as the number of enemies on the map, the player's lives and ammo, and so on. The AI uses three distinct behaviours while accompanying the player: cautious, support, and aggressive.

The Adaptive Companion AI switches between these behaviours by measuring the game intensity metric. If the metric is over a certain level, X, the AI-controlled companion picks the aggressive behaviour and aggressively engages the enemy in order to reduce game intensity. If the game intensity is at a moderate level, Y, where Y < X, the companion adopts the support behaviour. Low game intensity lets the AI assume that the player is in control of the game, and it uses the cautious behaviour, letting the player take on the majority of combat roles and decisions.
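The switching logic reduces to comparing the intensity metric against the two thresholds. In the sketch below, the metric weights and the values of X and Y are illustrative placeholders, not Tremblay and Verbrugge's actual numbers:

```python
def game_intensity(num_enemies, player_lives, player_ammo):
    """Toy intensity metric: more enemies and fewer resources raise intensity.
    The weights here are placeholders for the paper's actual metric."""
    return 10.0 * num_enemies - 5.0 * player_lives - 0.1 * player_ammo

X, Y = 50.0, 20.0  # illustrative thresholds, with Y < X

def companion_behaviour(intensity):
    if intensity >= X:
        return "aggressive"  # engage enemies to drive intensity back down
    if intensity >= Y:
        return "support"
    return "cautious"        # player is in control; stay out of the way
```

For example, eight enemies against a player with one life and 50 rounds of ammo yields a high intensity, so the companion goes aggressive.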

To increase realism, the adaptive companion is also there to generate narrative value. This makes the AI more believable and realistic, as opposed to a companion with predetermined dialogue. Several in-game metrics were developed to relate to the player's in-game experiences and performance, as well as the companion's own performance, in order to create a meaningful and helpful companion. The metrics are game intensity, personal space, and player performance.

The Adaptive Companion AI was tested in an experiment that evaluated the behaviours of the adaptive companion and compared it with a basic non-controllable companion. A game prototype was designed with different scenarios commonly found in FPS games. Each scenario was run 20 times for each type of companion, using a basic AI player to fulfill the role of the human participant. The time taken to complete the level, the time the companion spent in front of the player during combat, and the number of times the companion accidentally fired at the player and vice versa were measured in these scenarios.

Cul-de-Sac Scenario - The map for this scenario has narrow, dead-end paths with a reward at the end. The player's objective is to enter the maze, retrieve the reward, and exit. Unlike the basic AI companion, the adaptive companion knows that it should move out of the way when the player approaches and does not have to be pushed aside. It also spends less time in front of the player's line of sight and less time in the player's personal space. The base companion constantly collides with the player, but the adaptive companion makes an effort to avoid that.

Pillar Scenario - In this scenario, the map is a typical obstacle-rich combat zone used to test the companion AI. The map has pillars, which can be used as cover by the player, the companion and the opponents. As with the cul-de-sac scenario, the time taken to complete the scenario is shorter with the adaptive companion. As mentioned above, the adaptive companion is able to adjust and choose the most suitable behaviour according to the game's metrics.

Level Scenario - The map of the level scenario is a mix of cul-de-sac/maze and combat situations, with a boss fight at the end. This level is designed to test the companion in a realistic game environment. Here the adaptive companion is seen to be more efficient, successfully reducing the average level completion time.

3.1.2 ICARUS AI

Another believable AI used for FPS games is ICARUS, developed by Dongkyu Choi, Tolga Konik, Negin Nejati, Chunki Park, and Pat Langley (Choi et al. 2007). The experiment testbed for this AI is Urban Combat, a mod of Quake 3 Arena. ICARUS shares assumptions with earlier AI architectures such as Soar (Laird & van Lent 2001) and ACT-R (Anderson 1993). Unlike those two, ICARUS distinguishes between short-term and long-term memory, and it uses symbolic lists along with these memories. ICARUS's learning is also incremental, and is interleaved with performance.

ICARUS perceives the environment and makes inferences about its situation. If a path through the hierarchy of the architecture that matches its beliefs and objectives is found, the agent progresses forward, thus changing the environment and continuing to the next cycle while learning and memorizing new skills that produce more efficient results when solving future problems. This concept of exploring, learning and memorizing new skills makes the AI appear human.

While exploring, ICARUS uses two methods: one is exploring in physical space, the other is exploring in action space. When the agent finds itself in a situation where obstacles or objects block its path through a gateway or to a goal, the system performs exploration in action space in order to find the activity that will overcome the obstacle.

For learning, ICARUS uses background knowledge to explain and construct new skills from traces of previous successful problem solving.


In an experiment, the AI was given multiple scenarios in which it had to reach an area to defuse a bomb. Each scenario's map had different obstacle types and configurations and a different bomb location. As the agent progressed through each scenario, it learned cumulatively. In the initial scenarios the agent relied mostly on knowledge-driven exploration, as the map was new to it, but it still demonstrated goal-directed behaviour: for example, if the agent saw the bomb, it tried to overcome the obstacles blocking it in order to reach the goal, and as it overcame obstacles, it learned how to do so. In later runs, the agent used skills acquired in previous scenarios to find paths and overcome obstacles with little or no searching; for example, when it encountered an air vent on the ground, it would automatically crouch and crawl through it with little to no hesitation. The more it learned, the easier it was for the agent to find the bomb and defuse it without much searching or exploring (Choi et al. 2007).

The agent was able to smoothly combine several of its exploration strategies with learned knowledge, and it adapted well to unexpected changes. For example, when it knew the location of the bomb but started in an unknown location, the agent would explore its surroundings until it found a familiar location and then use a planned route memorized from previous runs. One weakness of ICARUS is that the agent retains perfect memories of all visited regions.

3.2 Believability in Sports Games

While AI in most games focuses on robustness, reliability and effectiveness, in sports games believability is the key factor for a successful AI. Sports games are the most life-like, and as such should provide suspension of disbelief to players. A believable agent possesses human-like characteristics, such as the capability to learn, to show doubt and hesitation, to make mistakes, and to adjust its own strategy in response to the other player's moves. Furthermore, a believable agent should be able to exhibit its own unique behavioural style.

An AI system with high believability requirements in a sports game is a computer-controlled player or team. An agent in such a game usually exhibits its own distinct play style in addition to pure game-playing skills. In team-based games, entire teams are shown to have their own distinctive "team style", different and distinct from other agents. Such an agent is a fun and believable opponent to contend with. Additionally, if a virtual computer-controlled player has a real prototype (modeled after an athlete or sportsman in real life), the agent should be able to replicate his or her style (Mozgovoy & Umarov 2011).

3.2.1 Boxing AI

In a 3D boxing video game environment, an AI designed by Maxim Mozgovoy and Iskander Umarov (Mozgovoy & Umarov 2010) was created to experiment with:

1. Ensuring believable behaviour through learning by observation and case-based reasoning

2. Believability evaluation with Turing tests and automated schemes

3. Optimizing AI behaviour in terms of effectiveness by means of reinforcement learning

The implemented AI system for controlling virtual boxers was designed with the following goals in mind:

• complex, non-repetitive behaviour of AI agents


• distinct personalities of AI boxers that exhibit a variety of skill levels and playing styles

• the capability for a game designer to design, edit and adjust the AI's behaviour

• "Train your own boxer" mode as a user end feature

The project includes two stages: implementing a learning-by-observation based AI system capable of reproducing human behaviour, where several AI-controlled boxers were trained and their believability verified by means of Turing-based and automated believability tests; and employing reinforcement learning to optimize the agents' effectiveness by encouraging repetition of the most successful behavioural patterns. Using a Turing test-based technique, a number of one-minute game fragments between random opponents were recorded to test believability. The opponent could be a human player, a boxer controlled by the experimental AI, or a boxer controlled by the boxing engine's built-in AI system, based on a simple handcrafted finite state machine.

For the Turing test, six people with various levels of gaming experience were asked to watch the game fragments and to guess whether the opponents were AI or player controlled. Each observer was asked to analyze nine video clips. In 80.6% of cases, the action graph AI was identified as human, while the built-in AI was able to deceive watchers in only 14% of cases. Surprisingly, human players were identified as human in only 77.6% of cases, which is less than the action graph AI.

During these experiments, it was discovered that behaviour optimization through reinforcement learning does not reduce the agent's believability (Umarov & Mozgovoy 2012): a believable agent remains so even after adjustments made by reinforcement learning. However, the style of such an agent resembles its human trainer to a lesser extent. In the authors' case, reinforcement learning introduces no new behavioural patterns, as the boxer still uses the same human-supplied action sequences; only the actions' weights, used by the weighted random choice procedure, are adjusted.

3.2.2 Team Sports Game AI

For team sports games, an AI developed by Mozgovoy & Umarov was tested (Mozgovoy & Umarov 2011). The testbed for the AI was a simplified five-a-side 2D soccer simulator. The AI has two modes of operation: a learning mode and an acting mode.

In the learning mode, a behaviour-capture agent (BC Agent) observes the actions of a human expert player, its basic "trainer". Every time the player does something, including "doing nothing", the BC Agent stores the executed action, along with a representation of the game world's current state, in its knowledgebase. For the 2D soccer simulator testbed, the actions of each player on a team are restricted to moving in eight possible directions, passing, and kicking towards the goal line.

The acting mode of the BC Agent is much simpler: the agent merely uses its knowledgebase to retrieve the most suitable action for a given game state upon the AI's request.
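In code, the two modes amount to appending (state, action) pairs and nearest-neighbour retrieval over them. The numeric feature-tuple state encoding below is a hypothetical stand-in for the simulator's actual game-state representation:

```python
def state_distance(a, b):
    """Squared distance between two states encoded as numeric feature tuples
    (e.g. ball position and distances to opponents; a hypothetical encoding)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class BCAgent:
    def __init__(self):
        self.knowledgebase = []  # (observed_state, action) pairs

    def learn(self, state, action):
        """Learning mode: store what the human trainer did in this state."""
        self.knowledgebase.append((state, action))

    def act(self, state):
        """Acting mode: return the action recorded for the most similar stored state."""
        _, action = min(self.knowledgebase,
                        key=lambda pair: state_distance(pair[0], state))
        return action
```

An agent trained on two observations retrieves the action of whichever stored state is closer to the query state, which is why more training frames improve its play.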

Testing of the AI began with the "player with the ball" scenario. A human expert player, controlling the in-game player with the ball, played 8000 frames of game time (around 400 seconds). Two different BC Agents (BC1 and BC2) were then trained on 2000, 4000, 6000 and 8000-frame samples of the human player's data.

The performance of BC1 and BC2 was then compared with the human expert player's. Performance was judged by the average number of goals scored per 2000 frames of play, the average number of the player's passes intercepted by the opposing team per 2000 frames of play, and the average percentage of ball possession by the player's team per 2000 frames. The remaining team members, as well as the opponent's players, were controlled by a simple rule-based AI system.

In the goals-scored experiment, the human expert started off with more goals at 2000 frames, but both BC Agents were able to catch up and score 4 goals by the 8000th frame, the same number as the human expert at that point. Next, for passes intercepted, both BC Agents initially had their passes intercepted more than 10 times (in contrast with the player's 10), but by the end of the session they had averaged out to 10 as well, the same as the player. Lastly, in the ball possession experiment, the human expert had 30% possession, which increased temporarily to 40% but settled back to 30% at the end. In contrast, both BC Agents started off with only 5% possession; BC2 soon improved to 40%, while BC1 only managed to reach 25% (Umarov & Mozgovoy 2012). This shows that there is variation between the agents.

3.3 Believability in Real-Time Strategy (RTS) Games

Real-time strategy games are difficult for human players to master, since the genre requires players to perform hundreds of actions per minute in a partial-information environment. Multitasking is needed for in-game activities such as building an army, researching upgrades, maintaining an economy and working towards high-level goals. For AI in an RTS game to be believable, it has to be human-like in the sense that it is not allowed to use hidden game state information, such as knowing where non-visible units are, and the set of actions provided to it should be the same as those provided through the game's interface (Weber et al. 2011).

The believable AI tested for the RTS genre is the EISBot architecture. It is an extension of McCoy and Mateas's integrated agent framework (McCoy & Mateas 2008), which partitions the behaviours in a planning agent into managers that specialize in specific parts of gameplay. The core of the agent is a reactive planner that does three important things: interface with the game environment, manage active goals, and handle action execution.

Like McCoy and Mateas's agent, the EISBot is composed of managers, implemented as a collection of behaviours in the system. Several reactive-planning idioms were used to break gameplay down into subproblems that could be analyzed individually, while maintaining the ability to handle other concerns such as resource contention between managers.

EISBot consists of the following managers: the strategy manager (responsible for strategy selection and attack-timing competencies), the income manager (handles worker units, resource collection, and expanding bases), the construction manager (responsible for managing requests to build structures), the tactics manager (performs combat tasks and micromanagement behaviours), and the recon manager (a small manager that implements scouting behaviour).

Each of the managers implements an interface for interacting with other components. The interfaces define how working memory elements are posted to and consumed from working memory. This enables a modular agent design in which new implementations of managers can be swapped into the agent. It also supports the decomposition of specific tasks, like constructing a building: in EISBot, the decision process for selecting which structures to construct is decoupled from the construction process itself. The decoupling also enables specialized behavior to be used within a manager, such as the movement behaviors in the tactics manager.
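
The manager-plus-working-memory design described above can be illustrated with a minimal sketch. This is not the actual EISBot implementation (EISBot uses the ABL reactive planner); the class and element names below are invented for illustration. The point is that managers never call each other directly, only post to and consume from a shared memory, so any manager can be swapped out independently.

```python
class WorkingMemory:
    """Shared store: managers communicate only by posting/consuming typed elements."""
    def __init__(self):
        self.elements = []

    def post(self, kind, data):
        self.elements.append((kind, data))

    def consume(self, kind):
        # Remove and return all elements of the given kind.
        matches = [data for k, data in self.elements if k == kind]
        self.elements = [e for e in self.elements if e[0] != kind]
        return matches

class StrategyManager:
    def update(self, wm):
        # Decides *what* to build; the construction details are decoupled.
        wm.post("build-request", {"structure": "barracks"})

class ConstructionManager:
    def update(self, wm):
        # Decides *how* to fulfil build requests posted by other managers.
        return [f"building {req['structure']}" for req in wm.consume("build-request")]

wm = WorkingMemory()
StrategyManager().update(wm)
actions = ConstructionManager().update(wm)
print(actions)  # ['building barracks']
```

Because the only coupling is the `"build-request"` element format, a different construction manager (say, one with smarter placement) could be dropped in without touching the strategy manager.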

3.3.1 Testing

The performance of the EISBot architecture was determined by competing it against human players at ICCup (International Cyber Cup). The scoring system of the competition works as follows: players start with a score of 1000, which increases after a win and decreases after a loss. Players have an average score of 1205 with a standard deviation of 1660. EISBot competed in 250 games against these human players.

EISBot had an average win rate of 32% against human opponents and achieved a score of 1063 points, which outranks 33% of players on the ladder. It also had a win rate of 78% against the built-in AI of StarCraft.

3.4 Believability in Role-Playing Games (RPG)

To role-play means to behave according to a fictional character's role, enacting it through speech and actions within the game rules (Bernacchia 2014). RPG games take place in a fictional world where the player controls one or more characters at a time. Characters that players cannot control are known as NPCs.

There are several types of CRPGs (computer RPGs), and while some are structured very differently, all of them have a combat phase. Sometimes the transition into the combat phase is not visible or obvious, while other games have a combat mode that is enabled when encountering an enemy. While an NPC can be scripted beforehand in terms of dialogue (such as what it will say at a point on the map), its position and its relation to the main character, the combat phase for the NPC is entirely different: its behavior in combat cannot be prepared in advance, but must be generated on the spot by an AI (Bernacchia 2014).

The AI to be tested is based on the set of believability requirements set out by the Oz group for an AI in a turn-based RPG. First, the character must have personality, which makes the character unique; personality lies in the details and specific expressions of a given character. Next is emotion: believable characters should have emotional reactions and express them based on their personalities. Then there is self-motivation: characters should react to events and have their own internal drives and desires. Another requirement is change, which states that as time passes, a character changes through its experiences. Social relationships require that believable characters are able to interact with other characters. Next, a character must appear to be situated in an environment and act dynamically in response to the situation. Lastly, a character should have individuality: a character is an independent entity that is unique in both thinking and character.

3.4.1 System

Characters in an RPG need to maintain their own mental state, knowledge and goals while they keep interacting with the world and other units, limited to their local physical interfaces. The components of the system include beliefs, which are the character's knowledge and serve as its long-term memory. No logic programming is involved, unlike in most other BDI (belief-desire-intention) implementations. Beliefs can be internal (information present only inside the AI), external (data located outside, such as in the game engine), or dynamic (evaluated when accessed and implemented as functions). Another component is the goal, which is the state that the agent is trying to achieve. A plan is a course of actions designed to achieve a certain goal, and it is the building block of an agent's behavior. It contains specifications about which goals it can accomplish, along with the

necessary conditions. Plans can be put in an idle state while waiting for events or sub-goals to be achieved. Besides that, there are intentions, which are the character's actively pursued goals; multiple intentions can be executed at the same time. Last but not least, devices handle all input and output within the environment. An agent can have many devices, including sensors (used by the agent to intercept external events, such as hearing a sound within a radius) and actuators.

Another part of the AI's system is the BDI interpreter. When the turn of a character in a turn-based RPG begins, its associated agent is updated, repeating its step function until it does not have to do any more thinking for that turn. On each iteration, events accumulated in the event queue are processed, then an intention is selected and executed. All other computations are triggered by events, which are system events (generated by the AI or game engine), external events (generated by the agent's sensors when they detect events coming from the environment) and internal events (generated by belief updates and goal changes).
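
The interpreter loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the belief update and intention selection here are deliberately naive, and the class and field names are invented. It shows only the control flow of a turn: drain the event queue, then pick and run the highest-priority intention, repeating until nothing remains.

```python
from collections import deque

class BDIAgent:
    def __init__(self):
        self.events = deque()      # queued system/external/internal events
        self.beliefs = {}          # long-term memory (naive key/value store here)
        self.intentions = []       # list of (priority, step_fn) pairs

    def step(self):
        # 1. Process all accumulated events, updating beliefs.
        while self.events:
            kind, data = self.events.popleft()
            self.beliefs[kind] = data
        # 2. Select the highest-priority intention and execute one step of it.
        if not self.intentions:
            return False           # no more thinking to do this turn
        self.intentions.sort(key=lambda pair: -pair[0])
        _priority, step_fn = self.intentions.pop(0)
        step_fn(self)
        return True

    def run_turn(self):
        # Repeat the step function until the agent is done for this turn.
        while self.step():
            pass

agent = BDIAgent()
agent.events.append(("enemy-seen", {"pos": (3, 4)}))
agent.intentions.append((1, lambda a: a.beliefs.__setitem__("acted", True)))
agent.run_turn()
print(agent.beliefs)
```

A real implementation would also let executed plans post new internal events and sub-goals back onto the queue, which is what drives the event-triggered computation the paper describes.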

Next, each agent should have its own belief set. The information retained should not only relate to the current fight, but also to events and experiences from outside and beyond the combat phase. This lets the agent change, learn and adapt its behavior as time passes. The character has goals that promote long-term objectives, and aside from that, the priorities of different characters can be influenced by relationships and emotions. A set of plans can depend on character parameters (race, class, etc.) and personal history, and can shape combat tactics and personality. Plans are dynamic, as new plans can be learned from battle experience or possibly from gaining levels. When there are multiple applicable plans, a meta-plan is used, which is a plan to select a plan, and this can be exploited to express a way of thinking. Seeing and doing are done through sensors and actuators, which limit what the character can sense and do. If there are communication sensors, agents can exchange data and enable additional or new behaviors, such as conversing during combat by exchanging goals and plans. Intentions allow parallel or interleaved execution of plans, allowing the character to do multiple things at the same time.

To test the AI, a simple turn-based RPG was used, consisting of an isometric grid of square tiles on which the characters can move. Time is organized in turns where each character acts in a specific order (for example, a character with higher speed moves first). Since characters can only act during their turn, the AI is executed one instance at a time, and any event that happens during the game is stored in the event list for later processing.

A sample scenario: a character faces an enemy without having prior knowledge about it. In a second phase, the agent is able to evaluate the enemy through learning, shaping its combat tactics. In another case, across two phases, two different characters (with different personalities, goals, motivations, etc.) but identical parameters (class, level, equipment) are presented with the same battle situation. Both act differently based on their behaviours, much like how no two humans think alike.

Lastly, a group of characters cooperates to defeat an opponent via information sharing, orders and requests, both taking place through verbal communication visible to the player. This gives a sense of realism, in which the player believes that the characters have to communicate with each other verbally.

Table 2: Categorization of Believable AI in Different Genres

4 Technical Problem

4.1 Pathfinding

Implementing a computer game usually involves moving agents that find paths from one place to another through a (potentially complex) world space. Pathfinding in computer games has been researched for many years, and it is probably the hottest and most frustrating game AI challenge in the industry.

Fortunately, efficient solutions to the basic form of this problem have existed since 1959 (Dijkstra 1959) and have improved continuously; the A* algorithm (Cui & Shi 2011) is considered the optimal solution for pathfinding. Furthermore, the navigation mesh solves the problems of the waypoint graph, which is hard to configure and makes dynamic obstacle avoidance very hard to achieve (Cui & Shi 2012). Still, new challenges keep appearing as computer games become more and more complex. Although the navigation mesh is an almost ubiquitous solution now, it becomes a problem when the environment can be changed dynamically during gameplay in unpredictable ways. Solutions do exist but are not well understood or publicized, and are challenging to implement (Van Toll et al. 2012). Even a static navigation mesh cannot ensure the shortest "real" path, since it searches an abstract representation that is fundamentally an approximation of space.
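
To make the baseline concrete, here is a minimal A* sketch on a 2D grid (cells are `(row, col)`, `0` walkable, `1` blocked, with a Manhattan-distance heuristic). Real games search a navigation mesh or waypoint graph rather than a raw grid, but the algorithm is the same.

```python
import heapq

def astar(grid, start, goal):
    """Return the shortest path from start to goal as a list of cells, or None."""
    def h(a, b):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        _f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nx, ny), goal), ng, (nx, ny), path + [(nx, ny)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Note that "optimal" here means optimal with respect to the graph being searched; as the text goes on to explain, when that graph only approximates the world, the returned path may not be the shortest real-world route.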

On that note, most pathfinding techniques are only concerned with finding the shortest or quickest route from point to point. What if agents want to find a path that minimizes their visibility to enemies? What if an agent has many different ways to move around, such as jumping, swimming and flying? How can all these options be considered effectively?

When you play a game with AI characters that can move freely in the game world, look for AI that favors the "shortest" path over a more genuinely optimal or realistic path. You will find that it often loses all credibility, and you will spot pathological cases such as agents stuck against a wall or running into each other. There are plenty of opportunities for improvement whenever you watch game AI agents navigate their surroundings.

Finding the shortest path is very close to a solved problem (assuming the environment is simple enough or amenable to an accurate search heuristic), but finding truly great paths (paths that suit the conditions of the environment instead of just being shortest) remains an area of ongoing research and exploration (Kapadia et al. 2013).

4.2 Conversations

Games that offer the opportunity for conversation with AI characters have traditionally faced a series of nasty obstacles. Usually, game conversations are purely scripted, meaning the player may watch a non-interactive "cutscene" of the conversation or, at best, one powered by simple dialogue trees. This problem is due to the difficulty of making genuine conversational interaction possible with an AI agent.

Dialogue trees remain a popular solution but suffer from a geometric increase in the amount of necessary content as the degree of branching goes up (Blackman 2013). They do provide more possible conversations to explore, and conversations that last longer, but more text must be written and perhaps more audio recorded to cover all the options. In the end, most dialogue trees stay quite limited and often seem artificially constrained to the player.
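
The geometric content cost is easy to demonstrate with a toy dialogue-tree structure (the node layout below is a generic illustration, not taken from any particular engine): a complete tree of branching factor b and depth d needs 1 + b + b² + … + b^d authored nodes.

```python
class DialogueNode:
    def __init__(self, text, options=None):
        self.text = text              # an authored line (and perhaps recorded audio)
        self.options = options or []  # (player_choice, next_node) pairs

def count_nodes(node):
    """Total authored nodes reachable from this node."""
    return 1 + sum(count_nodes(child) for _, child in node.options)

def full_tree(branching, depth):
    """Build a complete tree where every node offers `branching` choices."""
    if depth == 0:
        return DialogueNode("...leaf line...")
    return DialogueNode("...line...",
                        [(f"choice {i}", full_tree(branching, depth - 1))
                         for i in range(branching)])

for b in (2, 3, 4):
    print(b, count_nodes(full_tree(b, 4)))
# 2 31
# 3 121
# 4 341
```

Going from two to four choices per line multiplies the writing (and voice-recording) workload by more than ten at only four levels deep, which is why most shipped dialogue trees feel artificially constrained.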

To achieve better conversational interactions with AI, researchers have started exploring natural language processing (NLP) territory (Horswill 2014), since with dialogue trees it eventually becomes impractical to author and record all the possible dialogue. The extreme option is to parse the player's input on the fly and generate believable responses back dynamically.

This solution sounds fascinating, but natural language processing is widely regarded as a highly difficult AI problem. While great breakthroughs have been made in the area over the past decade, the natural language field still has many challenges to overcome. Intuitively, it seems like there must be some intermediate solution between canned dialogue and full natural language conversation. Some approaches rely on a large corpus of knowledge to operate, but their dependence on real-world data such as Internet resources makes them useless for an AI that must fit the world and setting of a game, such as a fantasy world.

4.3 Dynamic Storylines

As the cost of developing high-quality games keeps increasing along with players' expectations of game content, careful balancing against a finite development budget is required, even though that budget can grow very large. It is not reasonable to sell a game at a very high price to make a profit and recover the development cost, as most players cannot afford such a price.

One way to solve this problem is through replayability: if a title can be replayed several times without making players feel bored, the value to the player is increased while the cost of production remains constant. Games like chess are infinitely replayable; conversely, a linear, heavily narrative, "cinematic" experience might only be worth playing once. But what if we could make a game that offers rich storytelling and can also be replayed multiple times?

For things to play out differently, there are many challenges in building a highly branching narrative with different outcomes. At some point, the problem closely resembles a more generalized version of the dialogue tree issue. This is where dynamic storytelling comes in: we use AI to take part in the storytelling instead of hand-authoring it manually. Still, using game AI for storytelling remains an area of ongoing research (Riedl et al. 2011, Cooper et al. 2010).

4.4 Player Modelling

Game developers often lament that they cannot get at the true intention of the player. An enormous amount of effort goes into developing user experiences and researching how to streamline them. Most game studios conduct usability testing by inviting outside players to play the game in development and observing these players. This gives the developer insight into where the game's user experience shines and where it needs improvement.

Unfortunately, the feedback from usability testing is only available during the development cycle (Olsen et al. 2011). Once the game ships, developers can no longer get such precise feedback. It is not viable to put eye-tracking hardware on each player or rig them up with electrodes. Game developers have a limited toolset for understanding how players think and how they are likely to behave.

Recently, one increasingly popular technique is to harvest statistics from thousands or millions of players, collected via the Internet, and then search through them for interesting revelations. Ultimately, the dream behind collecting all these data is to build a player model (Yannakakis et al. 2013): a statistical description of how players are likely to behave, often dividing them into major groupings based on their play style. It becomes possible not just to study player actions after the fact, but also to predict future actions.
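
As a toy illustration of grouping players by play style, the sketch below runs a tiny k-means-style refinement over made-up telemetry features. The feature names, player IDs, and style labels are all invented for the example; a production player model would use far richer features and a proper clustering library.

```python
def mean(vectors):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def nearest(point, centroids):
    """Label of the centroid closest (squared Euclidean) to point."""
    return min(centroids,
               key=lambda c: sum((p - q) ** 2 for p, q in zip(point, centroids[c])))

# (aggression, exploration) harvested from telemetry, per player (invented data)
players = {"p1": (0.9, 0.1), "p2": (0.8, 0.2), "p3": (0.1, 0.9), "p4": (0.2, 0.8)}
centroids = {"fighter": (1.0, 0.0), "explorer": (0.0, 1.0)}

for _ in range(5):  # a few k-means refinement iterations
    groups = {label: [] for label in centroids}
    for stats in players.values():
        groups[nearest(stats, centroids)].append(stats)
    centroids = {label: mean(pts) for label, pts in groups.items() if pts}

# The resulting model assigns each player a play-style group; a new player's
# early statistics can then be matched against the centroids to predict behavior.
model = {name: nearest(stats, centroids) for name, stats in players.items()}
print(model)
# {'p1': 'fighter', 'p2': 'fighter', 'p3': 'explorer', 'p4': 'explorer'}
```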

Ideally, player modelling should be able to customize a game as it is being played, but there are still many challenges to overcome, such as how to efficiently capture and process the data.

4.5 Scale

Games are quite different from many other fields of AI research in that they are constrained fairly heavily in terms of computational resources. A game might need to be playable on a mobile phone. Even on more capable hardware, it is not unusual for AI to share resources with graphics and physics. In contrast, traditional AI applications may have an entire machine, or a network of machines, dedicated to them. Game AI has only limited resources with which to look intelligent (Kyaw 2013).

Another significant challenge for games is that they are usually pushing the performance of the host machine to the limit. Bad programming techniques (e.g., memory fragmentation, frequent cache misses) can easily add up to prohibitive performance problems. Researching the time and memory requirements of the various solutions is therefore crucial to selecting appropriate technical strategies.

Individual AI agents are becoming increasingly complex. Decision-making techniques are often quite efficient in isolation, but the same is not always true for perception, environmental awareness, dynamic state-space analysis, searches, and so on. Capturing enough data to make an informed decision, or executing that decision in a complex environment, can be costly.
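
One common mitigation for this cost, sketched below under invented names (the `AIScheduler` class is hypothetical, not from any cited system), is time-slicing: only a fixed budget of agents runs its expensive perception and planning each frame, while the rest coast on their previous decisions, bounding the per-frame cost regardless of population size.

```python
class AIScheduler:
    """Round-robin scheduler: at most `per_frame` agents think per tick."""
    def __init__(self, agents, per_frame):
        self.agents = agents
        self.per_frame = per_frame   # budget of expensive updates per frame
        self.cursor = 0              # next agent due for a full update

    def tick(self):
        updated = []
        for _ in range(min(self.per_frame, len(self.agents))):
            agent = self.agents[self.cursor]
            agent["thought"] = True  # stand-in for expensive perception/planning
            updated.append(agent["id"])
            self.cursor = (self.cursor + 1) % len(self.agents)
        return updated

agents = [{"id": i} for i in range(10)]
sched = AIScheduler(agents, per_frame=3)
print(sched.tick())  # [0, 1, 2]
print(sched.tick())  # [3, 4, 5]
```

With 10 agents and a budget of 3, every agent still gets a full update roughly every four frames, but the frame cost stays constant as the agent count grows.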

Creating a small number of highly believable agents is challenging, but it is possible with modern game AI techniques. However, many games are growing in scale, with bigger and bigger environments, and some call for huge numbers of agents interacting at the same time. As the number of AI agents grows, game developers need to start worrying about performance and memory usage.

Table 3: Categorization of Technical Problem

One might argue that these challenges will eventually disappear as computing hardware continues to advance. Unfortunately, CPU speed improvements have slowed, and the effective use of massively parallel architectures remains difficult, which weakens this hope substantially (Samuel H. Fuller, Lynette I. Millett 2011). Worse, history has shown that players will demand richer game experiences to match more powerful hardware. Environments have become so much more detailed and complex that the increased cost of an individual check exceeds the pace of Moore's law.

With these sorts of trade-offs facing developers on a routine basis, it is likely that scalewill continue to be a source of interesting problems for quite some time to come.

5 Conclusion

Many researchers used StarCraft as a testbed for their AI. The main reason is that there exists an Application Programming Interface which allows an AI to play the game as if it were a player. All authors in Table 1 claimed that their AI outperformed the default game AI based on win rate with statistical proof, except for the paper in section 2.5, whose authors stated that their AI outperformed the built-in AI and showed the kill ratio. Many papers evaluated their AI in a non-standardized way, and many of these experiments are of poor quality. Some systems were developed to solve a specific sub-problem and not to play the entire game (Robertson & Watson 2014). None of the papers categorized in Table 1 set a winning-rate benchmark, such as a binomial test at the 4% significance level with a 70% winning probability. There is no benchmark set by the authors, as most of them took defeating the default game AI as their baseline, even though the default scripted AI can be defeated easily by players. Some of the papers that solved sub-problems cannot be scaled to the complete game due to constraints such as time and resources. In addition, some papers only tested specific situations, e.g., specific units pitted against specific units. Their results cannot be scaled to the actual game, as the probability of the same exact situation occurring is too small.

Next, believability is very hard to measure. However, AIs that are designed to be "believable" usually follow a trend of appearing human-like. Believable AIs have been developed for a variety of games of different genres, each with the idea of making gaming a better experience for players. As seen above, a variety of AI algorithms were developed for different genres. However, these AIs are still incomplete and have many flaws. For example, the ICARUS AI learns the layout of a map at an inhuman speed, making it not viable for large maps as it breaks the suspension of disbelief, and EISBot for the RTS genre (specifically StarCraft) can only perform as well as a new or average player. Aside from that, at this time it is hard to emulate a pro player in an AI without breaking the suspension of disbelief. For simple games, however, it is often easier to make a believable AI, as seen in the boxing game. The fact remains that there is still a lot to be researched in order to make an AI truly believable.

In terms of technical challenges, although many challenges still await game developers, the problems will be solved one by one as time passes, and new ideas will emerge to solve them. At the moment, we can see work on DirectX 12 and OpenGL 5.0 attempting to reduce overhead and improve performance, which will open more room to implement more sophisticated AI.

This paper attempted to raise the corresponding discussion and to serve as a point of reference for any interested researcher, academic or student, by providing an overview of research done by other authors at a glance. By presenting a collected volume of literature, it can be used for future literature searches based on the existing research approaches.

6 References

Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Erlbaum.

Hayes-Roth, B. (1985), ‘A blackboard architecture for control’, Artificial Intelligence Journal 26, 251–321.

Barata, A., Santos, P. A. & Prada, R. (2011), ‘AI for MMO Strategy Games’, Artificial Intelligence (abbreviated CivIV).

Bernacchia, M. (2014), ‘AI platform for supporting believable combat in role-playing games’,pp. 139–144.

Blackman, S. (2013), Beginning 3D Game Development with Unity 4.

Charles, D. (2003), ‘Enhancing Gameplay: Challenges for Artificial Intelligence in Digital Games’,Proceedings of the 2003 Digital Games Research Association Conference pp. 208–219.

Choi, D., Konik, T., Nejati, N., Park, C. & Langley, P. (2007), ‘A Believable Agent for First-PersonShooter Games’, Artificial Intelligence pp. 71–73.URL: http://www.aaai.org/Papers/AIIDE/2007/AIIDE07-013.pdf

Cooper, S., Rhalibi, A. E., Merabti, M. & Price, M. (2010), ‘Dynamic Interactive Storytelling for Computer Games Using AI Techniques’.URL: http://java.cms.livjm.ac.uk/homura/dist/docs/Paper-GDTW2008-DISforCGusingAI.pdf, http://www.cms.livjm.ac.uk/pgnet2010/MakeCD/Papers/2010012.pdf

Coy, J. M. C. & Mateas, M. (2008), ‘An Integrated Agent for Playing Real-Time Strategy Games’,Artificial Intelligence pp. 1313–1318.URL: http://www.aaai.org/Papers/AAAI/2008/AAAI08-208.pdf

Cui, X. & Shi, H. (2011), ‘A*-based pathfinding in modern computer games’, International Journalof Computer Science and . . . 11(1), 125–130.URL: http://paper.ijcsns.org/07_book/201101/20110119.pdf

Cui, X. & Shi, H. (2012), ‘An Overview of Pathfinding in Navigation Mesh’, 12(12), 48–51.

Delalleau, O., Contal, E., Thibodeau-Laufer, E., Ferrari, R. C., Bengio, Y. & Zhang, F. (2012),‘Beyond skill rating: Advanced matchmaking in ghost recon online’, IEEE Transactions onComputational Intelligence and AI in Games 4(3), 167–177.

Dijkstra, E. W. (1959), ‘A note on two problems in connection with graphs’, Numerische Mathe-matik 1(1), 269–271.

Dill, K. (2011), ‘A Game AI Approach to Autonomous Control of Virtual Characters A Game AIApproach to Autonomous Control of Virtual Characters’, (11136), 1–11.

Elogeel, A., Kolobov, A. & Alden, M. (2015), ‘Selecting Robust Strategies in RTS Games viaConcurrent Plan Augmentation’, (Aamas), 4–8.

Fang, Y. P. & Ting, I. H. (2010), ‘Applying reinforcement learning for the ai in a tank-battle game’,Journal of Software 5(12), 1327–1333.

Felipe, B. & Karlsson, F. (2003), ‘Issues and Approaches in Artificial Intelligence MiddlewareDevelopment for Digital Games and Entertainment Products’.

Gabriel, I., Negru, V. & Zaharie, D. (2012), ‘Neuroevolution based multi-agent system for micromanagement in real-time strategy games’, Proceedings of the Fifth Balkan Conference in Informatics - BCI ’12 (1), 32.URL: http://dl.acm.org/citation.cfm?doid=2371316.2371324

Guan, T. T., Nan, Y. Y., On, C. K., Teo, J. & Alfred, R. (2013), ‘Automated Evaluation for AIControllers in Tower Defense Game Using Genetic Algorithm’, pp. 135–146.

Gupta, K. M. (2002), ‘Performance Comparison of Sarsa (λ) and Watkin’s Q(λ) Algorithms’,pp. 1–8.

Horswill, I. (2014), ‘Architectural Issues for Compositional Dialog in Games’, pp. 15–17.

Kapadia, M., Ninomiya, K., Shoulson, A., Garcia, F. & Badler, N. (2013), ‘Constraint-Aware Navigation in Dynamic Environments’, Proceedings of Motion on Games - MIG ’13 pp. 111–120.URL: http://dl.acm.org/citation.cfm?doid=2522628.2522654

Kyaw, A. (2013), Unity 4. x Game AI Programming.

Laird, J. & van Lent, M. (2001), ‘Human-level AI’s killer application: Interactive computer games’,AI Magazine 22(2), 15–26.

Lucas, S. M., Mateas, M., Preuss, M., Spronck, P. & Togelius, J. (2013), DFU, Volume 6, Artificialand Computational Intelligence in Games, Complete Volume, Vol. 6.URL: http://drops.dagstuhl.de/opus/volltexte/2013/4351

McPartland, M. & Gallagher, M. (2011), ‘Reinforcement learning in first person shooter games’,IEEE Transactions on Computational Intelligence and AI in Games 3(1), 43–56.

Micić, A., Arnarsson, D. & Jónsson, V. (2011), ‘Developing game AI for the real-time strategy game StarCraft’, Project Report.URL: http://skemman.is/en/stream/get/1946/9143/22925/1/Final_Report.pdf

Mozgovoy, M. & Umarov, I. (2010), ‘Building a believable and effective agent for a 3D boxing simulation game’, Proceedings - 2010 3rd IEEE International Conference on Computer Science and Information Technology, ICCSIT 2010 3(2), 14–18.

Mozgovoy, M. & Umarov, I. (2011), ‘Believable Team Behavior: Towards Behavior Capture AI for the Game of Soccer’, 8th International Conference on Complex Systems pp. 1554–1564.

Oh, I.-s. (2014), ‘Imitation Learning for Combat System in RTS Games with Application to StarCraft’, pp. 1–2.

Olsen, T., Procci, K. & Bowers, C. (2011), ‘Serious games usability testing: How to ensure proper usability, playability, and effectiveness’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6770 LNCS(PART 2), 625–634.

Patel, P. G., Carver, N. & Rahimi, S. (2011), ‘Tuning computer gaming agents using Q-learning’,2011 Federated Conference on Computer Science and Information Systems (FedCSIS) pp. 581–588.

Patel, U. K., Patel, P., Hexmoor, H. & Carver, N. (2012), ‘Improving behavior of computer gamebots using fictitious play’, International Journal of Automation and Computing 9(2), 122–134.

Pereira, R. D. F., Simões, V., Paulo, S. a. & Carlos, S. a. (2012), ‘An Evolutionary Algorithm Approach for a Real Time Strategy Game’, XI Simpósio Brasileiro de Jogos e Entretenimento Digital - SBGames 2012 pp. 56–63.

Pérez, A. (2011), ‘Multi-Reactive Planning for Real-Time Strategy Games’.URL: http://ddd.uab.cat/pub/tesis/2001/tdx-0218102-103941/arp1de1.pdf, http://users.soe.ucsc.edu/bweber/pubs/Slides.pdf

Petrakis, S. & Tefas, A. (2010), ‘Neural networks training for weapon selection in first-person shooter games’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6354 LNCS(PART 3), 417–422.

Pirovano, M., Elettronica, I. & Milano, P. (2012), ‘The use of Fuzzy Logic for Artificial Intelligencein Games The current state of Game AI’.

Poels, Y., Leuven, K. U., Ibbt, C. U. O. & Health, F. (2012), ‘Are You a Gamer? A Qualitative Study on the Parameters for Categorizing’, IADIS International Journal 10(1), 1–16.

Rabin, S. (2015), Game AI Pro 2: Collected Wisdom of Game AI Professionals.

Riedl, M., Thue, D. & Bulitko, V. (2011), ‘Game AI as storytelling’, Artificial Intelligence forComputer Games pp. 125–150.

Robertson, G. & Watson, I. (2014), ‘A Review of Real-Time Strategy Game AI’.

Samuel H. Fuller, Lynette I. Millett (2011), The Future of Computing Performance: Game Over or Next Level?

Sandberg, T. W. (2011), ‘Evolutionary Multi-Agent Potential Field based AI approach for SSC scenarios in RTS games’, Master of Science thesis, Media Technology and Games, IT University of Copenhagen, February 2011 (supervisor: Julian Togelius).

Shaker, N., Togelius, J., Yannakakis, G. N., Poovanna, L., Ethiraj, V. S., Johansson, S. J., Reynolds, R. G., Heether, L. K., Schumann, T. & Gallagher, M. (2013), ‘The turing test track of the 2012 Mario AI Championship: Entries and evaluation’, IEEE Conference on Computational Intelligence and Games, CIG.

Tremblay, J. & Verbrugge, C. (2013), ‘Adaptive Companions in FPS Games’, 8th International Conference on Foundations of Digital Games pp. 229–236.

Umarov, I. & Mozgovoy, M. (2012), ‘Believable and effective AI agents in virtual worlds: Current state and future perspectives’, International Journal of Gaming and Computer-Mediated Simulations 4(2), 37–59.URL: http://www.igi-global.com/article/believable-effective-agents-virtual-worlds/67551

Van Toll, W. G., Cook IV, A. F. & Geraerts, R. (2012), ‘A navigation mesh for dynamic environments’, Computer Animation and Virtual Worlds 23(6), 535–546.

Weber, B. G. & Mateas, M. (2008), ‘Building Human-Level AI for Real-Time Strategy Games’.

Weber, B., Mateas, M. & Jhala, A. (2011), ‘Building human-level AI for real-time strategy games’,Proceedings of the AAAI Fall Symposium Series pp. 329–336.URL: http://www.aaai.org/Library/AIIDE/aiide11contents.php

Wender, S. & Watson, I. (2012), ‘Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft:Broodwar’, 2012 IEEE Conference on Computational Intelligenceand Games, CIG 2012 pp. 402–408.

Yannakakis, G. N., Spronck, P., Loiacono, D. & André, E. (2013), ‘Player Modeling’, Dagstuhl Follow-Ups 6, 59.URL: http://drops.dagstuhl.de/opus/volltexte/2013/4335/

Yeruva, A. R. (2012), ‘Gamebots Evolution for First Person Shooter (FPS) Games’, pp. 1–11.

Young, J. & Hawes, N. (2012), ‘Evolutionary Learning of Goal Priorities in a Real-Time StrategyGame.’, Aiide pp. 87–92.URL: http://www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/download/5450/5702