Agents, Bodies, Constraints, Dynamics, and Evolution

Alan K. Mackworth

The theme of this article is the dynamics of evolution of agents. That theme is applied to the evolution of constraint satisfaction, of agents themselves, of our models of agents, of artificial intelligence, and, finally, of the Association for the Advancement of Artificial Intelligence (AAAI). The overall thesis is that constraint satisfaction is central to proactive and responsive intelligent behavior.

Copyright © 2009, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602. AI Magazine, Spring 2009.


It was a great pleasure to serve as president of the Association for the Advancement of Artificial Intelligence (AAAI). I think that everyone who serves begins with great trepidation but leaves with a sense of satisfaction. I certainly did. One of the sources of satisfaction was the opportunity to give the presidential address at AAAI-07 in Vancouver, my hometown. This article is based on that talk. That opportunity allowed me to step back and think of the big picture, the long perspective. The way to read the title, "Agents, Bodies, Constraints, Dynamics, and Evolution," as it is intended to be read, is to say that the article is about agents with bodies subject to constraints on their dynamics as they undergo evolution. Thus, I am telling you a story. It is a story that does not, I hope, tell any lies by omission or commission. It is idiosyncratic, that being the nature of narrative, to make it fit what I want to say as I make sense of that big picture. I hardly need add that this is a highly personal and telegraphic view of the topics I discuss. In a sense, it is a cartoon.

The theme of this article, then, is the dynamics of evolution. I shall apply that theme to five different topics: to the evolution of the idea of constraint satisfaction, to agents themselves as they evolve, to our own models of agents and how they have changed over time, to the field of AI, and, finally, I shall apply it to AAAI, the organization. Thus, we can imagine each of these concepts, including the field of AI and AAAI, as agents acting in an environment. An agent in an active environment changes that environment and, in turn, is changed by that environment as each evolves in time. To tie it all together, I do have a thesis, and it will not surprise too many of you, perhaps, when I say that the thesis is: constraint satisfaction is central to intelligent behavior.

Constraint Satisfaction Problems


What may surprise you, however, is that constraint satisfaction of today is not constraint satisfaction as it was 30 years ago: what we might call good old-fashioned constraint satisfaction (GOFCS). Constraint satisfaction itself has evolved far beyond GOFCS. However, I will start with GOFCS as exemplified in the constraint satisfaction problem (CSP) paradigm. The whole concept of constraint satisfaction is a powerful idea. It arose in several fields somewhat simultaneously; a group of us in the early 1970s abstracted the underlying model. Simply, many significant sets of problems of interest in artificial intelligence can each be characterized as a CSP with a set of variables; each variable has a domain of possible values, and there are various constraints on those variables, specifying which combinations of values for the variables are allowed (Mackworth 1977). The constraints may be between two variables or they may be among more than two variables. Consider a simple scheduling CSP example with five one-hour meetings A, B, C, D, and E to be scheduled. There are constraints such as: meeting A must occur after meeting E; meetings A and D have to occur at the same time, but meetings A and B cannot occur at the same time; and so on. A CSP can be represented as a bipartite graph, as shown for the scheduling problem in figure 1(a), where the meetings (variables) are shown as elliptical nodes and the constraints are shown as rectangular nodes.

As domains of possible values for the meeting start times we have one, two, three, or four o'clock in the afternoon. We have to find a solution: a set of times that satisfies the constraints for each meeting. If you are interested, you can go to our AIspace website1 and use the consistency-based CSP solver applet we have developed. There you can see this problem satisfied using arc consistency, and other techniques, to find out whether there is no solution, a unique solution, or more than one. For this problem, as it happens, there is a unique solution for the set of possible meetings, found by running arc consistency, shown in figure 1(b).
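
To make the formulation concrete, here is a minimal sketch (my Python rendering, not the AIspace applet) that encodes the figure 1 scheduling CSP directly and enumerates assignments by brute force; the domains and constraints are transcribed from the figure.

```python
# The meeting-scheduling CSP of figure 1, with times 1-4 standing for
# one through four o'clock, solved by brute-force enumeration.
from itertools import product

domains = {
    "A": {1, 2, 3, 4}, "B": {1, 2, 4}, "C": {1, 3, 4},
    "D": {1, 2, 3, 4}, "E": {1, 2, 3, 4},
}

# Each constraint pairs a tuple of variables with a predicate on their values.
constraints = [
    (("A", "B"), lambda a, b: a != b),
    (("B", "C"), lambda b, c: b != c),
    (("B", "D"), lambda b, d: b != d),
    (("A", "D"), lambda a, d: a == d),
    (("E", "A"), lambda e, a: e < a),
    (("E", "B"), lambda e, b: e < b),
    (("E", "C"), lambda e, c: e < c),
    (("E", "D"), lambda e, d: e < d),
    (("C", "D"), lambda c, d: c < d),
]

names = sorted(domains)
for values in product(*(sorted(domains[v]) for v in names)):
    assignment = dict(zip(names, values))
    if all(pred(*(assignment[v] for v in vs)) for vs, pred in constraints):
        print(assignment)  # {'A': 4, 'B': 2, 'C': 3, 'D': 4, 'E': 1}
```

Brute force is fine at this scale (576 candidate assignments); the point of the consistency algorithms discussed below is to avoid exactly this kind of enumeration as problems grow.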

Using the applet, one can see the constraints propagate around, based on arc consistency. Our design of this AIspace applet for CSPs was inspired by a historical video that David Waltz made in the early 1970s to animate his edge-labeling algorithm for line drawings of blocks world scenes (Waltz 1975). That video shows the constraints propagating around a line drawing as the sets of possible corners at each vertex are pruned by the edge-label constraints.

Here is a familiar CSP example from current popular culture. A typical Sudoku puzzle is shown in figure 2.

The solver has to fill in the squares, each with a digit chosen from {1, 2, ..., 9}, where the constraints are that every row, every column, and every 3 × 3 subgroup has to be a permutation of those nine digits.


[Figure 1. A Meeting Scheduling Problem. (a) As a CSP in AIspace, with domains A: {1, 2, 3, 4}, B: {1, 2, 4}, C: {1, 3, 4}, D: {1, 2, 3, 4}, E: {1, 2, 3, 4} and constraints not(A = B), not(B = C), not(B = D), A = D, E < A, E < B, E < C, E < D, C < D. (b) The solution to the scheduling problem found with arc consistency: A: {4}, B: {2}, C: {3}, D: {4}, E: {1}.]


Usually, to avoid torturing the puzzle solver, the puzzle maker ensures there is one and only one solution to the problem. One can find these solutions using arc consistency techniques and search; moreover, one can easily generate and test potential Sudoku puzzles to make sure they have one and exactly one solution before they are published. AI has its uses.
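
As an illustration of that generate-and-test idea, here is a hedged sketch (mine, reusing the CSP encoding from the snippet above; a full Sudoku encoding would have 81 variables with pairwise not-equal constraints) that counts solutions by backtracking and stops as soon as a second one turns up:

```python
def count_solutions(domains, constraints, limit=2):
    """Count complete solutions by backtracking, stopping early at `limit`.
    A puzzle is well posed exactly when the count is 1."""
    names = sorted(domains)

    def consistent(assignment):
        # Check every constraint whose variables are all assigned so far.
        return all(pred(*(assignment[v] for v in vs))
                   for vs, pred in constraints
                   if all(v in assignment for v in vs))

    def extend(assignment, i):
        if i == len(names):
            return 1            # every variable assigned consistently
        total = 0
        for value in sorted(domains[names[i]]):
            assignment[names[i]] = value
            if consistent(assignment):
                total += extend(assignment, i + 1)
            if total >= limit:
                break           # no need to count past `limit`
        del assignment[names[i]]
        return total

    return extend({}, 0)

# count_solutions(domains, constraints) == 1 for the scheduling CSP above.
```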

Arc consistency is a simple member of the class of algorithms we called network consistency algorithms. The basic idea is that one can, before constructing global solutions, efficiently eliminate local nonsolutions. The constraints are all conjunctive, so each value for every variable must satisfy the constraints. And since all of the constraints have to be satisfied, if there is any local value configuration that does not satisfy them, one can throw that tuple out; that is a "no good." So one can discover (learn) those local inconsistencies very quickly, in linear, quadratic, or cubic time. That pruning will give huge, essentially exponential, savings when one does start searching, constructing global solutions using backtracking or other approaches. The simplest algorithm is arc consistency, then path consistency, then k-consistency, and so on. Many other AI researchers contributed to this development, including Richard Fikes, Dave Waltz, Ugo Montanari, and Eugene Freuder. For a detailed historical perspective on that development see Freuder and Mackworth (2006). Since those early days network consistency algorithms have become a major research industry. The CSP industry is one of the largest in AI, and in many conferences over the years it has been the single largest topic, with the greatest number of sessions and papers. In fact, it has now evolved into its own field of computer science and operations research, called constraint programming. The CSP approach has been combined with logic programming and various other forms of constraint programming. Indeed, it has come to have its own journals, its own conferences, and all the other accoutrements of a fully fledged academic field. It is having a major impact in many industrial applications of AI, logistics, planning and scheduling, and combinatorial optimization. For a comprehensive overview, see the Handbook of Constraint Programming (Rossi, van Beek, and Walsh 2006). However, this discussion, although pertinent to insight into the development of AI, is not central to my theme here.
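
The idea is easy to state in code. Below is a compact AC-3-style sketch of arc consistency for binary constraints (my rendering of the standard algorithm in the representation used above, not the AIspace implementation): revising an arc deletes values with no support, and any deletion requeues the arcs that could be affected.

```python
from collections import deque

def revise(domains, x, y, pred):
    """Prune values of x that have no supporting value in y under pred(x, y)."""
    pruned = {vx for vx in domains[x]
              if not any(pred(vx, vy) for vy in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

def ac3(domains, constraints):
    # Treat each binary constraint as two directed arcs.
    arcs = {}
    for (x, y), pred in constraints:
        arcs[(x, y)] = pred
        arcs[(y, x)] = lambda vy, vx, p=pred: p(vx, vy)
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, arcs[(x, y)]):
            # x's domain shrank: recheck every arc that points at x.
            queue.extend((z, w) for (z, w) in arcs if w == x and z != y)
    return domains
```

On the scheduling CSP above, ac3 prunes every domain to a singleton, reproducing figure 1(b); in general arc consistency only narrows the domains, and backtracking search completes the job.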

Pure Good Old-Fashioned AI and Robotics (GOFAIR)

Here, we are more interested in how we build agents and how the way we build them has evolved over time.

John Haugeland (1985) was the first to use the phrase good old-fashioned AI (GOFAI) in talking about symbolic AI, using reasoning and so on, as a major departure from early work in cybernetics and control theory. GOFAI has come to be a straw man for advocates of subsymbolic approaches. AI at that point, when we discovered these symbolic techniques, tended to segregate itself from those other areas. Lately, however, we have been coming back together in some ways, and that is another theme of this article. Let me quickly add here that there was a great deal of early work in symbolic programming of robots, a lot of great work. That work can be characterized as good old-fashioned AI and robotics (GOFAIR) (Mackworth 1993). The way I shall describe it here is just a cartoon, for as I said, I am telling a story here not meant to be taken literally.

GOFAIR Metaassumptions

In a cartoon sense, a pure GOFAIR robot operates in a world that satisfies several metaassumptions: (1) there is a single agent; (2) the agent executes actions serially; (3) actions occur in a deterministic world; (4) the world is a fully observable, closed world; (5) the agent has a perfect internal model of infallible actions and world dynamics; (6) perception is needed only to determine the initial world state; and (7) a perfect plan to achieve the goal is obtained by reasoning, and executed in an open loop.

[Figure 2. A Sudoku Puzzle.]

There is a single agent in the world that executes its actions serially. It does not have two hands that can work cooperatively. The world is deterministic. It is fully observable. It is closed, so if I do not know something to be true, then it is false, thanks to the closed world assumption (Reiter 1978). The agent itself has a perfect internal model of its own infallible actions and the world dynamics, which are deterministic. If these assumptions are true then perception is needed only to determine the initial world state. The robot takes a snapshot of the world. It formulates its world model. It reasons in that model, and it can combine that reasoning with its goals using, say, a first-order theorem prover to construct a plan. This plan will be perfect because it will achieve a goal even if it executes the plan open loop. So, with its eyes closed, it can just do action A, then B, then C, then D, then E. If it happens to open its eyes again, it would see, "Oh, I did achieve my goal, great!" However, there is no need for it to open its eyes because it had a perfect internal model of these actions that have been performed, and they are deterministic, and so the plan was guaranteed to succeed with no feedback from the world.

CSPs and GOFAIR

What I would like you, the reader, to do is to think of the CSP model as a very simple example of GOFAIR. There are no robots involved, but there are some actions. The solver is placing numbers in the squares and so on. In pure GOFAIR there is a perfect model of the world and its dynamics in the agent's head, so I call the agent then an omniscient fortune teller, as it knows all and it can see the entire future because it can control it, perfectly. Therefore if these conditions are all satisfied, then the agent's world model and the world itself will be in perfect correspondence: a happy state of affairs, but it doesn't usually obtain.

However, when working in this paradigm we often failed to distinguish the agent's world model and the world itself because there really is no distinction in GOFAIR. We confused the agent's world model and the world, a classic mistake.

A Robot in the World

Now we come to think about the nature of robots. A robot acts in a world. It changes that world, and that world changes the robot. We have to conceive of a robot in an environment, performing actions in an environment, and the environmental stimuli, which could be sensory stimuli or physical stimuli, will change the robot. Therefore, think of the robot and its environment as two coupled dynamic systems, operating in time, embedded in time, each changing the other, as they coevolve as shown in figure 3. They are mutually evolving, perpetually, to some future state, because, of course, the environment could contain many other agents who see this robot as part of their environment.

Classic Horizontal Architecture

Again, in a cartoon fashion, consider the so-called three boxes model, or the horizontal architecture model, for robots. Since perception, reasoning, and action are the essential activities of any robot, why not just have a module for each?


[Figure 3. A Robot Coevolving with Its Environment: stimuli flow from the environment to the robot; actions flow from the robot to the environment.]

As shown in figure 4, the perception module interprets the stimuli coming in from the environment; it produces a perfect three-dimensional world model that is transmitted to the reasoning module, which has goals, either internally generated or from outside. Combining the model and the goals, the reasoning module produces a plan. Again, that plan is just a sequence of the form: do this, do this, do this, then stop. There are no conditionals, no loops in these straight-line plans. Those actions will, when executed, change the world perfectly according to the goals of the robot. Now, unfortunately for the early hopes for this paradigm, this architecture can only be thought of as a really good first cut. You know that if you wanted to build a robot it is a really good first thought. You want to push it as hard as you can, because it is nice and simple; it keeps it clean and modular, and all the rest of it. It is simple but, unfortunately, not adequate. Dissatisfaction with this approach drove the next stage of evolution of our views of robotic agents, illustrating our theme.

The Demise of GOFAIR

GOFAIR robots succeed in controlled environments such as blocks worlds and factories, but they cannot play soccer! GOFAIR did work, and does work, as long as the blocks are matte blocks with very sharp edges on black velvet backgrounds. It works in factories if there is only one robot arm and it knows exactly where things are and exactly where they are going to go. The major defect, from my point of view, is that GOFAIR robots certainly cannot, and certainly never will, play soccer. I would not let them into my home without adult supervision. In fact, I would advise you not to let them into your home either.

It turns out that John Lennon, in retrospect, was a great AI researcher, since in one of his songs he mused, "Life is what happens to you when you're busy making other plans" (Lennon 1980). The key to the initial success of GOFAIR is that the field attacked the planning problem and came up with really powerful ideas, such as GPS, STRIPS, and back-chaining. This was revolutionary. Algorithms were now available that could make plans in a way we could not do before. Miller and colleagues' book Plans and the Structure of Behavior (Miller, Galanter, and Pribram 1960) was a great inspiration and motivation for this work. In psychology there were few ideas about how planning could be done until AI showed the way. The GOFAIR paradigm demonstrated how to build proactive agents for the very first time.

But planning alone does not go nearly far enough. Clearly, a proactive GOFAIR robot is indeed an agent that can construct plans and act in the world to achieve its goals, whether short term or long term. Those goals may be prioritized. That is well and good. However, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy."2 In other words, events will occur in the world that an agent does not expect.

[Figure 4. A Horizontal Architecture for a GOFAIR Robot: stimuli feed perception, which produces a model; reasoning combines the model with goals to produce a plan; action executes the plan on the environment.]

It has to be able to react quickly to interrupts from the environment, to real-time changes, to imminent threats to safety to itself or humans, to other agents, and so on. An intelligent robot must be both proactive and responsive. An agent is proactive if it acts to construct and execute short-term and long-term plans and achieve goals in priority order. An agent is responsive if it reacts in real time to changes in the environment, threats to safety, and other agents' actions.

Beyond GOFAIR to Soccer

So that was the real challenge that was before us in the 1980s to the GOFAIR cartoon worldview. How could we integrate proactivity and reactivity? In 1992, I made a proposal (Mackworth 1993): it is fine to say robots must be proactive and reactive, or responsive, but we needed a simple task domain in order to force us to deal with those kinds of issues. I proposed robot soccer as that domain in that paper. Actually, I proposed it after we had already built soccer players in our lab and made them work. We built the world's first robot soccer players, using cheap toy radio-controlled monster trucks, as drawn in figure 5. The first two players were named after Zeno and Heraclitus. You can see videos of the first robot soccer games on the web.3

A single color camera, looking down on these trucks, could see the colored circles on top of the trucks so that the perceptual system could distinguish Zeno from Heraclitus. It could also see the ball and the goals. Each truck has its own controller. Since they cannot turn in place (they are nonholonomic), it is very hard actually to control them. It is a very tricky problem to control this kind of steerable robot. The path-planning problems have to be solved in real time. Of course, one is trying to solve a path-planning problem as the ball is moving and the opponent is moving in order to get that ball; that is very tricky computationally. We were pushing the limits both of our signal-processing hardware and the CPUs to get this to work in real time. We were running there at about a 15 Hz cycle time. The other problem was that our lab was not big enough for these monster trucks. We wanted to go with more trucks, but the department head would not give us a bigger lab. So we were forced to go to smaller robots, namely 1/24th-scale radio-controlled model Porsches, that we called the Dynamites. These cars were running on a ping-pong table with a little squash ball. In the online video, one can see the players alternating between offensive and defensive behaviors.

[Figure 5. The Setup for the World's First Robot Soccer Game, between Zeno and Heraclitus. Drawing by Jim Marin, 2009.]

The behaviors the robots exhibit are clearly a mix of proactive and responsive behaviors, demonstrating the theme of evolution of our models of agents beyond the GOFAIR approach.

Incidentally, there was an amazing, successful effort to get chess programs to the point where they beat the world champion (Hsu 2002). But from the perspective presented here, chess changes only the single-agent Sudoku puzzle into a two-agent game; all the other aspects of the Sudoku domain remain the same: perfect information, determinism, and the like. Chess loses its appeal as a domain for driving AI research in new directions.

We managed to push all our soccer system hardware to the limit, so we were able to develop two-on-two soccer. The cars were moving at up to 1 meter per second and autonomously controlled at 30 Hz. Each had a separate controller off board, and the cars were entirely independent. The only thing they shared was a common front-end vision-perceptual module. We were using transputers (a 1 MIP CPU) because we needed significant parallelism here. You can see a typical game segment with the small cars on the web.4 We were able to do the real-time path planning and correction and control at about 15-30 Hz, depending, but that was really the limit of where we could go at that time (1992-94) because we were limited by the hardware constraints.

RoboCup

As it happens, shortly thereafter, some Japanese researchers started to think along similar lines. They saw our work and said, "Looks good." Instead of using steerable robots, as we had, they chose holonomic robots that can spin in place. Hiroaki Kitano and his colleagues in Japan proposed RoboCup (Kitano 1998). In Korea, the MiroSot group5 was also intrigued by similar issues. It made for an interesting international challenge.

The first RoboCup tournament was held in Nagoya, Japan, in 1997. Our University of British Columbia (UBC) team participated; it was a milestone event. Many researchers have subsequently made very distinguished contributions in the robot soccer area, including Peter Stone, Manuela Veloso, Tucker Balch, Michael Bowling, Milind Tambe, and many others. It has been fantastic. At RoboCup 2007, held in Atlanta, Georgia, there were approximately 2,700 participant agents, and of those, about 1,700 were people and 1,000 were robots. A review of the first 10 years of RoboCup has recently appeared (Visser and Burkhard 2007), showing how it has grown in popularity and influenced basic research.

RoboCup has become incredibly exciting, a little cutthroat and competitive with, perhaps, some dubious tactics at times, but that is the nature of intense competition in war and soccer. More importantly, robot soccer has been incredibly stimulating to many young researchers, and it has brought many people into the field to do fine work, including new competitions such as RoboRescue and RoboCup@Home. The RoboCup mission is to field a team of humanoid robots to challenge and beat the human champions by 2050, as suggested by figure 6.

From Sudoku to Soccer and Beyond

Now let us step back a bit and consider our theme of evolutionary development.

[Figure 6. Humanoid Robot Soccer Player. Illustration by Giacomo Marchesi, 2009.]

If one thinks of the Sudoku domain as the exemplar of GOFAIR in a very simple-minded way, then soccer is an exemplar of something else; what is the something else? I think it is situated agents, and so we are in transition from one paradigm to another. As shown in figure 7, we can compare Sudoku and soccer as exemplar tasks for each paradigm, GOFAIR and situated agents, respectively, along various dimensions.

I shall not go through these dimensions exhaustively. In soccer we have 23 agents: 22 players and a referee. Soccer is hugely competitive between the teams obviously, but also of major importance is the collaboration within the teams: the teamwork being developed, the development of plays, and the communications systems, the signaling systems between players and the protocols for them. Soccer is very real time. There is a major influence of dynamics and of chance. Soccer is online in the sense that one cannot compute a plan offline and then execute it, as one can in GOFAIR. Whenever anything is done, a plan almost always must be recomputed. There exist a variety of temporal planning horizons, from "Can I get my foot to the ball?" through to "Can I get the ball into the net?" and "Can I win this tournament?" The visual perception is very situated and embodied. Vision is on board the robots now in most of the leagues, so a robot sees only what is visible from where it is, meaning the world is obviously only partially observable. The knowledge base is completely open because one cannot infer much about what is going on behind one's back. The opportunities for robot learning are tremendous.

From GOFAIR to Situated Agents

As we make this transition from GOFAIR to situated agents, how is it done? There has been a whole community working on situated agents since James Clerk Maxwell in the late 19th century, building governors for steam engines and the like. Looking at Maxwell's classic paper "On Governors" (Maxwell 1868), it is clear that he produced the first theory of control, trying as he was to understand why James Watt's feedback controller for steam engines actually worked, under what conditions it was stable, and so on. Control theorists have had a great deal to say about situated agents for the last century or so. So, one way to build a situated agent would be to suggest that we put AI and control together: to stick a planner, an AI planner, GOFAIR or not, on top of a reactive control-theoretic controller doing PID control. One could also put in a middle layer of finite-state mode control. These are techniques we fully understand, and that is, in fact, how we did it for the first soccer players, which I have described above. There was a two-level controller. However, there are many problems with that approach, not the least being debugging it, understanding it, let alone proving anything about it. It was all very much: try it and see. It was very unstable as new behaviors were added. It had to be restructured at the higher level and so on. Let me just say that it was a very graduate-student-intensive process requiring endless student programming hours! So rather than gluing a GOFAIR planner on top of a multilayer control-theoretic controller, we moved in a different direction.

I argued that we must abandon the metaassumptions of GOFAIR but keep constraint satisfaction. My response was that we just give up on those metaassumptions of GOFAIR but not throw out the baby of constraint satisfaction with the bathwater of the rest of GOFAIR. Constraint satisfaction was, and is, the key in my mind, and the reason it is the key is that we understand symbolic constraints, as well as numerical. We understand how to manipulate them and so on. We understand even first-order logic as a constraint-solving system, thanks to work on that side, but we also understand constraints in the control world.

                       Sudoku     Soccer
Number of agents       1          23
Competition            No         Yes
Collaboration          No         Yes
Real time              No         Yes
Dynamics               Minimal    Yes
Chance                 No         Yes
Online                 No         Yes
Planning Horizons      No         Yes
Situated Perception    No         Yes
Partially Observable   No         Yes
Open World             No         Yes
Learning               Some       Yes

Figure 7. Comparison of Sudoku and Soccer along Various Dimensions.

We understand that a thermostat is trying to solve a constraint. We have now a uniform language of constraint solving, or satisfaction, although one aspect may be continuous while the other may be discrete or even symbolic. There is a single language, or single paradigm, to understand it from top to bottom, which is what we need to build clean systems. The constraints now, though, are dynamic, coupling the agent and its environment. They are not like the timeless Sudoku constraint: every number must be different, now and forever. When one is trying to kick a ball, the constraint one is trying to solve is whether the foot position is equal to the ball's position at a certain orientation, at a certain velocity, and so on. Those are the constraints one is trying to solve, and one really does not care how one arrives there. One simply knows that at a certain point in time, the ball will be at the tip of the foot: not where it is now but where it will be in the future. So this is a constraint, but it is embedded in time, and it is changing over time as one is trying to solve it, and clearly, that is the tricky part.

Thus, constraints are the key to a uniform architecture, and so we need a new theory of constraint-based agents. This has set the stage. I shall leave you in suspense for a while for a digression before I come back to sketch that theory. Its development is part of the evolutionary process that is the theme of this article.

Robot Friends and Foes

I digress here briefly to consider the social role of robots. Robots are powerful symbols; they have a really interesting emotional impact. One sees this instinctively if one has ever worked with kids and Lego robotics or the Aibo dogs that we see in figure 8, or with seniors who treat robots as friends and partners. We anthropomorphize our technology with things that look almost like us or like our pets, although not too much like us. (That is the "uncanny valley" [Mori 1982].) We relate to humanoid robots very closely emotionally.

[Figure 8. Robot Friends Playing Soccer. Photo courtesy Stefan Eissing.]

Children watching and playing with robot dogs appear to bond with them at an emotional level. But, of course, the flip side is the robot soldier (figure 9), the robot army, and the robot tank.

Robots, Telerobots, Androids, and Cyborgs

Robots really are extensions of us. Of course, there are many kinds of robots. One uses the word robot loosely but, technically, one can distinguish between strictly autonomous robots and telerobots, where there is human supervisory control, perhaps at a distance, to Mars or in a surgical situation, for example. There are androids that look like us and cyborgs that are partly us, partly machine. The claim is that robots are really reflections of us, and that we project our hopes and fears onto them. That this has been reflected in literature and other media over the last two centuries is a fact. I do not need to bring to mind all the robot movies, but robots do stand as symbols for our technology.

Dr. Frankenstein and his creation, in Frankenstein; or, The Modern Prometheus (Shelley 1818), stood as a symbol of our fear, a sort of Faustian fear that that kind of power, that kind of projection of our own abilities in the world, would come back and attack us. Mary Shelley's work explored that, and Charlie Chaplin's Modern Times (1936) brought the myth up to date. Recall the scene in which Chaplin is being forced to eat in the factory where, as a factory worker, his entire pace of life is dictated by the time control in the factory. He is a slave to his own robots, and his lunch break is constrained because the machines need to be tended. He is, in turn, tended by an unthinking robot that keeps shoving food into his mouth and pouring drinks on him until finally it runs amok. Chaplin was making a very serious point: our technology stands in real danger of alienating and repressing us if we are not careful.

I shall conclude this somewhat philosophical interjection with the observations of two students of technology and human values.

[Figure 9. ... and Robot Foes. Illustration by Giacomo Marchesi, 2009.]

Marshall McLuhan, although he was thinking of books, advertising, television, and other issues of his time, argued something that applies equally to robots: "We first shape the tools and thereafter our tools shape us" (McLuhan 1964). Parenthetically, this effect can be seen as classic projection and alienation in the sense of Ludwig Andreas Feuerbach (1854).

The kinds of robots we build, the kinds of robots we decide to build, will change us as they will change our society. We have a heavy responsibility to think about this carefully. Margaret Somerville is an ethicist who argues that actually the whole species Homo sapiens is evolving into Techno sapiens as we project our abilities out (Somerville 2006). Of course, this is happening at an accelerating rate. Many of our old ethical codes are broken and do not work in this new world, whether it is in biotechnology or robotics or almost any other area of technology today. As creators of some of this technology, it is our responsibility to pay serious attention to that.

Robots: One More Insult to the Human Ego?

Another way of thinking about our fraught and ambivalent relationship with robots is that this is really one more insult. How much more can humankind take? Robotics is only the latest displacement of the human ego from center stage. Think about the intellectual lineage that links Copernicus, Darwin, Marx, Freud, and robots. This may be a stretch in thinking, but perhaps not.

Humans thought they were at the center of the universe until Nicolaus Copernicus proposed that the earth was not at the center and the sun should be seen that way. Charles Darwin hypothesized we are descended from apes. Karl Marx claimed that many of our desires and goals are determined by our socioeconomic status, and, thus, we are not as free as we thought. Sigmund Freud theorized that one's conscious thoughts are not freely chosen but rather come from the unconscious mind.

[Figure 10. RoboSurgeon NeuroArm. Photo courtesy University of Calgary.]

Now I suggest that you can think of robots as being in that same great lineage of saying: you, Homo sapiens, are not unique. Now there are other entities, robots, created by us, that can also perceive, think, and act. They could become as smart as we are. But this kind of projection can lead to a kind of moral panic: "The robots are coming! The robots are coming! What are we going to do?" When we talk to the media, the first questions reporters ask are, typically: "Are you worried about them rising up and taking over?" and "Do you think they'll keep us as pets?" The public perception of robots is evolving as our models of robots and the robots themselves evolve.

Helpful Robots

To calm this kind of panic we need to point to some helpful robots. The arm shown in figure 10, the RoboSurgeon NeuroArm, is actually fabricated from nonmagnetic parts so it can operate within a magnetic resonance imaging (MRI) field. The surgeon is able to do neurosurgery telerobotically, getting exactly the right parts of the tumor while seeing real-time feedback as the surgery is performed. An early prototype of our UBC smart wheelchair work is shown in figure 11. This chair can use vision and other sensors to locate itself, map its environment, and allow its user to navigate safely.

RoboCars: DARPA Urban Challenge

Continuing with the helpful robot theme, consider autonomous cars. The original DARPA Challenges, in 2004 and 2005, and the Urban Challenge in 2007 have catalyzed significant progress, stimulated by Ron Brachman (my predecessor as AAAI president) at DARPA. Sebastian Thrun and his team at Stanford developed Junior (figure 12a), loaded with sensors and actuators and horsepower and CPUs of all sorts, facing off against Boss (figure 12b), the Carnegie Mellon University/General Motors Tartan Racing team, in the fall of 2007, with Boss taking first place and Junior second in the Urban Challenge.6 The media look at these developments and see them as precursors to robot tanks, cargo movers, and automated warfare, reaching an understanding of why DARPA funded them. However, Thrun (2006) is an evangelist for a different view of such contests. The positive impact of having intelligent cars would be enormous. Consider the potential ecological savings of using highways so much more efficiently instead of paving over farmland. Consider the safety aspect in reducing the annual carnage of 4,000 road accident deaths a year in Canada alone. Consider the fact that cars could negotiate at intersections, the way Dresner and Stone (2008) have simulated, to show you could get maybe two to three times the throughput in cities in terms of traffic if these cars could talk to each other instead of having to wait for dumb stop signs and traffic lights. Consider the ability for the elderly or disabled to get around on their own. Consider the ability to send one's car to the parking lot by itself and then call it back later. There would be automated warehouses for cars instead of using all that surface land for parking. Truly, the strong positive implications of success in this area are enormous. But can we trust them? This is a real problem, and it is one of the major problems. In terms of smart wheelchairs, one major reason they do not already exist now is liability. It is almost impossible to get an insurance company to back a project or a product. This clarifies why the car manufacturers have moved very slowly, in an incremental way, to develop intelligent technology.

Can We Trust Robots?

There are some real reasons we cannot yet trust robots. The way we build them now, not only are they not trustworthy, they are also unreliable. So, can they do the right thing? Will they do the right thing? And then, of course, there is the fear that I alluded to earlier that eventually they will become autonomous, with free will, intelligence, and consciousness.

[Figure 11. Prototype Smart Wheelchair (UBC, 2008).]

Ethics at the Robot-Human Interface

Do we need robot ethics, for us and for them? We do. Many researchers are working on this (Anderson and Anderson 2007). Indeed, many countries have suddenly realized this is an important issue. There will have to be robot law. There are already robot liability issues. There will have to be professional ethics for robot designers and engineers, just as there are for engineers in all other disciplines. We will have to work through the issues around what we should do ethically in designing, building, and deploying robots. How should robots make decisions as they develop more autonomy? What should we do ethically, and what ethical issues arise for us as we interact with robots? Should we give them any rights? We have a human rights code; will there be a robot rights code?

There are, then, three fundamental questions we have to address. First, what should we humans do ethically in designing, building, and deploying robots? Second, how should robots decide, as they develop autonomy and free will, what to do ethically? Third, what ethical issues arise for us as we interact with robots?

Asimov's Laws of Robotics. In considering these questions we will go back to Isaac Asimov (1950), as he was one of the earlier thinkers about these issues, who put forward some interesting, if perhaps naïve, proposals. His original three Laws of Robotics are: I. A robot may not harm a human being, or, through inaction, allow a human being to come to harm. II. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. III. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

Asimov's Answers. Asimov's answers to those questions I posed are that, first, you must put those laws into every robot, and by law manufacturers would have to do that. Second, robots should always have to follow the prioritized laws. He did not say much about the third question. His plots arise mainly from the conflict between what the humans intend the robot to do and what it actually does do, or between literal and sensible interpretations of the laws, because they are not codified in any formal language. He discovered many hidden contradictions, but they are not of great interest here. What is of interest and important here is that, frankly, the laws and the assumptions behind them are naïve. That is not to blame Asimov, as he was very early and pioneered the area, but we can say that much of the ethical discussion nowadays remains naïve. It presupposes technical abilities that we just do not have yet.

What We Need

We do not currently have adequate methods for modeling robot structure and functionality, for predicting the consequences of robot commands and actions, and for imposing requirements on those actions, such as reaching the goal but doing it in a safe way and making sure that the robot is always live, with no deadlock or livelock. And most importantly, one can put those requirements on the robot, but one has to be able to find out whether the robot will be able to satisfy those requirements. We will never have, for real robots, 100 percent guarantees, but we do need within-epsilon guarantees. Any well-founded ethical discussion presupposes that we (and robots) do indeed have such methods. That is what we require.

[Figure 12. Two Competitors in the DARPA Urban Challenge. Photos courtesy US Defense Advanced Research Projects Agency.]

Theory Wanted

So, finally coming back to the constraint-based agent theory, it should help to satisfy those requirements. In short, we need a theory with a language to express robot structure and dynamics, a language for constraint-based specifications, and a verification method to determine whether a robot described in the first language will be likely to satisfy its specifications described in the second language.

Robots as Situated Agents

What kind of robots, then, are we thinking about? These are situated robots, tightly coupled to the environment; they are not universal robots. Remember Rossum's Universal Robots (Čapek 1923)? We are not going to build universal robots. We are going to build (we are building) very situated robots that are functioning in particular environments for particular tasks. But those environments are, typically, highly dynamic. There are other agents. We have to consider social roles. There is a very tight coupling of perception and action, perhaps at many different levels. We now know that the human perceptual system is not a monolithic black box that delivers a three-dimensional model from retinal images. There are many visual subsystems dealing with recognition, location, orientation, attention, and so forth. Our robots will be like that as well.

It is not cheating to embody environmental constraints by design, evolution, or learning. It was cheating in the old GOFAIR paradigm, which did aim at universal robots. Everything had to be described in, say, the logic, and one could not design environmental constraints into the robots. We think just following biology and natural evolution is the way to go, and learning will play a major part. Evolution is learning at the species level. Communication and perception are very situated. The architectures are online, and there is a hierarchy of time scales and time horizons. Critically, we want to be able to reason about the agents' correctness as well as in the agents.

[Figure 13. A Vertical Robotic System Architecture: the robot's body at the bottom, with a stack of controllers (controller-1 through controller-n) above it, coupled to the environment through stimuli and actions.]

We do not require the agents to do reasoning (they may not), but certainly we want to be able to reason about them. When we think back to the GOFAIR model, we never actually did that. The reasoning was in the agent's head alone, and we assumed that if it was correct everything else was correct. Finally, as I mentioned earlier, one cannot just graft a symbolic system on top of a signal-control-based system and expect the interface to be clean, robust, reliable, debuggable, and (probably) correct. So the slogan is: No hybrid models for hybrid systems.

Vertical Architecture

To satisfy those requirements for situated agents we have to throw away the horizontal three-boxes architectural model and move to a vertical wedding-cake architecture. As shown in figure 13, as one goes up these controllers, each controller sees a virtual body below it, modularizing the system in that way. Each controller, as one goes higher, is dealing with longer time horizons but with coarser time granularity and different kinds of perception. Each controller will only know what it needs to know. This architectural approach was advocated by James Albus (1981) and Rodney Brooks (1986). It corresponds quite closely to biological systems at this level of abstraction.

A Constraint-Based Agent

We are interested in constraint-based agents. They are situated; they will be doing constraint satisfaction but in a more generalized sense, not in the GOFCS sense. These constraints may be prioritized. Now we conceive of the controller of the agent or robot as a constraint solver.

Dynamic Constraint Satisfaction

Consider the generalization of constraint satisfaction to dynamic constraint satisfaction. A soccer example will serve us.

Imagine a humanoid robot trying to kick a soccer ball. In figure 14, we can see the projection into a two-dimensional space of a complex phase space that describes the position and velocity of the limbs of the robot and the ball at time t. Each flow line in the figure shows the dynamics of the evolution of the system from different initial conditions. The controller has to be able to predict where the robot should move its foot to, knowing what it knows about the leg actuators and the ball and where it is moving and how wet the field is and so on, to make contact with the ball to propel it in the right direction. That corresponds to the 45 degree line y = x, where x is the ball position on the horizontal axis and y is the foot position on the vertical axis. That is the constraint we are trying to solve. If the controller ensures the dynamical system always goes to (or approaches, in the limit) that constraint and stays there, or maybe if it doesn't stay there but always returns to it soon enough, then we say that this system is solving that constraint, FootPosition(t) = BallPosition(t). In hybrid dynamic systems language, we say the coupled agent-environment system satisfies the constraint if and only if the constraint solution set, in the phase space of that coupled hybrid dynamic system, is an attractor of the system as it evolves. Incidentally, that concept of online hybrid dynamic constraint satisfaction subsumes the entire old, discrete offline GOFCS paradigm (Zhang and Mackworth 1993).
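
As a toy illustration (my sketch, with made-up one-dimensional dynamics; the real framework is the Constraint Nets formalism described below), here is a controller for which the solution set of FootPosition(t) = BallPosition(t) is an attractor: the constraint error decays even though the ball keeps moving and the constraint has no timeless solution.

```python
# Toy 1-D dynamics: a ball drifting at constant velocity and a foot driven
# by feedback plus velocity feedforward. The constraint error shrinks by a
# factor (1 - gain*dt) each step, so the constraint set is an attractor.
dt, gain = 0.05, 2.0
ball, ball_vel = 0.0, 0.4   # the environment evolves on its own
foot = 1.5                  # the agent starts off the constraint line

for _ in range(100):
    error = ball - foot                      # violation of foot = ball
    foot += (gain * error + ball_vel) * dt   # controller output
    ball += ball_vel * dt                    # environment update

print(round(ball - foot, 4))  # ~0.0: the system has converged to the constraint
```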

Formal Methods for Constraint-Based Agents

The constraint-based agent (CBA) framework consists of three components: (1) Constraint Nets (CN) for system modeling; (2) timed ∀-automata for behavior specification; and (3) model-checking and Liapunov methods for behavior verification.

These three components correspond to the tripartite requirement for the theory I said we wanted earlier. Ying Zhang and I developed these formal methods for constraint-based agents (Zhang and Mackworth 1995; Mackworth and Zhang 2003). First, there is an architecture for distributed asynchronous programming languages called Constraint Nets. Programs in CN represent the robot body, the controller, and the environment. In them are represented constraints that are local on the structure and dynamics of each system.

[Figure 14. Solving a Dynamic Soccer Constraint: flow lines in the BallPosition(t)-FootPosition(t) plane converging to the line FootPosition(t) = BallPosition(t).]

For behavior specification we use either temporal logics or timed ∀-automata. For verification we have used model checking or Liapunov techniques taken from the standard control literature but generalized for symbolic as well as numerical systems. Rather than present any technical detail here, I shall sketch a case study, again using soccer.

A Soccer Case Study with Prioritized Constraints

Suppose we want to build a robot soccer player that can move around the world and repeatedly find, track, chase, and kick the soccer ball. The setup is shown in figure 15. Pınar Muyan-Özçelik built a controller for this robot to carry out the task using the constraint-based agent methodology (Muyan-Özçelik and Mackworth 2004). The detailed view of the robot in figure 16 shows a robot base, which can only move in the direction it is facing but can rotate in place to move in a new direction. There is a pan-tilt unit that serves as a neck and a trinocular color camera on top of it that can do stereo vision, but in this experiment we used monocular color images only.

This is a very simple, almost trivial example, but even here you get a rich complexity of interaction with emergent behavior. Imagine that you have got very simple controllers that can solve each of these constraints: (1) get the ball in the image; (2) if the ball is in the image, center it; (3) make the base heading equal to the pan direction. Imagine that you are a robot and remember you can only move forward in the direction you are facing.

[Figure 15. A Robot and a Human Play Ball.]

If you turn your head to the left and acquire the ball in the image over there, then you have to turn your body to the left towards it, and as you are tracking the ball in the image, you have to turn your head to the right, in the opposite direction. This is analogous to the well-known vestibulo-ocular reflex (VOR) in humans. Now you are looking at the ball and facing towards it, so now you can move towards it and hit the ball. The last constraint is for the robot to be at the ball. These are the constraints, and from them will emerge this unified behavior: acquire, track, chase, and kick the ball, if we can satisfy these constraints, but in that priority order. If at any time one is satisfying a lower priority constraint and a higher priority constraint becomes unsatisfied, the controller must revert to resatisfying it.

The prioritized constraints are: Ball-In-Image (I), Ball-In-Center (C), Base-Heading-Pan (H), Robot-At-Ball (A). The priority ordering is I > C > H > A. We want to satisfy those prioritized constraints. That is the specification. The specification for this system is that one has to solve those four constraints with that priority. That is all one would say to the system. It is a declarative representation of the behavior we want the system to exhibit. We can automatically compile that specification into a controller that will in fact exhibit that emergent behavior. We can conceptualize these prioritized constraint specifications as generalizations of the GOFAIR linear sequence plans.

Constraint-Based Agents in Constraint Nets

Suppose we are given a prioritized constraint specification for a controller at a certain level in the controller hierarchy, as in figure 17. The specification involves Constraint-1, Constraint-2, and Constraint-3. It requires this priority order: Constraint-1 > Constraint-2 > Constraint-3.

We assume we have a simple solver for each constraint: Constraint Solver-1, Constraint Solver-2, and Constraint Solver-3. Constraint-1 is the highest priority, so if it is active and not satisfied, its solver indicates, "I'm not satisfied now; I'd like you to do this to satisfy Constraint-1." It might be a gradient descent solver, say. Its signal would go through Arbiter-1. The arbiter knows this is higher priority and passes its signal all the way through to Arbiter-2 as well, to the motor outputs. If Constraint-1 is satisfied, Arbiter-1 will let Constraint Solver-2 pass its outputs through, and so on. If there is any conflict for the motors, that is how it is resolved. If there is no conflict then the constraints can be solved independently because they are operating in orthogonal spaces. Using that architecture we built a controller for those constraints in the soccer player, and we tested it all in simulation, and it works. It works in a wide variety of simulated conditions; it works with the same controller in a wide variety of real testing situations. We found that the robot always eventually kicks the ball repeatedly, both in simulation and experimentally. In certain circumstances, we can prove that the robot always eventually kicks the ball repeatedly. We conclude that the constraint-based agent approach with prioritized constraints is an effective framework for robot controller construction for a simple task.
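
The arbitration scheme can be caricatured in a few lines (hypothetical names and a toy state dictionary of my own; the real controller is a Constraint Net, not Python): each solver reports whether its constraint is satisfied and proposes a corrective action, and the cascaded arbiters amount to acting on the highest-priority unsatisfied constraint.

```python
# Prioritized arbitration for I > C > H > A: act on the highest-priority
# unsatisfied constraint; lower-priority solvers get through only when
# everything above them is satisfied.

def arbitrate(solvers, state):
    """solvers: (satisfied, propose) pairs, highest priority first."""
    for satisfied, propose in solvers:
        if not satisfied(state):
            return propose(state)  # revert to the first violated constraint
    return "idle"                  # all constraints currently satisfied

solvers = [
    (lambda s: s["ball_in_image"],      lambda s: "pan to acquire ball"),    # I
    (lambda s: s["ball_centered"],      lambda s: "pan to center ball"),     # C
    (lambda s: s["heading_equals_pan"], lambda s: "rotate base to heading"), # H
    (lambda s: s["at_ball"],            lambda s: "drive forward"),          # A
]

state = {"ball_in_image": True, "ball_centered": True,
         "heading_equals_pan": False, "at_ball": False}
print(arbitrate(solvers, state))  # -> rotate base to heading
```

If, say, the ball leaves the image while the base is turning, Ball-In-Image becomes unsatisfied and its solver immediately regains control, which is exactly the reversion behavior the specification demands.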

Just as an aside, this is a useful way to think about a classic psychology problem. Karl Lashley, in a seminal paper called "The Problem of Serial Order in Behavior" (Lashley 1951), was writing about speech production, but the sequencing problem arises for all behaviors. Suppose one has a large number of goals one is attempting to satisfy. Each goal is clamoring to be satisfied. How do the goals get control of the actuators? How does one sequence that access?

[Figure 16. A Simple Soccer Player: a B14 robot base carrying a pan-tilt unit and a trinocular camera, with a soccer ball.]

How does one make sure it is robust? For prioritized constraints, it is robust in the sense that if the controller senses the loss of satisfaction of a higher priority constraint it will immediately resume work on resatisfying it. One can also think of what we have done as a formalization of subsumption (Brooks 1991). So this is a solution; I am not saying it is the solution. It is a very simple-minded solution, but it is a solution to the classic problem of the serial ordering of behavior. And it demonstrates that prioritized constraints can be used to build more reliable dynamic agents.

Modeling Uncertainty

So far, nothing has been said about the element of chance but, of course, in real robots with real environments there will be much noise and uncertainty. Robert St-Aubin and I have developed probabilistic constraint nets, using probabilistic verification (St-Aubin, Friedman, and Mackworth 2006). As shown in figure 18, there will be uncertainty in the model of the dynamics and in the dynamics themselves and in the robot's measurement of the world. Moreover, one will be unable to fully model the environment, so there will be environmental disturbances.

Observations

Stepping back, we observe that a very simple idea, constraint satisfaction, allows us to achieve intelligence through the integration of proactive and responsive behaviors, and it is uniform top to bottom. We see in the formal prioritized framework the emergence of robust goal-seeking behavior. I propose it as a contribution to the solution of the problem of a lack of technical foundation for many of the proposals for robot ethics. So if one asks, "Can robots do the right thing?" the answer so far is: yes, sometimes they can do the right thing, almost always, and we can prove it, sometimes.

Challenges for AAAI

As noted throughout, evolution is a dynamic process. I have discussed how constraint satisfaction has changed, how agents change, how our model of agents has changed, and how AI is changing as we dropped the meta-assumptions that held us back in GOFAIR and evolved to the situated agent model.


Figure 17. Synthesizing a Controller from a Prioritized Constraint Specification. (Within the agent, constraint solvers for the prioritized constraints Constraint1 > Constraint2 > Constraint3 feed Arbiter-1 and Arbiter-2; the arbitrated commands drive the controller and body, which acts on, and reports from, the environment.)


AAAI, as an organization, is an agent; it is operating in an environment, a global environment. It is a very competitive but also collaborative environment. In fact, to me, it is analogous to soccer. Of course, apparently everything looks like soccer to me, so that is not surprising. Think of AAAI as a soccer team: keep your eye on the ball, know what game is going on, and know what the competition is doing. We constitute the premier AI membership society on the planet. I say this with pride. It is healthy; we have great involvement from our community. We see at our conferences the excitement generated by all the different events, competitions, papers, and participant interaction.

AAAI is actually changing AI, and that is what we should be doing through all our activities, including the workshops and symposia. And we are. But the world is also changing; in fact, AI has changed in a way that maybe we have not recognized as a society. AI does now have a level of scientific and engineering maturity. In some of our early days we rather pushed aside control theory and pushed aside pattern recognition and the like.

We insisted we were doing important symbolic work and dismissed other aspects of work that we now value. We have to change that attitude, and we have to change it in our society and in our field. We are now doing so more and more. Now we rejoice in a new AI. We recognize that uncertainty is critically important, thanks to Peter Cheeseman (1985), Judea Pearl (1988), and many others, and we are including fields such as statistics, machine learning, pattern recognition, operations research, control, neuroscience, and psychology, and we welcome them all. The beauty of the way we work as a field is that we are problem focused, not discipline focused, so we do not have those disciplinary silo traps to worry about. AI is now everywhere.

Globalization has hit us too. We came to realize that although we called ourselves the American Association for Artificial Intelligence, about one-third of our members are not American. About half of the conference papers coming in and the attendees at our conferences are not American. Do they feel welcome? Of course, for a Canadian president to ask that question was a little tricky, but I was assured by my American colleagues that it was a critical question to ask. Thus, we decided to go for it to recognize the current reality. In proposing to change the name of the society, we were anticipating a bit of a firestorm, which, happily, did not happen.


Figure 18. Uncertainty in Robots. (Environmental disturbances, uncertainty in the dynamics, and measurement noise all perturb the control force driving the robot toward its target.)

Certainly our membership, over 90 percent, was enthusiastically in favor of it, and, interestingly, many of the international societies were in favor of changing our name to the Association for the Advancement of Artificial Intelligence, which recognizes the current nature of our global situation.

We still must remember we have three roles: national, regional, and international. We have a national role; we have to represent our interests in scientific advancement in Washington and to be part of the Computing Research Association looking at the science and engineering funding situation in the United States and in Canada; and we also have both a regional role and, increasingly, an international role. We need to communicate better with our sibling AI and computer science societies and push harder for global coordination.

Strategic Directions for AAAI

Given the rapid change in AI and in the environment for scientific research and global communications, this is an opportune time to reflect upon strategic directions for AAAI. Until 2007, AAAI had never had a formal strategic plan. Thus, the Strategic Planning Board and Council started a planning process with a Strategic Planning Working Group in 2007, which is now well underway. We need to consider our vision, our mission, our values, and our goals as a scientific society. These goals include supporting and facilitating the basic and applied science of AI, enabling and facilitating the practice of AI, conducting outreach and advocacy, and being an agile and responsive professional society. Some of the key issues that have arisen include education for various audiences; communication forums (conferences, workshops, symposia); open access to publications, knowledge, and data; ensuring we provide tangible member benefits; building community; exploiting media; and building collaborative linkages with other AI societies, conferences, and journals.

We have done some good work in education with AI Topics, led by Bruce Buchanan and Jon Glick, but we could do more to encourage AI curriculum development. Clearly, AI is the most exciting field in computer science, but most undergraduate programs do not offer it until third year. Perhaps we could help with enrollment and recruitment if we could make it more accessible earlier in the curriculum. Recent AAAI meetings on AI and education are promoting and enabling this work.

Open access to publications is a problem and opportunity we do have to address. The publications access task force under president-elect Martha Pollack is carefully considering the options, acknowledging that access to publications is a significant benefit to our members. AI Magazine, guided by David Leake, does a wonderful job. New forms of publication, such as video on the web, are now being actively encouraged. We are also archiving and preserving historical AI videos.

Finally, AAAI has a leadership role to play in the international AI community. In working cooperatively with all our sibling AI societies and conferences, we need more coordination and coherence, internationally. Obviously, this must be done carefully, in a collegial fashion, but it must be done.

How Can We Help?

As I contemplated what I wished to address in concluding, I realized I wanted to touch upon a few of my more personal thoughts. AI is now mature, both as a science and, in its technologies and applications, as an engineering discipline. There are themes of AI in service to people and to our planet, themes that we should explore. We no longer have the luxury of ignoring the global crises of global warming, poverty, food production, sustainability, arms control, health, education, the aging population, and demographic issues. We do have a unique perspective and the skills to contribute practically to addressing many of these concerns. There is an opportunity to join with people who are already dealing with these issues, not in an arrogant way but collaboratively. We could, as an example, improve access to tools for learning about AI so that people could be empowered to try our techniques on their problems. We know we cannot go in as experts telling people how to solve their problems, but we can make tools available so that someone already knowledgeable in a domain might realize that machine learning, say, offers radically new and potentially better approaches to a problem. The success of RoboCup and other games and competitions shows how effective they can be as learning and teaching environments. The One Laptop Per Child (OLPC) initiative has already made enormous strides; we could be supporting it more actively. Clearly, there is ample room for imaginative thinking in this area.

There are many opportunities for AI to have a positive impact upon the planet's environment. It is tremendously encouraging that work for betterment has already been done or is being contemplated. We have already considered some of the environmental impact of intelligent cars and smart traffic control. All the work on combinatorial auctions, already applied to spectrum allocation and logistics, could be further applied to optimizing energy supply and demand and to carbon offsets. There could be much more work on smart energy controllers with distributed sensors and actuators that would improve energy use in buildings. We could perhaps use qualitative modeling techniques for climate scenario modeling.
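To make the auction idea concrete, the core computation in a combinatorial auction is winner determination: choosing the revenue-maximizing set of non-overlapping bundle bids. The sketch below uses hypothetical energy-market bundles and brute-force search; the problem is NP-hard, and practical systems use sophisticated search or integer programming instead.

    from itertools import combinations

    # Hypothetical bundle bids: (set of items, price offered).
    bids = [
        ({"peak-power", "storage"}, 10.0),
        ({"peak-power"}, 6.0),
        ({"storage"}, 5.0),
        ({"offsets"}, 3.0),
    ]

    def winner_determination(bids):
        # Brute force: try every subset of bids, keep the best one whose
        # bundles are pairwise disjoint (no item sold twice).
        best, best_value = [], 0.0
        for r in range(1, len(bids) + 1):
            for subset in combinations(bids, r):
                items = [b[0] for b in subset]
                if sum(len(i) for i in items) == len(set().union(*items)):
                    value = sum(b[1] for b in subset)
                    if value > best_value:
                        best, best_value = list(subset), value
        return best, best_value

    print(winner_determination(bids))
    # Picks {peak-power}, {storage}, and {offsets} separately for 14.0,
    # beating the 10.0 bundle bid plus offsets at 13.0.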


Finally, assistive technology for the disabled and aging is being pioneered by researchers such as Martha Pollack (2005) and Henry Kautz (Liu et al. 2006). We need to listen very carefully to what they are telling us. There is a huge opportunity for us, an opportunity that is extremely attractive to graduate students. One issue, which has been mentioned earlier, is the progress-hindering question of liability, which can be, and must be, addressed. Assisted cognition is one application, but work is also proceeding in assisted perception and assisted action, for example, in the form of smart wheelchairs and companions for the elderly and nurses' assistants in long-term care facilities, following the lead of Alex Mihailidis and Jesse Hoey (Mihailidis et al. 2007). All of the research topics mentioned will, of course, have many obstacles to overcome. It is an exciting time to do research in AI and its applications. Perhaps some of these will serve as challenges to our younger, and not so young, researchers.

Conclusion

To recap, I have sketched the evolution of constraint satisfaction, of agents, of models of agents, of AI, and of AAAI. Given the scope of those topics, this sketch has necessarily been idiosyncratic, personal, and incomplete. However, I hope to have persuaded you of the thesis that constraint satisfaction is central to intelligent behavior.

Acknowledgements

I am most grateful to all of the students, colleagues, and collaborators who have contributed to some of the work mentioned here: Saleema Amershi, Rod Barman, Le Chang, Cristina Conati, Giuseppe Carenini, Pooyan Fazli, Gene Freuder, Joel Friedman, Stewart Kingdon, Jacek Kisynski, Byron Knoll, Jim Little, David Lowe, Valerie McRae, Jefferson Montgomery, Pinar Muyan-Özçelik, Dinesh Pai, David Poole, Michael Sahota, Fengguang Song, Robert St-Aubin, Pooja Viswanathan, Bob Woodham, Suling Yang, Ying Zhang, and Yu Zhang. The support of Carol Hamilton and all her staff made it easier to serve as AAAI president. David Leake helped with this article. I am also grateful to those who served on the AAAI Council and on the Executive Committee and to all the other volunteers who make AAAI special. Funding was provided by the Natural Sciences and Engineering Research Council of Canada and through the support of the Canada Research Chair in Artificial Intelligence.

Notes

1. AIspace (formerly known as CIspace) is an environment for learning and teaching AI featuring animations of many AI algorithms (Amershi et al. 2008, Knoll et al. 2008). The AIspace site is at www.AIspace.org.

    2. William Shakespeare, Hamlet, Act 1.

    3. www.cs.ubc.ca/~mack/RobotSoccer.htm.

    4. www.cs.ubc.ca/~mack/RobotSoccer.htm.

    5. www.fira.net/soccer/mirosot/overview.html.

6. www.tartanracing.org, cs.stanford.edu/group/roadrunner.

References

Albus, J. S. 1981. Brains, Behavior and Robotics. New York: McGraw-Hill.

Amershi, S.; Carenini, G.; Conati, C.; Mackworth, A. K.; and Poole, D. L. 2008. Pedagogy and Usability in Interactive Algorithm Visualizations: Designing and Evaluating CIspace. Interacting with Computers 20(1): 64–96.

Anderson, M., and Anderson, S. L. 2007. Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine 28(4): 15–26.

    Asimov, I. 1950. I, Robot. New York: Gnome Press.

Brooks, R. A. 1986. A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation RA-2(1): 14–23.

Brooks, R. A. 1991. Intelligence without Reason. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 569–595. San Mateo, CA: Morgan Kaufmann.

Čapek, K. 1923. R.U.R. (Rossum's Universal Robots): A Fantastic Melodrama in Three Acts and an Epilogue. Garden City, NY: Doubleday.

Cheeseman, P. 1985. In Defense of Probability. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 1002–1009. San Mateo, CA: Morgan Kaufmann.

Dresner, K., and Stone, P. 2008. A Multiagent Approach to Autonomous Intersection Management. Journal of Artificial Intelligence Research 31: 591–656.

    Feuerbach, L. A. 1854. The Essence of Christianity. London:John Chapman.

Freuder, E. C., and Mackworth, A. K. 2006. Constraint Satisfaction: An Emerging Paradigm. In Handbook of Constraint Programming, ed. F. Rossi, P. van Beek, and T. Walsh, 13–28. Amsterdam: Elsevier.

Haugeland, J. 1985. Artificial Intelligence: The Very Idea. Cambridge, MA: The MIT Press.

Hsu, F. 2002. Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton, NJ: Princeton University Press.

Kitano, H., ed. 1998. RoboCup-97: Robot Soccer World Cup I. Lecture Notes in Computer Science 1395. Heidelberg: Springer.

Knoll, B.; Kisynski, J.; Carenini, G.; Conati, C.; Mackworth, A. K.; and Poole, D. L. 2008. AIspace: Interactive Tools for Learning Artificial Intelligence. In Proceedings of the AAAI AI Education Colloquium, Technical Report WS-08-02. Menlo Park, CA: AAAI Press.

Lashley, K. S. 1951. The Problem of Serial Order in Behavior. In Cerebral Mechanisms in Behavior, ed. L. A. Jeffress, 112–136. New York: Wiley.

    Lennon, J. 1980. Beautiful Boy (Darling Boy). Song lyrics.On album Double Fantasy. New York: Geffen Records.

Liu, A. L.; Hile, H.; Kautz, H.; Borriello, G.; Brown, P. A.; Harniss, M.; and Johnson, K. 2006. Indoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments. In Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, 95–102. New York: Association for Computing Machinery.

McLuhan, M. 1964. Understanding Media: The Extensions of Man. New York: New American Library.

Mackworth, A. K. 1977. Consistency in Networks of Relations. Artificial Intelligence 8(1): 99–118.

Mackworth, A. K. 1993. On Seeing Robots. In Computer Vision: Systems, Theory and Applications, eds. A. Basu and X. Li, 1–13. Singapore: World Scientific Press.

Mackworth, A. K., and Zhang, Y. 2003. A Formal Approach to Agent Design: An Overview of Constraint-Based Agents. Constraints 8(3): 229–242.

Maxwell, J. C. 1868. On Governors. In Proceedings of the Royal Society of London, 16, 270–283. London: Royal Society.

Mihailidis, A.; Boger, J.; Candido, M.; and Hoey, J. 2007. The Use of an Intelligent Prompting System for People with Dementia. ACM Interactions 14(4): 34–37.

Miller, G. A.; Galanter, E.; and Pribram, K. H. 1960. Plans and the Structure of Behavior. New York: Holt, Rinehart, and Winston.

Mori, M. 1982. The Buddha in the Robot. Tokyo: Charles E. Tuttle Co.

Muyan-Özçelik, P., and Mackworth, A. K. 2004. Situated Robot Design with Prioritized Constraints. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS 2004), 1807–1814. Los Alamitos, CA: IEEE Computer Society.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco, CA: Morgan Kaufmann.

Pollack, M. E. 2005. Intelligent Technology for an Aging Population: The Use of AI to Assist Elders with Cognitive Impairment. AI Magazine 26(2): 9–24.

Reiter, R. 1978. On Closed World Data Bases. In Logic and Data Bases, eds. H. Gallaire and J. Minker, 119–140. New York: Plenum.

Rossi, F.; van Beek, P.; and Walsh, T., eds. 2006. Handbook of Constraint Programming. Amsterdam: Elsevier Science.

Sahota, M., and Mackworth, A. K. 1994. Can Situated Robots Play Soccer? In Proceedings of Artificial Intelligence 94, 249–254. Toronto, ON: Canadian Society for Computational Studies of Intelligence.

Shelley, M. W. 1818. Frankenstein; or, The Modern Prometheus. London: Lackington, Hughes, Harding, Mavor, and Jones.

Somerville, M. 2006. The Ethical Imagination: Journeys of the Human Spirit. Toronto: House of Anansi Press.

St-Aubin, R.; Friedman, J.; and Mackworth, A. K. 2006. A Formal Mathematical Framework for Modeling Probabilistic Hybrid Systems. Annals of Mathematics and Artificial Intelligence 37(3–4): 397–425.

Thrun, S. 2006. Winning the DARPA Grand Challenge. Invited talk presented at the Innovative Applications of Artificial Intelligence Conference (IAAI-06), Boston, Massachusetts, July 16–20.

Visser, U., and Burkhard, H.-D. 2007. RoboCup: 10 Years of Achievements and Challenges. AI Magazine 28(2): 115–130.

Waltz, D. L. 1975. Understanding Line Drawings of Scenes with Shadows. In The Psychology of Computer Vision, ed. P. H. Winston, 19–92. New York: McGraw-Hill.

Zhang, Y., and Mackworth, A. K. 1993. Constraint Programming in Constraint Nets. In Proceedings of the First Workshop on Principles and Practice of Constraint Programming, 303–312. Padua: Association for Constraint Programming.

Zhang, Y., and Mackworth, A. K. 1995. Constraint Nets: A Semantic Model for Dynamic Systems. Theoretical Computer Science 138: 211–239.

Alan K. Mackworth is a professor of computer science, Canada Research Chair in Artificial Intelligence, and the founding director of the Laboratory for Computational Intelligence at the University of British Columbia. He received a B.A.Sc. (Toronto, 1966), A.M. (Harvard, 1967), and D.Phil. (Sussex, 1974). He has worked primarily on constraint-based artificial intelligence with applications in vision, constraint satisfaction, robot soccer, and constraint-based agents. He has served as chair of the IJCAI Board of Trustees and as president of the Canadian Society for Computational Studies of Intelligence and the Association for the Advancement of Artificial Intelligence (AAAI); he is now past president of AAAI.


    !"#""#"#" !$

    "#" % ! #!!! " # # " ""# !"!%"!&' #$

    ( ) *!#"

    !