A Robust Layered Control System for a Mobile Robot




IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, March 1986.

Abstract: A new architecture for controlling mobile robots is described. Layers of control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules that communicate over low-bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher-level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system has been used to control a mobile robot wandering around unconstrained laboratory areas and computer machine rooms. Eventually it is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings, using an onboard arm to perform simple tasks.

I. INTRODUCTION

A CONTROL SYSTEM for a completely autonomous mobile robot must perform many complex information processing tasks in real time. It operates in an environment where the boundary conditions (viewing the instantaneous control problem in a classical control theory formulation) are changing rapidly. In fact the determination of those boundary conditions is done over very noisy channels, since there is no straightforward mapping between sensors (e.g., TV cameras) and the form required of the boundary conditions.

The usual approach to building control systems for such robots is to decompose the problem into a series (roughly) of functional units, as illustrated by a series of vertical slices in Fig. 1. After analyzing the computational requirements for a mobile robot we have decided to use task-achieving behaviors as our primary decomposition of the problem. This is illustrated by a series of horizontal slices in Fig. 2. As with a functional decomposition, we implement each slice explicitly, then tie them all together to form a robot control system. Our new decomposition leads to a radically different architecture for mobile robot control systems, with radically different implementation strategies plausible at the hardware level, and with a large number of advantages concerning robustness, buildability, and testability.

Manuscript revised February 3, 1986. This work was supported in part by an IBM Faculty Development Award, in part by a grant from the Systems Development Foundation, in part by an equipment grant from Motorola, and in part by the Advanced Research Projects Agency under Office of Naval Research contracts N00014-80-C-0505 and N00014-82-K-0334. The author is with the Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, USA. IEEE Log Number 8608069.

Fig. 1. Traditional decomposition of a mobile robot control system into functional modules.

Fig. 2. Decomposition of a mobile robot control system based on task-achieving behaviors. The layer labels in the figure are: reason about behavior of objects; plan changes to the world; identify objects; monitor changes; build maps; explore; wander; avoid objects. Sensors enter at one side of the figure and actuators exit at the other.

A. Requirements

We can identify a number of requirements of a control system for an intelligent autonomous mobile robot. They each put constraints on possible control systems that we may employ. They are identified as follows.

Multiple Goals: Often the robot will have multiple goals, some conflicting, which it is trying to achieve. It may be trying to reach a certain point ahead of it while avoiding local obstacles. It may be trying to reach a certain place in minimum time while conserving power reserves. Often the relative importance of goals will be context-dependent. Getting off the railroad tracks when a train is heard becomes much more important than inspecting the last ten track ties of the current track section. The control system must be responsive to high-priority goals, while still servicing necessary low-level goals (e.g., in getting off the railroad tracks, it is still important that the robot maintains its balance so it doesn't fall down).

Multiple Sensors: The robot will most likely have multiple sensors (e.g., TV cameras, encoders on steering and drive mechanisms, infrared beacon detectors, an inertial navigation system, acoustic rangefinders, infrared rangefinders, access to a global positioning satellite system, etc.). All sensors have an error component in their readings.



Furthermore, often there is no direct analytic mapping from sensor values to desired physical quantities. Some of the sensors will overlap in the physical quantities they measure. They will often give inconsistent readings, sometimes due to normal sensor error and sometimes due to the measurement conditions being such that the sensor (and subsequent processing) is used outside its domain of applicability. Often there will be no analytic characterization of the domain of applicability (e.g., under what precise conditions does the Sobel operator return valid edges?). The robot must make decisions under these conditions.

Robustness: The robot ought to be robust. When some sensors fail it should be able to adapt and cope by relying on those still functional. When the environment changes drastically it should be able to still achieve some modicum of sensible behavior, rather than sit in shock or wander aimlessly and irrationally around. Ideally it should also continue to function well when there are faults in parts of its processor(s).

Extensibility: As more sensors and capabilities are added to a robot it needs more processing power; otherwise, the original capabilities of the robot will be impaired relative to the flow of time.

B. Other Approaches

Multiple Goals: Elfes and Talukdar [4] designed a control language for Moravec's robot [11], which tried to accommodate multiple goals. It mainly achieved this by letting the user explicitly code for parallelism and to code an exception path to a special handler for each plausible case of unexpected conditions.

Multiple Sensors: Flynn [5] explicitly investigated the use of multiple sensors, with complementary characteristics (sonar is wide angle but reasonably accurate in depth, while infrared is very accurate in angular resolution but terrible in depth measurement). Her system has the virtue that if one sensor fails the other still delivers readings that are useful to the higher level processing. Giralt et al. [6] use a laser range finder for map making, sonar sensors for local obstacle detection, and infrared beacons for map calibration. The robot operates in a mode in which one particular sensor type is used at a time and the others are completely ignored, even though they may be functional. In the natural world multiple redundant sensors are abundant. For instance, [10] reports that pigeons have more than four independent orientation sensing systems (e.g., sun position compared to internal biological clock).

It is interesting that the sensors do not seem to be combined; rather, depending on the environmental conditions and operational level of sensor subsystems, the data from one sensor tends to dominate.

Robustness: The above work tries to make systems robust in terms of sensor availability, but little has been done with making either the behavior or the processor of a robot robust.

Extensibility: There are three ways this can be achieved without completely rebuilding the physical control system.

1) Excess processor power that was previously being wasted can be utilized. Clearly this is a bounded resource. 2) The processor(s) can be upgraded to an architecturally compatible but faster system. The original software can continue to run, but now excess capacity will be available and we can proceed as in the first case. 3) More processors can be added to carry the new load. Typically systems builders then get enmeshed in details of how to make all memory uniformly accessible to all processors. Usually the cost of the memory-to-processor routing system soon comes to dominate the cost of the system (the measure of cost is not important; it can be monetary, silicon area, access time delays, or something else). As a result there is usually a fairly small upper bound (on the order of hundreds for traditional style processing units; on the order of tens to hundreds of thousands for extremely simple processors) on the number of processors which can be added.

C. Starting Assumptions

Our design decisions for our mobile robot are based on the following nine dogmatic principles (six of these principles were presented more fully in [2]).

1) Complex (and useful) behavior need not necessarily be a product of an extremely complex control system. Rather, complex behavior may simply be the reflection of a complex environment [13]. It may be the observer who ascribes complexity to an organism, not necessarily its designer.

2) Things should be simple. This has two applications. a) When building a system of many parts one must pay attention to the interfaces. If you notice that a particular interface is starting to rival in complexity the components it connects, then either the interface needs to be rethought or the decomposition of the system needs redoing. b) If a particular component or collection of components solves an unstable or ill-conditioned problem, or, more radically, if its design involved the solution of an unstable or ill-conditioned problem, then it is probably not a good solution from the standpoint of robustness of the system.

3) We want to build cheap robots that can wander around human-inhabited space with no human intervention, advice, or control and at the same time do useful work. Map making is therefore of crucial importance even when idealized blueprints of an environment are available.

4) The human world is three-dimensional; it is not just a two-dimensional surface map. The robot must model the world as three-dimensional if it is to be allowed to continue cohabitation with humans.

5) Absolute coordinate systems for a robot are the source of large cumulative errors. Relational maps are more useful to a mobile robot. This alters the design space for perception systems.

6) The worlds where mobile robots will do useful work are not constructed of exact simple polyhedra. While polyhedra may be useful models of a realistic world, it is a mistake to build a special world such that the models can be exact. For this reason we will build no artificial environment for our robot.


7) Sonar data, while easy to collect, does not by itself lead to rich descriptions of the world useful for truly intelligent interactions. Visual data is much better for that purpose. Sonar data may be useful for low-level interactions such as real-time obstacle avoidance.

8) For robustness' sake the robot must be able to perform when one or more of its sensors fails or starts giving erroneous readings. Recovery should be quick. This implies that built-in self-calibration must be occurring at all times. If it is good enough to achieve our goals then it will necessarily be good enough to eliminate the need for external calibration steps. To force the issue we do not incorporate any explicit calibration steps for our robot. Rather we try to make all processing steps self-calibrating.

9) We are interested in building artificial beings: robots that can survive for days, weeks, and months, without human assistance, in a dynamic complex environment. Such robots must be self-sustaining.

II. LEVELS AND LAYERS

There are many possible approaches to building an autonomous intelligent mobile robot. As with most engineering problems, they all start by decomposing the problem into pieces, solving the subproblems for each piece, and then composing the solutions. We think we have done the first of these three steps differently from other groups. The second and third steps also differ as a consequence.

A. Levels of Competence

Typically, mobile robot builders (e.g., [3], [6], [8], [11], [12], [14], [Tsuji 84], [Crowley 85]) have sliced the problem into some subset of
* sensing
* mapping sensor data into a world representation
* planning
* task execution
* motor control.
This decomposition can be regarded as a horizontal decomposition of the problem into vertical slices. The slices form a chain through which information flows from the robot's environment, via sensing, through the robot and back to the environment, via action, closing the feedback loop (of course most implementations of the above subproblems include internal feedback loops also). An instance of each piece must be built in order to run the robot at all. Later changes to a particular piece (to improve it or extend its functionality) must either be done in such a way that the interfaces to adjacent pieces do not change, or the effects of the change must be propagated to neighboring pieces, changing their functionality, too.

We have chosen instead to decompose the problem vertically as our primary way of slicing up the problem. Rather than slice the problem on the basis of internal workings of the solution, we slice the problem on the basis of desired external manifestations of the robot control system.

To this end we have defined a number of levels of competence for an autonomous mobile robot. A level of competence is an informal specification of a desired class of behaviors for a robot over all environments it will encounter. A higher level of competence implies a more specific desired class of behaviors.

We have used the following levels of competence (an earlier version of these was reported in [1]) as a guide in our work.

0) Avoid contact with objects (whether the objects move or are stationary).
1) Wander aimlessly around without hitting things.
2) Explore the world by seeing places in the distance that look reachable and heading for them.
3) Build a map of the environment and plan routes from one place to another.
4) Notice changes in the static environment.
5) Reason about the world in terms of identifiable objects and perform tasks related to certain objects.
6) Formulate and execute plans that involve changing the state of the world in some desirable way.
7) Reason about the behavior of objects in the world and modify plans accordingly.

Notice that each level of competence includes as a subset each earlier level of competence. Since a level of competence defines a class of valid behaviors it can be seen that higher levels of competence provide additional constraints on that class.

B. Layers of Control

The key idea of levels of competence is that we can build layers of a control system corresponding to each level of competence and simply add a new layer to an existing set to move to the next higher level of overall competence.

We start by building a complete robot control system that achieves level 0 competence. It is debugged thoroughly. We never alter that system. We call it the zeroth-level control system. Next we build another control layer, which we call the first-level control system. It is able to examine data from the level 0 system and is also permitted to inject data into the internal interfaces of level 0, suppressing the normal data flow. This layer, with the aid of the zeroth, achieves level 1 competence. The zeroth layer continues to run unaware of the layer above it, which sometimes interferes with its data paths.

The same process is repeated to achieve higher levels of competence (Fig. 3). We call this architecture a subsumption architecture.

In such a scheme we have a working control system for the robot very early in the piece, as soon as we have built the first layer. Additional layers can be added later, and the initial working system need never be changed.

We claim that this architecture naturally lends itself to solving the problems for mobile robots delineated in Section I-A.
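As an informal illustration only, and not in the notation of this paper: if a layer is thought of as a set of modules plus a set of wires, then moving up one level of competence amounts to appending new modules and new wires (some of them suppressing or inhibiting existing paths) while leaving the lower layers' sets untouched. The structure and names below are invented for this sketch; the paper's actual specification language appears in Section III.

;; Illustrative only: a layer modeled as modules plus wires.
;; Adding level 1 appends new modules and new (possibly suppressing)
;; wires; the level 0 structures are never modified.
(defstruct layer
  (modules '())   ; module definitions, as in Listing 1 below
  (wires   '()))  ; connections, including suppression/inhibition wires

(defun compose-layers (&rest layers)
  "Build a complete control system from LAYERS without altering any of them."
  (make-layer
   :modules (apply #'append (mapcar #'layer-modules layers))
   :wires   (apply #'append (mapcar #'layer-wires layers))))

;; Usage sketch: level 1 only adds things; level 0 keeps running unchanged.
;; (defparameter *level-0* (make-layer :modules '(sonar collide feelforce runaway turn forward)))
;; (defparameter *level-1* (make-layer :modules '(wander avoid)
;;                                     :wires '((avoid command suppresses turn command))))
;; (compose-layers *level-0* *level-1*)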




Fig. 3. Control is layered, with higher level layers subsuming the roles of lower level layers when they wish to take control. The system can be partitioned at any level, and the layers below form a complete operational control system.

Multiple Goals: Individual layers can be working on individual goals concurrently. The suppression mechanism then mediates the actions that are taken. The advantage here is that there is no need to make an early decision on which goal should be pursued. The results of pursuing all of them to some level of conclusion can be used for the ultimate decision.

Multiple Sensors: In part we can ignore the sensor fusion problem as stated earlier, using a subsumption architecture.

Not all sensors need to feed into a central representation. Indeed, certain readings of all sensors need not feed into central representations: only those which perception processing identifies as extremely reliable might be eligible to enter such a central representation. At the same time, however, the sensor values may still be being used by the robot. Other layers may be processing them in some fashion and using the results to achieve their own goals, independent of how other layers may be scrutinizing them.

Robustness: Multiple sensors clearly add to the robustness of a system when their results can be used intelligently. There is another source of robustness in a subsumption architecture. Lower levels that have been well debugged continue to run when higher levels are added. Since a higher level can only suppress the outputs of lower levels by actively interfering with replacement data, in the cases that it cannot produce results in a timely fashion the lower levels will still produce sensible results, albeit at a lower level of competence.

Extensibility: An obvious way to handle extensibility is to make each new layer run on its own processor. We will see below that this is practical, as there are in general fairly low bandwidth requirements on communication channels between layers. In addition we will see that the individual layers can easily be spread over many loosely coupled processors.

C. Structure of Layers

But what about building each individual layer? Don't we need to decompose a single layer in the traditional manner? This is true to some extent, but the key difference is that we don't need to account for all desired perceptions and processing and generated behaviors in a single decomposition. We are free to use different decompositions for different sensor-set task-set pairs.

We have chosen to build layers from a set of small processors that send messages to each other. Each processor is a finite state machine with the ability to hold some data structures.

Processors send messages over connecting wires. There is no handshaking or acknowledgement of messages. The processors run completely asynchronously, monitoring their input wires and sending messages on their output wires. It is possible for messages to get lost; it actually happens quite often. There is no other form of communication between processors; in particular there is no shared global memory.

All processors (which we refer to as modules) are created equal in the sense that within a layer there is no central control. Each module merely does its thing as best it can.

Inputs to modules can be suppressed and outputs can be inhibited by wires terminating from other modules. This is the mechanism by which higher level layers subsume the role of lower levels.
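A hedged sketch of the message discipline just described (single-slot buffers, latest value wins, no acknowledgement), using names invented here rather than taken from the paper:

;; Sketch of a lossy, single-element input buffer.  A new arrival simply
;; overwrites whatever was there; nothing is queued and no acknowledgement
;; is ever sent back to the writer.
(defstruct input-line
  (message nil)    ; most recently arrived message, or NIL
  (fresh-p nil))   ; T if a message arrived since the last inspection

(defun deliver (line msg)
  "Asynchronous delivery: overwrite the buffer, possibly losing the old message."
  (setf (input-line-message line) msg
        (input-line-fresh-p line) t)
  (values))

(defun inspect-line (line)
  "Read the latest message and clear the freshness latch."
  (prog1 (input-line-message line)
    (setf (input-line-fresh-p line) nil)))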

III. A ROBOT CONTROL SYSTEM SPECIFICATION LANGUAGE

There are two aspects to the components of our layered control architecture. One is the internal structure of the modules, and the second is the way in which they communicate. In this section we flesh out the details of the semantics of our modules and explain a description language for them.

A. Finite State Machines

Each module is a finite state machine, augmented with some instance variables, which can actually hold Lisp data structures.

Each module has a number of input lines and a number of output lines. Input lines have single-element buffers. The most recently arrived message is always available for inspection. Messages can be lost if a new one arrives on an input line before the last was inspected. There is a distinguished input to each module called reset. Each state is named. When the system first starts up all modules start in the distinguished state named NIL. When a signal is received on the reset line the module switches to state NIL. A state can be specified as one of four types.

Output: An output message, computed as a function of the module's input buffers and instance variables, is sent to an output line. A new specified state is then entered.

Side effect: One of the module's instance variables is set to a new value computed as a function of its input buffers and variables. A new specified state is then entered.

Conditional dispatch: A predicate on the module's instance variables and input buffers is computed and, depending on the outcome, one of two subsequent states is entered.

  • 7/30/2019 A robust Layer Control System for A Mobile Robot

    5/10

Event dispatch: A sequence of pairs of conditions and states to branch to is monitored until one of the events is true. The events are combinations of arrivals of messages on input lines and the expiration of time delays.

An example of a module defined in our specification language is the Avoid module in Listing 1.

Listing 1. Avoid module in Lisp.

(defmodule avoid 1
  :inputs (force heading)
  :outputs (command)
  :instance-vars (resultforce)
  :states
  ((nil (event-dispatch (and force heading) plan))
   (plan (setf resultforce (select-direction force heading))
         go)
   (go (conditional-dispatch (significant-force-p resultforce 1.0)
                             start
                             nil))
   (start (output command (follow-force resultforce))
          nil)))
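The helper functions named in Listing 1 are ordinary Lisp functions whose bodies are not given in the paper; the paragraph that follows describes their roles. As a minimal, illustrative sketch only, assuming forces and headings are represented as (magnitude . direction-in-radians) pairs, implementations with these names might look like this (the bodies are ours, not the author's):

;; Hedged sketch of the Lisp helpers referenced in Listing 1.
;; Assumption: a force or heading is a cons (MAGNITUDE . DIRECTION-IN-RADIANS).
(defun polar+ (a b)
  "Sum two polar vectors by converting through Cartesian coordinates."
  (let ((x (+ (* (car a) (cos (cdr a))) (* (car b) (cos (cdr b)))))
        (y (+ (* (car a) (sin (cdr a))) (* (car b) (sin (cdr b))))))
    (cons (sqrt (+ (* x x) (* y y))) (atan y x))))

(defun select-direction (force heading)
  "Combine the repulsive force with the desired heading, treated as a motive force."
  (polar+ force heading))

(defun significant-force-p (resultforce threshold)
  "True when the resulting motion would be worth commanding at all."
  (> (car resultforce) threshold))

(defun follow-force (resultforce)
  "Convert the selected direction and magnitude into a motor command.
The command format here (a plist) is invented for illustration."
  (list :turn (cdr resultforce) :forward (car resultforce)))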

Here, select-direction, significant-force-p, and follow-force are all Lisp functions, while setf is the modern Lisp assignment special form.

The force input line carries a force with magnitude and direction found by treating each point found by the sonars as the site of a repulsive force decaying as the square of distance. The function select-direction takes this and combines it with the input on the heading line, considered as a motive force. It selects the instantaneous direction of travel by summing the forces acting on the robot. (This simple technique computes the tangent to the minimum energy path computed by [9].)

The function significant-force-p checks whether the resulting force is above some threshold; in this case it determines whether the resulting motion would take less than a second. The dispatch logic then ignores such motions. The function follow-force converts the desired direction and force magnitude into motor velocity commands.

This particular module is part of the level 1 control system (as indicated by the argument 1 following avoid, the name of the module), which is described in Section IV-B. It essentially does local navigation, making sure obstacles are avoided by diverting a desired heading away from obstacles. It does not deliver the robot to a desired location; that is the task of level 2 competence.

B. Communication

Fig. 4 shows the best way to think about these finite state modules for the purposes of communication. They have some input lines and some output lines. An output line from one module is connected to input lines of one or more other modules.

Footnote: The exact semantics of event dispatch are as follows. After an event dispatch is executed all input lines are monitored for message arrivals. When the next event dispatch is executed it has access to latches which indicate whether new messages arrived on each input line. Each condition is evaluated in turn. If it is true then the dispatch to the new state happens. Each condition is an and/or expression on the input line latches. In addition, condition expressions can include delay terms, which become true a specified amount of time after the beginning of the execution of the event dispatch. An event dispatch waits until one of its condition expressions is true.
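To make the footnoted event-dispatch semantics concrete, here is a hedged single-step sketch; it ignores real concurrency, and the latch and delay representations are assumptions made for the sketch rather than the paper's:

;; Sketch of evaluating one event-dispatch clause.
;; LATCHES is an alist mapping input-line names to T/NIL ("did a new
;; message arrive since the dispatch began?"); ELAPSED is seconds since
;; the dispatch began, used for delay terms of the form (:delay SECONDS).
(defun event-condition-true-p (condition latches elapsed)
  (cond ((null condition) nil)
        ((symbolp condition) (cdr (assoc condition latches)))
        ((eq (first condition) :delay) (>= elapsed (second condition)))
        ((eq (first condition) 'and)
         (every (lambda (c) (event-condition-true-p c latches elapsed))
                (rest condition)))
        ((eq (first condition) 'or)
         (some (lambda (c) (event-condition-true-p c latches elapsed))
               (rest condition)))))

(defun event-dispatch-step (pairs latches elapsed)
  "Return the state to branch to, or NIL if the module should keep waiting.
PAIRS is a list of (CONDITION NEXT-STATE) clauses checked in order."
  (loop for (condition next-state) in pairs
        when (event-condition-true-p condition latches elapsed)
          return next-state))

For the NIL state of Listing 1 the clause list would be (((and force heading) plan)); once latches for both force and heading are set, the step function returns plan.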


Fig. 4. A module has input and output lines. Input signals can be suppressed and replaced with the suppressing signal. Output signals can be inhibited. A module can also be reset to state NIL.

One can think of these lines as wires, each with sources and a destination. Additionally, outputs may be inhibited, and inputs may be suppressed.

An extra wire can terminate (i.e., have its destination) at an output site of a module. If any signal travels along this wire it inhibits any output message from the module along that line for some predetermined time. Any messages sent by the module to that output during that time period are lost.

Similarly, an extra wire can terminate at an input site of a module. Its action is very similar to that of inhibition, but additionally, the signal on this wire, besides inhibiting signals along the usual path, actually gets fed through as the input to the module. Thus it suppresses the usual input and provides a replacement. If more than one suppressing wire is present they are essentially OR-ed together. For both suppression and inhibition we write the time constants inside the circles.

In our specification language we write wires as a source (i.e., an output line) followed by a number of destinations (i.e., input lines). For instance the connection to the force input of the Avoid module might be the wire defined as

(defwire 1 (feelforce force) (avoid force))

This links the force output of the Feelforce module to the force input of the Avoid module in the level one control system. Suppression and inhibition can also be described with a small extension to the syntax above. Below we see the suppression of the command input of the Turn module, a level 0 module, by a signal from the level 1 module Avoid.

(defwire 1 (avoid command) ((suppress (turn command) 20.0)))
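A hedged sketch of what a suppressor or inhibitor node with a time constant might do; the gate structure and the treatment of time are assumptions of the sketch, not taken from the paper:

;; Sketch of suppressor and inhibitor nodes with time constants (seconds).
;; A message on the side wire arms the gate for TIME-CONSTANT seconds;
;; while armed, an inhibitor discards the module's output and a suppressor
;; substitutes the side message for the module's normal input.
(defstruct gate
  (time-constant 0.0)
  (active-until  0.0)   ; clock time at which the side signal expires
  (side-message  nil))

(defun side-signal (gate message now)
  "A message arriving on the extra wire arms the gate."
  (setf (gate-active-until gate) (+ now (gate-time-constant gate))
        (gate-side-message gate) message))

(defun inhibit (gate output now)
  "Inhibitor: pass OUTPUT through unless the gate is currently armed."
  (if (< now (gate-active-until gate)) nil output))

(defun suppress (gate normal-input now)
  "Suppressor: replace NORMAL-INPUT with the side message while armed."
  (if (< now (gate-active-until gate)) (gate-side-message gate) normal-input))

Under this reading, the defwire above would arm a 20-second suppressor on the Turn module's command input whenever the Avoid module emits a command.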

In a similar manner a signal can be connected to the reset input of a module.

IV. A ROBOT CONTROL SYSTEM INSTANCE

We have implemented a mobile robot control system to achieve levels 0 and 1 competence as defined above, and have started implementation of level 2, bringing it to a stage which exercises the fundamental subsumption idea effectively. We need more work on an early vision algorithm to complete level 2.

A. Zeroth Level

The lowest level layer of control makes sure that the robot does not come into contact with other objects. It thus achieves level 0 competence (Fig. 5). If something approaches the robot it will move away. If in the course of moving itself it is about to collide with an object it will halt.




    Fig. 5. Level 0 control system.

Together these two tactics are sufficient for the robot to flee from moving obstacles, perhaps requiring many motions, without colliding with stationary obstacles. The combination of the tactics allows the robot to operate with very coarsely calibrated sonars and a wide range of repulsive force functions. Theoretically, the robot is not invincible of course, and a sufficiently fast-moving object or a very cluttered environment might result in a collision. Over the course of a number of hours of autonomous operation, our physical robot (see Section V-B) has not collided with either a moving or fixed obstacle. The moving obstacles have, however, been careful to move slowly.

The Turn and Forward modules communicate with the actual robot. They have extra communication mechanisms, allowing them to send and receive commands to and from the physical robot directly. The Turn module receives a heading specifying an in-place turn angle followed by a forward motion of a specified magnitude. It commands the robot to turn (and at the same time sends a busy message on an additional output channel described in Fig. 7) and on completion passes on the heading to the Forward module (and also reports the shaft encoder readings on another output line shown in Fig. 7). The Turn module then goes into a wait state ignoring all incoming messages. The Forward module commands the robot to move forward but halts the robot if it receives a message on its halt input line during the motion. As soon as the robot is idle, it sends out the shaft encoder readings. The message acts as a reset for the Turn module, which is then once again ready to accept a new motion command. Notice that any heading commands sent to the Turn module during transit are lost.

The Sonar module takes a vector of sonar readings, filters them for invalid readings, and effectively produces a robot-centered map of obstacles in polar coordinates.

The Collide module monitors the sonar map and if it detects objects dead ahead, it sends a signal on the halt line to the Motor module. The Collide module does not know or care whether the robot is moving. Halt messages sent while the robot is stationary are essentially lost.

The Feelforce module sums the results of considering each detected object as a repulsive force, generating a single resultant force.
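As an illustration only of the Feelforce and Collide computations described above: the sonar-map representation, the inverse-square magnitude, and both thresholds below are assumptions of the sketch rather than values from the paper.

;; Hedged sketch of Feelforce and Collide.  Assumption: the sonar map is a
;; list of (RANGE . BEARING) readings in robot-centered polar coordinates,
;; with bearing 0 straight ahead.
(defun feelforce (sonar-map)
  "Sum an inverse-square repulsive contribution from every reading,
returning a single resultant (MAGNITUDE . DIRECTION) force."
  (let ((x 0.0) (y 0.0))
    (dolist (reading sonar-map)
      (let ((magnitude (/ 1.0 (max 1e-3 (* (car reading) (car reading)))))
            (direction (+ pi (cdr reading))))   ; push away from the object
        (incf x (* magnitude (cos direction)))
        (incf y (* magnitude (sin direction)))))
    (cons (sqrt (+ (* x x) (* y y))) (atan y x))))

(defparameter *halt-range* 0.5)             ; meters, invented threshold
(defparameter *dead-ahead-half-angle* 0.3)  ; radians, invented threshold

(defun collide-p (sonar-map)
  "True when some reading is both nearly straight ahead and close,
in which case Collide would send a halt message."
  (some (lambda (reading)
          (and (< (abs (cdr reading)) *dead-ahead-half-angle*)
               (< (car reading) *halt-range*)))
        sonar-map))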

The Runaway module monitors the force produced by the sonar-detected obstacles and sends commands to the Turn module if it ever becomes significant.

Fig. 5 gives a complete description of how the modules are connected together.

B. First Level

The first level layer of control, when combined with the zeroth, imbues the robot with the ability to wander around aimlessly without hitting obstacles. This was defined earlier as level 1 competence. This control level relies to a large degree on the zeroth level's aversion to hitting obstacles. In addition it uses a simple heuristic to plan ahead a little in order to avoid potential collisions which would need to be handled by the zeroth level.

The Wander module generates a new heading for the robot every ten seconds or so.

The Avoid module, described in more detail in Section III, takes the result of the force computation from the zeroth level and combines it with the desired heading to produce a modified heading, which usually points in roughly the right direction, but is perturbed to avoid any obvious obstacles. This computation implicitly subsumes the computations of the Runaway module, in the case that there is also a heading to consider. In fact the output of the Avoid module suppresses the output from the Runaway module as it enters the Motor module.

Fig. 6 gives a complete description of how the modules are connected together. Note that it is simply Fig. 5 with some more modules and wires added.

C. Second Level

Level 2 is meant to add an exploratory mode of behavior to the robot, using visual observations to select interesting places to visit. A vision module finds corridors of free space. Additional modules provide a means of position servoing the robot along the corridor despite the presence of local obstacles on its path (as detected with the sonar sensing system). The wiring diagram is shown in Fig. 7. Note that it is simply Fig. 6 with some more modules and wires added.

The Status module monitors the Turn and Forward modules. It maintains one status output which sends either hi or lo messages to indicate whether the robot is busy. In addition, at the completion of every turn and roll forward combination it sends out a combined set of shaft encoder readings.



Fig. 6. Level 0 control system augmented with the level 1 system.

Fig. 7. Level 0 and 1 control systems augmented with the level 2 system.

The Whenlook module monitors the busy line from the Status module, and whenever the robot has been sitting idle for a few seconds it decides it is time to look for a corridor to traverse. It inhibits wandering so it can take some pictures and process them without wandering away from its current location, and resets the Pathplan and Integrate modules. This latter action ensures that the robot will know how far it has moved from its observation point should any Runaway impulses perturb it.

The Look module initiates the vision processing and waits for a candidate freeway. It filters out poor candidates and passes any acceptable one to the Pathplan module.

The Stereo module is supposed to use stereo TV images [7], which are obtained by the robot, to find a corridor of free space. At the time of writing the final version of this module had not been implemented. Instead, both in simulation and on the physical robot, we have replaced it with a sonar-based corridor finder.



The Integrate module accumulates reports of motions from the Status module and always sends its most recent result out on its integral line. It gets restarted by application of a signal to its reset input.

The Pathplan module takes a goal specification (in terms of an angle to turn and a distance to travel) and attempts to reach that goal. To do this, it sends headings to the Avoid module, which may perturb them to avoid local obstacles, and monitors its integral input, which is an integration of actual motions. The messages to the Avoid module suppress random wanderings of the robot, so long as the higher level planner remains active. When the position of the robot is close to the desired position (the robot is unaware of control errors due to wheel slippage etc., so this is a dead-reckoning decision) it terminates.

The current wiring of the second level of control is shown in Fig. 7, which augments the two lower level control systems. The zeroth and first layers still play an active role during normal operation of the second layer.

V. PERFORMANCE

The control system described here has been used extensively to control both a simulated robot and an actual physical robot wandering around a cluttered laboratory and a machine room.

A. A Simulated Robot

The simulation tries to simulate all the errors and uncertainties that exist in the world of the real robot. When commanded to turn through angle α and travel distance d, the simulated robot actually turns through angle α + δα and travels distance d + δd. Its sonars can bounce off walls multiple times, and even when they do return they have a noise component in the readings that models thermal and humidity effects. We feel it is important to have such a realistic simulation. Anything less leads to incorrect control algorithms.

The simulator runs off a clock and runs at the same rate as would the actual robot. It actually runs on the same processor that is simulating the subsumption architecture. Together they are nevertheless able to perform a real-time simulation of the robot and its control and also drive graphics displays of robot state and module performance monitors. Fig. 8 shows the robot (which itself is not drawn) receiving sonar reflections at some of its 12 sensors. Other beams did not return within the time allocated for data collection. The beams are being reflected by various walls. There is a small bar in front of the robot perpendicular to the direction the robot is pointing.

Fig. 9 shows an example world in two-dimensional projection. The simulated robot with a first level control system connected was allowed to wander from an initial position. The squiggly line traces out its path. Note that it was wandering aimlessly and that it hit no obstacles.

Fig. 10 shows two examples of the same scene and the motion of the robot with the second level control system connected. In these cases the Stereo module was supplanted with a situation-specific module, which gave out two precise corridor descriptions.
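A toy sketch of the actuation error model just described, using the uniformly distributed ±5 percent error quoted below for these runs; the state representation and function names are invented here, not taken from the simulator:

;; Sketch of the simulator's actuation noise: a commanded turn A and
;; travel D actually become A + dA and D + dD, here drawn uniformly
;; from +/- 5 percent of the commanded values.
(defun perturb (value fraction)
  (+ value (* value fraction (- (random 2.0) 1.0))))

(defun simulate-move (x y theta commanded-turn commanded-distance)
  "Return the new (X Y THETA) after a noisy in-place turn and forward roll."
  (let* ((actual-turn (perturb commanded-turn 0.05))
         (actual-dist (perturb commanded-distance 0.05))
         (new-theta (+ theta actual-turn)))
    (list (+ x (* actual-dist (cos new-theta)))
          (+ y (* actual-dist (sin new-theta)))
          new-theta)))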


Fig. 8. Simulated robot receives 12 sonar readings. Some sonar beams glance off walls and do not return within a certain time.

Fig. 9. Under levels 0 and 1 control the robot wanders around aimlessly. It does not hit obstacles.

While achieving the goals of following these corridors, the lower level wandering behavior was suppressed. However, the obstacle avoiding behavior of the lower levels continued to function; in both cases the robot avoided the square obstacle. The goals were not reached exactly. The simulator models a uniformly distributed error of ±5 percent in both turn and forward motion. As soon as the goals had been achieved satisfactorily the robot reverted to its wandering behavior.

B. A Physical Robot

We have constructed a mobile robot, shown in Fig. 11. It is about 17 inches in diameter and about 30 inches from the ground to the top platform. Most of the processing occurs offboard on a Lisp machine.



Fig. 10. (a) With level 2 control the robot tries to achieve commanded goals. The nominal goals are the two straight lines. (b) After reaching the second goal, since there are no new goals forthcoming, the robot reverts to aimless level 1 behavior.

The drive mechanism was purchased from Real World Interface of Sudbury, MA.

Three parallel drive wheels are steered together. The two motors are servoed by a single microprocessor. The robot body is attached to the steering mechanism and always points in the same direction as the wheels. It can turn in place (actually it inscribes a circle about 1 cm in diameter).

Currently installed sensors are a ring of twelve Polaroid sonar time-of-flight range sensors and two Sony CCD cameras. The sonars are arranged symmetrically around the rotating body of the robot. The cameras are on a tilt head (pan is provided by the steering motors). We plan to install feelers that can sense objects at ground level about six inches from the base extremities.

Fig. 11. The M.I.T. AI Lab mobile robot.

A central cardcage contains the main onboard processor, an Intel 8031. It communicates with offboard processors via a 12 kbit/s duplex radio link. The radios are modified Motorola digital voice encryption units. Error correction cuts the effective bit rate to less than half the nominal rating. The 8031 passes commands down to the motor controller processor and returns encoder readings. It controls the sonars and the tilt head, and it switches the cameras through a single-channel video transmitter mounted on top of the robot. The latter transmits a standard TV signal to a Lisp machine equipped with a demodulator and frame grabber.

The robot has spent a few hours wandering around a laboratory and a machine room.

Under level 0 control, the robot finds a large empty space and then sits there contented until a moving obstacle approaches. Two people together can successfully herd the robot just about anywhere, through doors or between rows of disk drives, for instance.

When level 1 control is added the robot is no longer content to sit in an open space. After a few seconds it heads off in a random direction. Our uncalibrated sonars and obstacle repulsion functions make it overshoot a little, to locations where the Runaway module reacts. It would be interesting to make this the basis of adaptation of certain parameters.

Under level 2 a sonar-based corridor finder usually finds the most distant point in the room. The robot heads off in that direction. People walking in front of the robot cause it to detour, but the robot still gets to the initially desired goal, even when it involves squeezing between closely spaced obstacles. If the sonars are in error and a goal is selected beyond a wall, the robot usually ends up in a position where the attractive force of the goal is within a threshold (used by Avoid) of the repulsive forces of the wall. At this point Avoid does not issue any heading, as it would be for some trivial motion of the robot. The robot sits still, defeated by the obstacle. The Whenlook module, however, notices that the robot is idle and initiates a new scan for another corridor of free space to follow.



C. Implementation Issues

While we have been able to simulate sufficient processors on a single Lisp machine up until now, that capability will soon pass as we bring on line our vision work (the algorithms have been debugged as traditional serial algorithms, but we plan on re-implementing them within the subsumption architecture). Building the architecture in custom chips is a long-term goal.

One of the motivations for developing the layered control system was extensibility of processing power. The fact that it is decomposed into asynchronous processors with low-bandwidth communication and no shared memory should certainly assist in achieving that goal. New processors can simply be added to the network by connecting their inputs and outputs at appropriate places; there are no bandwidth or synchronization considerations in such connections.

The finite state processors need not be large. Sixteen states is more than sufficient for all modules we have written so far. (Actually, eight states are sufficient under the model of the processors we have presented here and used in our simulations. However, we have refined the design somewhat towards a gate-level implementation, and there we use simpler, more numerous states.) Many such processors could easily be packed on a single chip.

The Lisp programs that are called by the finite state machines are all rather simple. We believe it is possible to implement each of them with a simple network of comparators, selectors, polar coordinate vector adders, and monotonic function generators. The silicon area overhead for each module would probably not be larger than that required for the finite state machine itself.

VI. CONCLUSION

The key ideas in this paper are the following.

* The mobile robot control problem can be decomposed in terms of behaviors rather than in terms of functional modules.
* It provides a way to incrementally build and test a complex mobile robot control system.
* Useful parallel computation can be performed on a low-bandwidth loosely coupled network of asynchronous simple processors. The topology of that network is relatively fixed.
* There is no need for a central control module of a mobile robot. The control system can be viewed as a system of agents, each busy with their own solipsist world.

Besides leading to a different implementation strategy, it is also interesting to note the way the decomposition affected the capabilities of the robot control system we have built. In particular, our control system deals with moving objects in the environment at the very lowest level, and it has a specific module (Runaway) for that purpose. Traditionally, mobile robot projects have delayed handling moving objects in the environment beyond the scientific life of the project.

Note: A drawback of the presentation in this paper was merging the algorithms for control of the robot with the implementation medium.

We felt this was necessary to convince the reader of the utility of both. It is unlikely that the subsumption architecture would appear to be useful without a clear demonstration of how a respectable and useful algorithm can run on it. Mixing the two descriptions as we have done demonstrates the proposition.

ACKNOWLEDGMENT

Tomás Lozano-Pérez, Eric Grimson, Jon Connell, and Anita Flynn have all provided helpful comments on earlier drafts of this paper.

REFERENCES

[1] R. A. Brooks, "Aspects of mobile robot visual map making," in Robotics Research 2, Hanafusa and Inoue, Eds. Cambridge, MA: MIT Press, 1984, pp. 369-375.
[2] R. A. Brooks, "Visual map making for a mobile robot," in Proc. 1985 IEEE Conf. Robotics and Automation, pp. 824-829.
[3] J. L. Crowley, "Navigation for an intelligent mobile robot," IEEE J. Robotics Automat., vol. RA-1, no. 1, pp. 31-41, Mar. 1985.
[4] A. Elfes and S. N. Talukdar, "A distributed control system for the CMU rover," in Proc. IJCAI, 1983, pp. 830-833.
[5] A. Flynn, "Redundant sensors for mobile robot navigation," M.S. thesis, Department of Electrical Engineering and Computer Science, M.I.T., Cambridge, MA, July 1985.
[6] G. Giralt, R. Chatila, and M. Vaisset, "An integrated navigation and motion control system for autonomous multisensory mobile robots," in Robotics Research 1, Brady and Paul, Eds. Cambridge, MA: MIT Press, 1983, pp. 191-214.
[7] W. E. L. Grimson, "Computational experiments with a feature based stereo algorithm," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-7, pp. 17-34, Jan. 1985.
[8] Y. Kanayama, "Concurrent programming of intelligent robots," in Proc. IJCAI, 1983, pp. 834-838.
[9] O. Khatib, "Dynamic control of manipulators in operational space," in Sixth IFToMM Congress on Theory of Machines and Mechanisms, Dec. 1983.
[10] M. L. Kreithen, "Orientational strategies in birds: a tribute to W. T. Keeton," in Behavioral Energetics: The Cost of Survival in Vertebrates. Columbus, OH: Ohio State University, 1983, pp. 3-28.
[11] H. P. Moravec, "The Stanford cart and the CMU rover," Proc. IEEE, vol. 71, pp. 872-884, July 1983.
[12] N. J. Nilsson, "Shakey the robot," SRI AI Center, Tech. Note 323, Apr. 1984.
[13] H. A. Simon, The Sciences of the Artificial. Cambridge, MA: MIT Press, 1969.
[14] S. Tsuji, "Monitoring of a building environment by a mobile robot," in Robotics Research 2, Hanafusa and Inoue, Eds. Cambridge, MA: MIT Press, 1985, pp. 349-356.