Auton Robot
DOI 10.1007/s10514-008-9092-9

Modeling dynamic scenarios for local sensor-based motion planning

Luis Montesano · Javier Minguez · Luis Montano

Received: 10 April 2007 / Accepted: 2 April 2008
© Springer Science+Business Media, LLC 2008

Abstract This paper addresses the modeling of the static and dynamic parts of the scenario and how to use this information with a sensor-based motion planning system. The contribution in the modeling aspect is a formulation of the detection and tracking of mobile objects and the mapping of the static structure in such a way that the nature (static/dynamic) of the observations is included in the estimation process. The algorithm provides a set of filters tracking the moving objects and a local map of the static structure constructed online. In addition, this paper discusses how this modeling module is integrated in a real sensor-based motion planning system, taking advantage selectively of the dynamic and static information. The experimental results confirm that the complete navigation system is able to move a vehicle in unknown and dynamic scenarios. Furthermore, the system overcomes many of the limitations of previous systems associated with the ability to distinguish the nature of the parts of the scenario.

Keywords Mobile robots · Mapping dynamic environments · Sensor-based motion planning

1 Introduction

Autonomous robots are currently being deployed in real environments such as hospitals, schools or museums, developing health-care or tour-guide applications, for example.

L. Montesano (✉)
Instituto de Sistemas e Robotica, Instituto Superior Tecnico, Lisboa, Portugal
e-mail: [email protected]

J. Minguez · L. Montano
Dpto de Informática e Ingeniería de Sistemas, Universidad de Zaragoza, Zaragoza, Spain

A common characteristic of these applications is the presence of people or other moving objects. These entities make the environment dynamic and unpredictable and have an impact on the performance of many of the basic robotic tasks. One of these tasks, common to many robotic applications, is mobility. In particular, dynamic scenarios involve two aspects in the motion generation context: (i) modeling the scenario, and (ii) integrating this information within the motion layer. This paper addresses both issues.

The majority of existing motion systems address dynamic scenarios by using the sensor observations at a high rate compared to the obstacle dynamics. In other words, they assume a static but rapidly sensed scenario, which allows fast reactions to the changes induced by the evolution of the moving objects. Although this assumption works well in many cases (obstacles moving at low speed), in realistic applications it is no longer valid. This is because in reality the objects' motion is arbitrary and, even assuming low speeds, in many cases these systems fail. For instance, Fig. 1 shows two common and simple situations where the static and dynamic parts of the scenario have to be discriminated, modeled and consequently used within the motion generation layer. Explicitly dealing with dynamic objects is a must to improve the robustness of motion systems.

In this work we are interested in those applications where the scenarios are dynamic and unpredictable and thus require rapid reactions from the vehicle. For instance, consider a robotic wheelchair (Fig. 2). In this type of application, the user places goal locations that the wheelchair autonomously attains. In this context, the goals are usually in the field of view of the user, that is, in the close vicinity of the robot. This is important to bound the spatial domain of the motion generation. In general, the motion task has usually been addressed from a global or local point of view (i.e. global or local motion systems).



Fig. 1 These figures depict two examples that illustrate the importance of modeling and using the dynamic and static parts of the scenario in the motion layer. The points are laser measurements. Situations where (a) the robot and a dynamic obstacle move in a long corridor, and (b) the robot moves toward a door that is temporarily blocked by a moving obstacle. Without a distinction between static and dynamic obstacles, both the corridor and the door seem to be blocked, and every motion layer using the sensor measurements without processing would fail. To solve both situations, we need to construct a model of the static and dynamic parts of the scenario and use them consequently in the motion layer (adapting the motion to the object dynamics)

For full autonomous operation both are required since they are complementary; however, their competences are different and related to the spatial domain and the reaction time. In short, the larger the spatial domain, the higher the reaction time due to the computational requirements (Arkin 1999).

On the one hand, global systems address solutions with large spatial domains. In static environments, successful global mapping and planning has been demonstrated, even though the computational requirements increase with the size of the scenario. Dynamic scenarios impose additional difficulties, since it has been proved that the motion planning problem in the presence of dynamic obstacles is NP-hard (Canny and Reif 1987), even in simple cases such as a point robot and convex polygonal moving obstacles. In other words, global mapping and planning are time-consuming operations, especially in dynamic scenarios. Thus, they are not adapted to high-rate perception-action operation.

Fig. 2 A child using an autonomous robotic wheelchair in a crowded and populated scenario. The number and dynamics of the moving obstacles is unpredictable. To deal with the motion aspect in these scenarios is the context of this work

On the other hand, in local systems the domain is usually bounded to achieve high-rate perception-action schemes (working within the control loop). Furthermore, the motion problem in dynamic and unpredictable scenarios is local in nature, since: (i) it makes no sense to maintain a map of dynamic objects observed a long time ago (e.g. two rooms away from the current location); and (ii) moving obstacles can modify the route arbitrarily (e.g. humans), constantly invalidating plans. That is why we focus our attention on local motion systems. These systems work within a high-frequency perception-action scheme. Real time is achieved by limiting the model size (used to plan the motions) and relaxing some constraints of the planner (to speed it up). The consequences of these design choices are: (i) the maximum reach of the motion solution is the model size, and (ii) the system might fail in some rare situations due to the under-constrained motion planning strategies. Despite these limitations, local systems are able to compute robust local motion in the large majority of cases (see Schlegel 2004 for a discussion). Additionally, as discussed before, these local techniques can be combined with global techniques, improving the behavior in long-term missions. In particular, in the robotic wheelchair application, the high-level (global) competences may rely on a human while the machine addresses the local issues. This paper focuses on the construction of local models in dynamic environments and their integration with local motion planning systems.

The paper is organized as follows. Section 2 describes the related work and the contributions of this work. Section 3 presents the modeling of the static and dynamic scenarios, and in Sect. 4 this module is integrated with a local motion planning system. In Sect. 5 we describe the experimental results that validate the modeling and the motion generation in dynamic scenarios. Section 6 draws the conclusions.


2 Related work and contributions

This paper focuses on two aspects of the design of motion systems in dynamic scenarios: (i) to appropriately model the scenario, and (ii) to consequently integrate and use this model in the motion layer. Thus, we articulate this section in these two directions and, finally, we outline the contributions of this work.

2.1 Modeling dynamic scenarios

Modeling dynamic scenarios has at least two aspects: the modeling of the static parts of the environment and the identification and tracking of the moving objects. On the one hand, the modeling of static scenarios has been extensively studied. The proposed algorithms include incremental maximum likelihood mapping techniques (Hähnel 2004; Gutmann and Konolige 1999; Lu and Milios 1997a), batch mapping algorithms (Hähnel et al. 2003) or online Simultaneous Localization and Map Building (SLAM) (Cheeseman and Smith 1986; Castellanos and Tardós 1999; Thrun et al. 2005). Some of these SLAM algorithms are able to deal with moderately dynamic landmarks using exponential decays (Andrade-Cetto and Sanfeliu 2002) or treating them as parameters of a regression function and estimating their maximum likelihood values over time (Martinez-Cantin et al. 2007). On the other hand, the Tracking of Moving Objects (TMO) is also a well-studied problem (Bar-Shalom et al. 2001; Blackman and Popoli 1999). However, a robust modeling of dynamic environments requires performing both tasks at the same time. This is because the robot position error affects the classification of the measurements and, consequently, the map construction and the tracking of the moving objects.

The modeling techniques that address both issues simultaneously can be roughly divided into global and local techniques. On the one hand, there are global techniques that address the mapping and tracking problem simultaneously. For example, Wang et al. (2007) present a rigorous formulation of the TMO-SLAM problem assuming a known classification of the observations into static and dynamic. The problem is factorized into an independent SLAM and an independent tracking. In the implementation, the classification is based on the violation of the free space in a local dense grid map, and the tracking on a set of extended Kalman filters (EKF). In (Hähnel et al. 2002), a feature-based approach is used to detect the moving objects in the range profile of the laser scans. Next, Joint Probabilistic Data Association particle filters (Schulz et al. 2001) track the moving objects and a probabilistic SLAM technique builds the map. The previous methods do not take into account the uncertainty of the robot motion in the classification step. Thus, difficulties may arise in the presence of large odometry errors due to misclassification, since these errors affect the precision and the convergence of the algorithms.

Incorporating the classification process within the estimation process is hard due to the combination of discrete and continuous hidden variables, and results in computationally intensive algorithms. For instance, in (Hähnel et al. 2003), an Expectation Maximization based algorithm filters the dynamic measurements that do not match the current model of the static parts while building a map of the environment. However, this technique is not well suited for real-time motion generation because of its batch nature and because it does not explicitly model the dynamic features. In (Biswas et al. 2002), an occupancy grid map technique was proposed to model non-stationary environments. The focus is on learning grid representations for objects that change their positions over long periods, such as chairs or tables. The algorithm also uses Expectation Maximization to find correspondences between static maps obtained at different points in time. The work in (Wolf and Sukhatme 2005) uses two occupancy grids to model the static and dynamic parts of the environment. The localization uses landmarks extracted from the static map to compute the robot pose. Although the dynamic grid explicitly models the moving objects, it has no information about their dynamics, which is required for the navigation task. In addition, the approach is not well suited for unstructured environments due to its dependency on extracted landmarks.

On the other hand, the usual strategy of the local techniques is to build the map by filtering out the measurements originated by the moving objects. Many algorithms use a scan matching technique to correct the robot position and incrementally build a local map (Besl and McKay 1992; Lu and Milios 1997b; Biber and Strafler 2003; Minguez et al. 2006b). The performance of these techniques is affected by dynamic environments due to failures in the matching process. Several authors have extended them to minimize the effect of the moving objects by discarding sectors with big correspondence errors (Bengtsson and Baerveldt 1999) or by using the Expectation Maximization (EM) method to detect and filter the spurious measurements (Jensen and Siegwart 2004). These types of methods are widespread, since they are well adapted to real-time operation. However, they discard dynamic information and do not explicitly track the moving objects. The lack of a model for the dynamic parts hampers their filtering in subsequent measurements and prevents the usage of this information for other tasks (e.g. sensor-based motion planning).

2.2 Motion generation in dynamic scenarios

The correct way to generate motion in dynamic scenarios is to address motion planning with moving obstacles. Unfortunately, it has been demonstrated that this problem is NP-hard even for the simplest cases (point robot and convex polygonal moving obstacles, Canny and Reif 1987).


As a result, these techniques are not well suited for real-time operation. This is because they take significant time to compute the plan and often, by the time it is available, the scenario has changed, invalidating the plan. The problem of motion in dynamic scenarios is thus usually simplified to achieve real-time operation.

The usual simplification is to compute the motion with reactive obstacle avoidance techniques, which reduce the complexity of the problem by computing only the next motion (instead of a full plan). Therefore, they are very efficient for real-time applications. Some reactive techniques have been designed to deal with moving obstacles (Fiorini and Shiller 1998; Fraichard and Asama 2004) and have demonstrated good performance. Unfortunately, their local nature produces trap situations and cyclic behaviors, which is a strong limitation for realistic operation.

To overcome this limitation, it has been suggested to combine these reactive techniques with some sort of planning (see Arkin 1999 for a discussion on integration schemes and Minguez and Montano 2005 for a similar discussion in the motion context). The most widespread way to combine reaction with planning are the systems of tactical planning (Ratering and Gini 1993; Ulrich and Borenstein 2000; Brock and Khatib 1999; Minguez and Montano 2005; Stachniss and Burgard 2002; Philipsen and Siegwart 2003; Montesano et al. 2006). They perform a rough planning over a local model of the scenario, which is used to guide the obstacle avoidance. The planning extracts the connectivity of the space to avoid the trap situations. The reactive collision avoidance computes the motion addressing the vehicle constraints. The advantage of these local motion systems is that the combination of planning and obstacle avoidance at a high rate assures robust collision avoidance while being free of local minima (up to some rare cases related to the relaxation of constraints of the planning algorithm, Schlegel 2004). However, none of these systems constructs models of the dynamic and static parts of the scenario. Thus, they cannot correctly cope with some typical situations (such as those depicted in Fig. 1), affecting the robustness of the system.

2.3 Contributions

This paper contributes to two aspects of motion generation affected by dynamic obstacles: (i) the construction of a model of the dynamic and static parts of the scenario, and (ii) its integration within a local system of tactical planning.

• The first contribution is an incremental local mapping algorithm that explicitly solves the classification process. The method uses the Expectation Maximization algorithm to compute the robot pose and the measurement classification, constructs a local dense grid map and tracks the moving objects around the robot. The method can also be seen as a scan matching algorithm for dynamic environments that includes the information about the moving objects.

• The second one is the integration of the previous modeling within a local sensor-based motion system based on a tactical planning scheme. The advantage is that the motion is computed by selectively using the information provided by the static and moving obstacles in the planning—obstacle avoidance paradigm. As a consequence, the system is able to drive the vehicle in dynamic scenarios while avoiding typical shortcomings such as trap situations or the motion in confined spaces.

3 Modeling dynamic environments

In this section, we outline the problem of modeling dynamic environments from a Bayesian perspective. Next, we present a maximum likelihood algorithm to jointly estimate the robot pose and classify the measurements into static and dynamic, and we provide the implementation details for a laser sensor.

The objective is to estimate the map of the static parts and the map of the moving objects around the robot using the information provided by the onboard sensors. Formally, let $Z_k = \{z_{k,1}, \ldots, z_{k,N_z}\}$ be the $N_z$ observations obtained by the robot at time $k$ and $u_k$ the motion command executed at time $k$. The sets $Z_{1:k} = \{Z_1, \ldots, Z_k\}$ and $u_{0:k} = \{u_0, \ldots, u_k\}$ represent the observations and motion commands up to time $k$. Let $x_k$ denote the robot location at time $k$, $O_k = \{o_{k,1}, \ldots, o_{k,N_O}\}$ the state of the $N_O$ moving objects at time $k$ and $M$ the map of the static environment.¹

From a Bayesian point of view, the objective is to estimate the distribution $p(O_k, x_k, M \mid Z_{1:k}, u_{0:k-1})$. Using the Bayes rule and marginalizing out the state variables at the previous step, we get the recursive Bayes estimator

$$p(O_k, x_k, M \mid Z_{1:k}, u_{0:k-1}) = \eta\, p(Z_k \mid O_k, x_k, M) \iint p(O_k, x_k \mid O_{k-1}, x_{k-1}, M, u_{k-1})\, p(O_{k-1}, x_{k-1}, M \mid Z_{1:k-1}, u_{0:k-2})\, dO_{k-1}\, dx_{k-1} \quad (1)$$

where $\eta$ is a normalization factor. The term $p(Z_k \mid O_k, x_k, M)$ is known as the measurement model. The integral combines the motion model $p(O_k, x_k \mid O_{k-1}, x_{k-1}, M, u_{k-1})$ and the distribution of interest at $k-1$ to predict the state vector at time $k$. Notice that, since the map does not change over time, it is a constant in the integration. We have used a Markov assumption to discard previous measurements and commands and simplify both the motion and measurement models.

¹ The assumption here is that the map $M$ does not change over time (note that the map does not have a time index). The formulation states that the world can be divided into two different types of features: static and dynamic. The static ones are parameters (their value is fixed) while the dynamic ones require modeling their evolution. This assumption is common in this context and in most of the algorithms that map static environments. Otherwise, if all features are considered dynamic, the non-visible parts of the map become unusable after some time, since their location tends to a non-informative distribution (their uncertainty increases in an unbounded way). Furthermore, observability also becomes an issue due to the uncertainty in the robot displacement.

The motion model, $p(O_k, x_k \mid O_{k-1}, x_{k-1}, M, u_{k-1})$, represents the evolution of the robot and the moving objects. Let us assume that the objects $o_{k,i}$ and the robot $x_k$ move independently. Then, the joint motion model can be factorized into the individual motion models of the robot and each moving object. If, in addition, the motion does not depend on the map $M$, the motion model can be written as

$$p(O_k, x_k \mid O_{k-1}, x_{k-1}, u_{k-1}) = p(x_k \mid x_{k-1}, u_{k-1}) \prod_{i=1}^{N_O} p(o_{k,i} \mid o_{k-1,i}). \quad (2)$$

The likelihood term, $p(Z_k \mid O_k, x_k, M)$, measures how well the observations match the prediction made by the motion model. Computing this term requires solving the data association problem, i.e. establishing a correspondence between each measurement and a feature of the map or a moving object. In order to model the data association, we introduce a new variable $c_k$ that indicates which feature originated each measurement of $Z_k$. Since the correspondences are unobserved, one has to integrate over all the possible sources $c_k$,

$$p(Z_k \mid O_k, x_k, M) = \sum_{c_k} p(Z_k \mid c_k, O_k, x_k, M)\, p(c_k \mid O_k, x_k, M). \quad (3)$$

Figure 3 shows the graphical representation of the problem. The model contains the continuous variables of (1) and the discrete variables $c_k$ representing the source of the measurements at each point in time (i.e. they represent the classification or correspondence problem into static/dynamic).

In general, it is very hard to perform exact inference on such models due to the integration over all the possible correspondences and the mutual exclusion constraints. Therefore, one has to use approximations. It is possible to use sequential Monte Carlo techniques (Doucet et al. 2000; Murphy 2002) to approximate the full distribution. The drawback is a high computational cost due to the increasing size of the map and the multiple hypotheses arising from different classifications. As described in Sect. 2, most of previous works simplify the problem assuming that the classification into static and dynamic is known.

Fig. 3 Graphical representation of the problem including the data associations. The circles represent the continuous variables and the squares the discrete ones. The filled nodes are the measurements whereas the empty ones represent hidden variables

In practice the classification is not available, and it is usually computed based on the distribution $p(O_k, x_k, M \mid Z_{1:k-1}, u_{0:k-1})$ using patterns and/or free space constraints.

In the next section, we propose an incremental mapping algorithm that jointly computes the robot pose $x_k$ and the correspondences $c_k$ to obtain the estimate of the map $M$ and the location of the moving objects $O_k$.

3.1 Incremental mapping of dynamic environments

A key feature of the previous approaches is a high computational time due to the increasing size of the map, which hinders their application in the context of this work (real-time operation). One simplification of the full modeling problem is incremental mapping (Thrun et al. 2000; Gutmann and Konolige 1999). The objective is to compute a single pose $x_k$ of the vehicle at each point in time $k$ (instead of a distribution). Thus, the representations of the map and the moving objects, which can be probabilistic, are conditioned on this deterministic trajectory.

Incremental mapping estimates at each point in time $k$ the robot pose $x_k$ that maximizes the likelihood term $p(Z_k \mid O_k, x_k, M)$,

$$x_k = \arg\max_{x_k}\; p(Z_k \mid O_k, x_k, M)\, p(x_k \mid x_{k-1}, u_{k-1}). \quad (4)$$

The additional term $p(x_k \mid x_{k-1}, u_{k-1})$ introduces the uncertainty of the last robot motion $u_{k-1}$ to constrain the possible solutions of the optimization algorithm.


Furthermore, the map $M$ and the state of the objects $O_k$ are computed from the set of measurements $Z_{1:k-1}$ and poses $x_{1:k-1}$ up to time $k-1$,

$$M_k = f_M(x_{1:k-1}, Z_{1:k-1}), \quad (5)$$
$$O_k = f_O(x_{1:k-1}, Z_{1:k-1}). \quad (6)$$

The robot trajectory $x_{1:k-1}$ used by the functions $f_M$ and $f_O$ is the deterministic set of maximum likelihood poses. The detailed description of the functions $f_M$ and $f_O$ is given in the next sections. However, let us advance that, for computational reasons, they are incremental functions that use the estimates at $k-1$, the new pose and the last set of measurements.

The advantage of this framework is that the classification of the measurements can be included within the maximum likelihood optimization using the Expectation-Maximization (EM) algorithm. This is addressed in the next section.

3.2 Expectation maximization (EM) maximum likelihood approach

So as to solve (4), the term $p(Z_k \mid O_k, x_k, M)$ has to be maximized, which requires considering all the possible sources (static map or a dynamic object) for each observation (see (3)). The resulting expression has no closed-form solution and is difficult to maximize, since the classification variables $c_k$ are unknown in general (Fig. 3). This subsection describes how to use the Expectation Maximization (EM) (Dempster et al. 1977; McLachlan and Krishnan 1997) formulation to address the static/dynamic classification process and solve (4) to obtain $x_k$.

The EM technique is a maximum likelihood approach to solve incomplete-data optimization problems. Initially, it introduces some auxiliary variables to convert the incomplete-data likelihood into a complete-data likelihood $L_c$ which is easier to optimize. Then, there is a two-step maximization process: (i) the E-step computes the conditional expectation $Q(x, x^{(t)}) = E_{x^{(t)}}[\log L_c \mid Z_k]$ of the complete-data log-likelihood given the measurements $Z_k$ and the current estimate $x^{(t)}$ of the vector $x$ to be maximized; and (ii) the M-step computes a new estimate $x^{(t+1)}$ that maximizes the function $Q(x, x^{(t)})$. This process is repeated until the change in the complete-data likelihood is arbitrarily small. The original incomplete likelihood is assured not to decrease after each iteration and, under fairly regular assumptions, the algorithm converges to a local maximum (McLachlan and Krishnan 1997).

In the remainder of this section, we describe the application of the EM technique to maximize the likelihood term $p(Z_k \mid O_k, x_k, M)$ of (4). The second term, $p(x_k \mid x_{k-1}, u_{k-1})$, is just a prior over the robot poses. Since it does not depend on the measurements, it only affects the M-step (Sect. 3.3.3), acting as a regularization term.²

² Under Gaussian assumptions, the inclusion of this term in the minimization still has a closed-form solution.

Initially, we define some extra variables to build the complete-data likelihood function. Although, in general, the extra variables in the EM are not required to have any physical meaning, in our case we use the correspondence variable $c_k$. This is because, if this variable is known, the resulting likelihood is much simpler and its maximization has a closed-form solution (given that we linearize the measurement equation).

Let the correspondence variable $c_k$ be a vector of binary variables $c_{ij}$ with $j \in 0..N_O+1$ defined for each observation $z_{k,i}$, $i \in 1..N_z$ of $Z_k$. So as to ease the notation, we drop the time index $k$ from the binary variables $c_{ij}$. The possible sources are represented by the index $j$, where $j = 0$ for static measurements, $j \in 1..N_O$ for the tracked objects and $j = N_O+1$ for the unknown sources. The value $c_{ij} = 1$ indicates that $z_{k,i}$ was originated by source $j$, and $c_{ij} = 0$ otherwise.

Assuming that a single observation only belongs to one source (i.e. $\sum_j c_{ij} = 1$) and that the measurements $z_{k,i}$ are independent, the complete-data likelihood model is

$$L_c = p(Z_k, c_k \mid x_k, M_k, O_k) = \prod_{i=1}^{N_z} \Big[ p_s(z_{k,i} \mid x_k, M_k)^{c_{i0}}\; p_u(z_{k,i})^{c_{iN_O+1}} \prod_{j=1}^{N_O} p_d(z_{k,i} \mid x_k, o_{k,j})^{c_{ij}} \Big] \quad (7)$$

where $p_s(\cdot)$, $p_d(\cdot)$ and $p_u(\cdot)$ are the likelihood models for observations originated from the map (static), each of the tracked moving objects (dynamic) and newly discovered areas (unknown), respectively. Note that the binary variables $c_{ij}$ select the likelihood model according to the origin of the measurements. The complete-data log-likelihood function is

$$\log L_c = \sum_{i=1}^{N_z} \Big[ c_{i0} \log p_s(z_{k,i} \mid x_k, M_k) + c_{iN_O+1} \log p_u(z_{k,i}) + \sum_{j=1}^{N_O} c_{ij} \log p_d(z_{k,i} \mid x_k, o_{k,j}) \Big]. \quad (8)$$

The function $Q(x_k, x_k^{(t)})$ is the conditional expectation of the complete-data log-likelihood $\log L_c$. As $\log L_c$ is linear in the unobservable data $c_k$, in the computation of $Q(x_k, x_k^{(t)})$ we replace each $c_{ij}$ by its conditional expectation given the current measurements $Z_k$ and the current estimates of $x_k$, $M_k$ and $O_k$,

$$Q(x_k, x_k^{(t)}) = E_{x_k^{(t)}}\{\log L_c \mid Z_k, M_k, O_k\} = \sum_{i=1}^{N_z} \Big[ \bar{c}_{i0} \log p_s(z_{k,i} \mid x_k, M_k) + \bar{c}_{iN_O+1} \log p_u(z_{k,i}) + \sum_{j=1}^{N_O} \bar{c}_{ij} \log p_d(z_{k,i} \mid x_k, o_{k,j}) \Big] \quad (9)$$

where

$$\bar{c}_{i0} = E_{x_k^{(t)}}\{c_{i0} \mid Z_k, M_k, O_k\}, \quad (10)$$
$$\bar{c}_{iN_O+1} = E_{x_k^{(t)}}\{c_{iN_O+1} \mid Z_k, M_k, O_k\}, \quad (11)$$
$$\bar{c}_{ij} = E_{x_k^{(t)}}\{c_{ij} \mid Z_k, M_k, O_k\}. \quad (12)$$

3.3 Implementation for range sensors

In this section we provide the implementation of the previous method for a range sensor, e.g. a laser range finder. First, we present the measurement models associated with each type of measurement (static, dynamic, unknown). Based on these models, we derive the equations for the E-step and M-step of the EM algorithm based on the function $Q(x_k, x_k^{(t)})$ of the previous section. Finally, we describe how to update the map $M_k$ and the moving objects $O_k$ using the new pose $x_k$. Algorithm 1 summarizes the steps of the algorithm.

3.3.1 Models

We next address the implementation of the functions $f_M(\cdot)$ and $f_O(\cdot)$ (see (5) and (6)), which compute the current estimates of the map $M_k$ and the moving objects $O_k$ conditioned on the set of robot poses $x_{1:k-1}$. Then, we describe the likelihood terms $p_s(\cdot)$, $p_d(\cdot)$ and $p_u(\cdot)$ of (7).

For the static map $f_M(\cdot)$, we use a two-dimensional probabilistic grid (Moravec and Elfes 1985) to represent the workspace. Each cell has an associated random binary variable $m_i$, where $m_i = 1$ when it is occupied by a static obstacle and $m_i = 0$ when it is free space. The probability of the grid cells is computed using the Bayesian approach proposed in (Elfes 1989). This map representation is convenient in our navigation context since: (i) it contains information about occupied and free space, and (ii) it is suitable for unstructured scenarios.
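To make the grid representation concrete, the following is a minimal log-odds sketch of this kind of Bayesian cell update (Elfes 1989). It is a sketch only: the class layout, constants and names are ours, not the paper's, and the ray-casting that selects the cells traversed by a beam is omitted.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4  # illustrative log-odds increments (inverse sensor model)

class OccupancyGrid:
    """2D probabilistic grid: each cell stores the log-odds of m_i = 1."""

    def __init__(self, size_m=20.0, res=0.05):
        n = int(size_m / res)
        self.res = res
        self.logodds = np.zeros((n, n))  # 0 log-odds, i.e. p = 0.5 (unknown)

    def update_beam(self, traversed_cells, end_cell, end_is_static):
        # Free space along the beam is updated for every measurement; the end
        # point adds occupancy evidence only when classified as static
        # (this matches the update policy described later in Sect. 3.3.4).
        for ij in traversed_cells:
            self.logodds[ij] += L_FREE
        if end_is_static:
            self.logodds[end_cell] += L_OCC

    def p_occupied(self, ij):
        """Posterior occupancy probability of one cell."""
        return 1.0 / (1.0 + np.exp(-self.logodds[ij]))
```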

We implement the function $f_O(\cdot)$ using an independent extended Kalman filter to track each of the moving objects. The state vector for each moving object contains its position, its velocity and its size. The latter is computed from the main axis associated with the measurements. The function $f_O(\cdot)$ computes the predicted positions of the moving objects at time $k$ based on the vehicle trajectory $x_{1:k}$ and the measurements $Z_{1:k-1}$. We use a constant velocity model with acceleration noise to predict the positions of the moving objects between observations, and a random walk model for the size of the object.

Algorithm 1: Algorithm Summary
INPUT: $x_{k-1}$, $u_{k-1}$, $Z_k$, $M_{k-1}$, $O_{k-1}$
$t = 0$
% Prediction step
Compute the initial $x_k^{(0)}$ using $x_{k-1}$ and $u_{k-1}$
Predict the moving objects' locations $O_{k|k-1}$
% Estimation of the robot pose
repeat
  E-step:
  for each $z_{k,i}$ do
    Select the nearest occupied grid cell $g_i \in M_{k-1}$ using the Mahalanobis distance
    Compute the Mahalanobis distance to each $o_j \in O_{k|k-1}$, $j \in 1..N_O$
    Compute $\bar{c}_{i0}$, $\bar{c}_{ij}$, $\bar{c}_{iN_O+1}$ to form $Q(x_k, x_k^{(t)})$
  end for
  M-step:
  Compute $x^{(t+1)} = \arg\max_{x_k} Q(x_k, x_k^{(t)})$
  $t = t + 1$
until convergence or $t >$ MaxIter
% Update models using $x_k = x^{(t)}$
Classify measurements into $Z^{static}$, $Z^{dynamic}$
Update $M_k$ with the static measurements $Z^{static}$
Update the filters $O_k$ with the dynamic measurements $Z^{dynamic}$
OUTPUT: $x_k$, $M_k$, $O_k$
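The EM core of Algorithm 1 can be condensed into a few lines of code. The toy sketch below works under strong simplifying assumptions that are ours, not the paper's: a translation-only pose, an isotropic static noise in place of the full Mahalanobis machinery of Appendix A, dropped normalization constants, uniform priors, and moving-object terms that enter the E-step but are left out of the M-step. It is meant only to show how the expected correspondences and the pose estimate feed each other.

```python
import numpy as np

def em_scan_alignment(scan, static_pts, obj_means, obj_covs, x0,
                      sigma_s=0.05, p_u=1e-3, iters=15):
    """Toy EM in the spirit of Sects. 3.1-3.2 (translation-only pose).

    scan:       (N, 2) laser points in the robot frame
    static_pts: (M, 2) occupied cells of the local map (world frame)
    obj_means, obj_covs: predicted moving-object positions and covariances
    x0:         initial 2D translation from odometry
    Returns the corrected pose and the expected correspondences, whose
    columns are ordered [static, object_1..object_NO, unknown].
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        pts = scan + x                               # f(x, z_i), translation only
        # E-step: (unnormalized) likelihood of each point under each source
        d2s = ((pts[:, None, :] - static_pts[None]) ** 2).sum(-1)
        lik_s = np.exp(-d2s.min(axis=1) / (2.0 * sigma_s ** 2))
        lik_o = (np.stack([np.exp(-0.5 * (((pts - m) @ np.linalg.inv(C))
                                          * (pts - m)).sum(-1))
                           for m, C in zip(obj_means, obj_covs)], axis=1)
                 if len(obj_means) else np.zeros((len(pts), 0)))
        lik = np.column_stack([lik_s, lik_o, np.full(len(pts), p_u)])
        w = lik / lik.sum(axis=1, keepdims=True)     # expected correspondences
        # M-step: closed-form translation from the static terms only
        nn = static_pts[d2s.argmin(axis=1)]
        ws = w[:, 0]
        x += (ws[:, None] * (nn - pts)).sum(0) / max(ws.sum(), 1e-9)
    return x, w
```

In the paper, the M-step also includes the moving-object terms weighted by their covariances $P_{ij}$ and solves for the full pose by linearizing $f(\cdot,\cdot)$ (Sect. 3.3.3).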

The complete-data likelihood function of (7) represents how well the observations fit the current estimate of the environment. This expression explicitly reflects the different possible sources of each measurement with the models $p_s(\cdot)$, $p_d(\cdot)$ and $p_u(\cdot)$ and the classification variables $c_k$. The definition of the measurement models depends on the previous representations and on the sensor used (for instance, see Hähnel et al. 2003; Thrun et al. 2005 for laser range sensors). We use a correspondence-oriented model where each likelihood term of (7) is computed based on the Mahalanobis distance between each observation and its model conditioned on $c_k$. We model the uncertainties using Gaussian distributions and linearize the models to compute the Mahalanobis distance as in (Montesano et al. 2005) (see Appendix A).

This framework is convenient since: (i) the Mahalanobis distance takes into account the measurement noise and the error of the last robot displacement, which may have a big impact on the classification of each measurement (Fig. 4a); and (ii) the use of a probabilistic metric improves the correspondences computed in the E-step, resulting in better convergence and robustness (Montesano et al. 2005).


Fig. 4 (a) This figure illustrates how the error in the robot pose hinders the classification of the measurements. In particular, the effect of the rotation error increases for points far away from the sensor. The figure also shows how the information of the object's predicted position helps to improve the correspondences. (b) This figure illustrates the differences between an end-point model and a complete one. In the case of an end-point model, both beams have the same probability. On the other hand, a complete model will assign a lower likelihood to the top ray due to the fact that it traverses an obstacle before reaching the final obstacle


In the previous model, we implicitly make two simplifications to decrease the computational requirements of the algorithm: (i) we use an end-point model (Thrun et al. 2005), which ignores the path traversed by the ray and only considers the end point (Fig. 4b); (ii) instead of taking into account all the possible correspondences between the measurements and the map of static obstacles in (7), we select the nearest occupied cell of the map for each measurement. Although this is a simplification, nothing prohibits the usage of more sophisticated techniques such as (Jensen and Siegwart 2004; Montesano 2006) within the framework.
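The nearest-neighbor selection in (ii) is typically implemented with a spatial index; a sketch (the KD-tree library choice is ours, and a plain Euclidean query stands in for the Mahalanobis distance the paper actually uses):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_static_cells(points, occupied_cells):
    """For each transformed scan point, pick the nearest occupied map cell
    (simplification (ii) above); a KD-tree keeps this O(N log M)."""
    tree = cKDTree(occupied_cells)
    dists, idx = tree.query(points)       # Euclidean here; the paper uses the
    return occupied_cells[idx], dists     # Mahalanobis distance instead
```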

We next describe the models for each type of measurement, $p_s(\cdot)$, $p_d(\cdot)$ and $p_u(\cdot)$. The likelihood of a measurement associated with a cell of the static map is modeled as a Gaussian,

$$p_s(z_{k,i} \mid x_k, M_k) = N(z_{k,i}; f(x_k, q_i), P_{i0}) \quad (13)$$

where $q_i$ is the location of the corresponding map point associated with the measurement $z_{k,i}$ and $f(x_k, q_i)$ is the transformation between the map and the robot reference systems. The covariance matrix $P_{i0}$ takes into account the uncertainty in the last robot motion, the position of the point in the map and the measurement noise. The computation of $P_{i0}$ is described in Appendix A. We refer the reader to (Montesano et al. 2005) for further details.

We use the same model as the likelihood function for every dynamic object,

$$p_d(z_{k,i} \mid x_k, o_{k,j}) = N(z_{k,i}; f(x_k, o_{k,j}), P_{ij}), \quad \forall j \in 1..N_O \quad (14)$$

where the function $f(\cdot,\cdot)$ and the covariance matrix $P_{ij}$ are the same as in (13) but applied to the estimated position of the moving object $o_{k,j}$ (see Appendix A). Since the filter only contains an estimate of the centroid, we take into account the estimated size of the object when computing the distance between the object and the measurement.

Finally, the classification variable $c_{iN_O+1}$ accounts for spurious observations and those corresponding to unexplored areas and new moving objects. The likelihood of such a measurement is difficult to quantify. It depends on the spurious rate of the sensor and on where the measurement falls,

$$p_u(z_{k,i}) = \begin{cases} p_{unexplored} & \text{if } z_{k,i} \notin M_k, \\ p_{spurious} & \text{if } z_{k,i} \in M_k. \end{cases} \quad (15)$$

The value $p_{spurious}$ is an experimental value measuring the spurious rate of the sensor, and $p_{unexplored}$ is typically set to a higher value to prevent observations placed in unknown areas from influencing the optimization process.

3.3.2 E-Step

The E-step requires the computation of the expectation of the classification variables $c_{ij}$ defined in (10)–(12),

$$\bar{c}_{ij} = E_{x_k^{(t)}}\{c_{ij} \mid Z_k, x_k, M_k, O_k\} = \sum_{c_{ij}} c_{ij}\, p(c_{ij} \mid Z_k, x_k, M_k, O_k) = p(c_{ij} = 1 \mid Z_k, x_k, M_k, O_k) \quad (16)$$

where $i = 1..N_z$ and $j = 0..N_O+1$. The expectation is conditioned on the predicted position of the objects $O_k$, the last map estimate $M_k$ and the current estimate of $x_k$. Using the Bayes rule, we obtain

$$p(c_{ij} = 1 \mid Z_k, x_k, M_k, O_k) = \frac{p(z_{k,i} \mid c_{ij} = 1, x_k, M_k, O_k)\, p(c_{ij} = 1 \mid x_k, M_k, O_k)}{p(z_{k,i} \mid x_k, M_k, O_k)} = \frac{p(z_{k,i} \mid c_{ij} = 1, x_k, M_k, O_k)\, p(c_{ij} = 1)}{\sum_l p(z_{k,i} \mid c_{il} = 1, x_k, M_k, O_k)\, p(c_{il} = 1)}. \quad (17)$$

The previous derivation assumes a constant value for the prior over the classification variables $p(c_{ij} = 1 \mid x_k, M_k, O_k)$ and computes the probability of the observation, $p(z_{k,i} \mid x_k, M_k, O_k)$, as the sum over all its potential sources. The specific likelihood model to be used ($p_s(\cdot)$, $p_d(\cdot)$ or $p_u(\cdot)$, defined in (13), (14) and (15)) depends on the source of the measurement indicated by the variable $c_{ij}$.
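As a concrete instance of (17) (the numbers are ours, purely illustrative): suppose a beam evaluates to $p_s = 0.6$ under its nearest static cell, $p_d = 0.3$ under the single tracked object and $p_u = 0.1$. With a uniform prior the priors cancel, and

$$\bar{c}_{i0} = \frac{0.6}{0.6 + 0.3 + 0.1} = 0.6, \qquad \bar{c}_{i1} = 0.3, \qquad \bar{c}_{i2} = 0.1,$$

so this beam mostly, but not exclusively, pulls the pose toward the static map in the M-step.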

3.3.3 M-Step

The M-step computes a new robot pose $x_k^{(t+1)}$ such that

$$Q(x_k^{(t+1)}, x_k^{(t)}) \geq Q(x_k, x_k^{(t)}). \quad (18)$$

Given the models introduced in Sect. 3.3 and (9), the criterion to maximize is

$$Q(x_k, x_k^{(t)}) = \sum_{i=1}^{N_z} \Big[ -\bar{c}_{i0} \log\!\big(2\pi\sqrt{|P_{i0}|}\big) - \tfrac{1}{2}\,\bar{c}_{i0}\,(f(x_k, q_i) - z_{k,i})^T P_{i0}^{-1} (f(x_k, q_i) - z_{k,i}) + \bar{c}_{iN_O+1} \log p_u(z_{k,i}) + \sum_{j=1}^{N_O} \Big( -\bar{c}_{ij} \log\!\big(2\pi\sqrt{|P_{ij}|}\big) - \tfrac{1}{2}\,\bar{c}_{ij}\,(f(x_k, o_{k,j}) - z_{k,i})^T P_{ij}^{-1} (f(x_k, o_{k,j}) - z_{k,i}) \Big) \Big]. \quad (19)$$

Grouping all the terms that do not depend on $x_k$, we get

$$Q(x_k, x_k^{(t)}) = cte - \tfrac{1}{2} \sum_{i=1}^{N_z} \Big[ \bar{c}_{i0}\,(f(x_k, q_i) - z_{k,i})^T P_{i0}^{-1} (f(x_k, q_i) - z_{k,i}) + \sum_{j=1}^{N_O} \bar{c}_{ij}\,(f(x_k, o_{k,j}) - z_{k,i})^T P_{ij}^{-1} (f(x_k, o_{k,j}) - z_{k,i}) \Big] \quad (20)$$

which has no closed-form solution due to the nonlinear function $f(\cdot,\cdot)$. Appendix B describes how to compute the solution $x_k^{(t+1)}$ by linearizing $f(\cdot,\cdot)$.

Notice that, since the moving objects are included in the classification (E-step), they also influence the computation of $x_k$. Their influence is reflected in the $P_{ij}$ term. When the location of a moving object is uncertain (due to the prediction of this position, for instance), the value of $P_{ij}$ is large and it barely affects the solution.
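For illustration, one such linearized update can be written as a single Gauss-Newton step on (20) for an SE(2) pose. This is a sketch under our own conventions, not the paper's exact derivation (which is in Appendix B): static and dynamic correspondences are assumed to be stacked into one list of weighted residuals.

```python
import numpy as np

def gauss_newton_pose_step(x, points, targets, weights, covs):
    """One M-step update for the pose x = (tx, ty, theta).

    points:  (N, 2) scan points in the robot frame
    targets: (N, 2) matched map cells or object positions (world frame)
    weights: (N,)   expected correspondences from the E-step
    covs:    (N, 2, 2) covariances P_ij
    Minimizes sum_i w_i r_i^T P_i^{-1} r_i with r_i = f(x, z_i) - target_i.
    """
    tx, ty, th = x
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    H = np.zeros((3, 3))
    b = np.zeros(3)
    for p, t, w, P in zip(points, targets, weights, covs):
        r = R @ p + np.array([tx, ty]) - t                # residual f(x, z_i) - target_i
        J = np.array([[1.0, 0.0, -s * p[0] - c * p[1]],   # d r / d (tx, ty, theta)
                      [0.0, 1.0,  c * p[0] - s * p[1]]])
        Pinv = np.linalg.inv(P)
        H += w * J.T @ Pinv @ J                           # normal equations
        b += w * J.T @ Pinv @ r
    return np.asarray(x, dtype=float) - np.linalg.solve(H, b)   # x^(t+1)
```

Uncertain objects enter with large $P_{ij}$, so their $P_{ij}^{-1}$ contributions to the normal equations are small, which is exactly the behavior described above.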

3.3.4 Updating the map and the moving objects

In this section, we describe how to update the probabilistic grid map and the set of Kalman filters with the last measurements $Z_k$ after convergence of the EM algorithm.

In addition to the maximum likelihood pose of the robot $x_k$, the EM also provides an estimate of the values of the correspondence variables $c_k$. We use this estimate to distinguish between static, dynamic and unknown measurements. Except in those situations where there exist ambiguities, once the robot position is corrected all the weight is assigned to a single source. A simple threshold on the probabilities of the correspondences allows us to classify the measurements into three different sets, $Z_k^{static}$, $Z_k^{dynamic}$ or $Z_k^{unknown}$,

$$z_{k,i} \in \begin{cases} Z_k^{static} & \text{if } \bar{c}_{i0} > \alpha, \\ Z_k^{dynamic} & \text{if } \sum_{j=1}^{N_O} \bar{c}_{ij} > \alpha, \\ Z_k^{unknown} & \text{otherwise} \end{cases} \quad (21)$$

where the value of the threshold $\alpha > 0.5$ ensures that a measurement only belongs to one of the sets $Z^{static}$, $Z^{dynamic}$ or $Z^{unknown}$.
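Spelled out as code, the rule of (21) is a pair of threshold tests (the ordering of the weight vector and the value of $\alpha$ are our choices):

```python
def classify_measurement(c_bar, alpha=0.6):
    """Literal transcription of (21) for one beam.

    c_bar: expected correspondences ordered [static, object_1..object_NO, unknown].
    With alpha > 0.5, at most one of the two tested quantities can exceed it.
    """
    if c_bar[0] > alpha:
        return "static"
    if sum(c_bar[1:-1]) > alpha:
        return "dynamic"
    return "unknown"
```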

The update of the probabilistic map is done as in (Elfes 1989), but the process is adapted to deal with the different types of measurements (static, dynamic or unknown). On the one hand, all the measurements of a scan contribute to update the free space traversed by their corresponding laser beams (see Fig. 4). On the other hand, only those measurements classified as static provide information about the static parts. So as to initialize new static areas, we keep a second grid map with the unknown measurements $Z_k^{unknown}$ in the frontiers of the explored workspace. If a cell is detected consecutively for a given number of time steps, it is included as static in the probabilistic grid map. This type of delayed initialization increases the robustness of the algorithm.


For instance, a dynamic object moving in the frontier of the explored space will be treated as unknown until it enters previously mapped areas. A static one, on the other hand, will not be included in the map until it has been detected several times.

In the case of the filters that track the moving objects, we use a segmentation algorithm based on distances to cluster the dynamic measurements $Z_k^{dynamic}$. Then, we use a Joint Probabilistic Data Association (Bar-Shalom and Fortmann 1988) scheme to update the set of Kalman filters.³ So as to deal with new objects, we initialize a filter for those clusters of points that are not assigned to any filter. Furthermore, filters without support, e.g. those out of the field of view of the sensor, are removed after a fixed number of steps.

³ We could use the final weights provided by the EM algorithm to solve the data association problem between the moving objects and the dynamic measurements. However, this strategy is prone to lose track of the moving objects in the presence of ambiguities (Montesano 2006).
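The track lifecycle just described might look as follows, with self-contained constant-velocity Kalman filters over the cluster centroids. Note that plain nearest-neighbor gating stands in here for the JPDA association the paper actually uses, and all constants and names are illustrative:

```python
import numpy as np

DT, GATE, MAX_MISSES = 0.2, 9.0, 5     # 5 Hz laser; illustrative constants

F = np.eye(4); F[0, 2] = F[1, 3] = DT  # constant-velocity model, state (x, y, vx, vy)
Q = np.diag([0.01, 0.01, 0.5, 0.5])    # acceleration noise (illustrative)
H = np.eye(2, 4)                       # we observe the cluster centroid
R = np.eye(2) * 0.05

def make_track(z):
    """Initialize a filter for a cluster not assigned to any existing filter."""
    return {"x": np.array([z[0], z[1], 0.0, 0.0]),
            "P": np.diag([0.1, 0.1, 1.0, 1.0]), "misses": 0}

def update_tracks(tracks, clusters):
    """Predict every filter, associate clusters, spawn new and prune stale tracks."""
    unassigned = [np.asarray(z, dtype=float) for z in clusters]
    for t in tracks:
        t["x"] = F @ t["x"]
        t["P"] = F @ t["P"] @ F.T + Q                 # prediction step
        if unassigned:
            S = H @ t["P"] @ H.T + R
            d2 = [float((z - H @ t["x"]) @ np.linalg.solve(S, z - H @ t["x"]))
                  for z in unassigned]
            j = int(np.argmin(d2))
            if d2[j] < GATE:                          # Mahalanobis gate
                z = unassigned.pop(j)
                K = t["P"] @ H.T @ np.linalg.inv(S)
                t["x"] = t["x"] + K @ (z - H @ t["x"])
                t["P"] = (np.eye(4) - K @ H) @ t["P"]
                t["misses"] = 0
                continue
        t["misses"] += 1                              # no support, e.g. out of view
    tracks[:] = [t for t in tracks if t["misses"] <= MAX_MISSES]
    tracks.extend(make_track(z) for z in unassigned)  # filters for new objects
```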

In summary, we have described in this section an EM algorithm to incrementally compute the maximum likelihood trajectory of the vehicle. Based on this trajectory, the algorithm also computes a map of the static obstacles and a map of the dynamic obstacles.

4 Integration of the modeling within the motion layer

Local sensor-based motion systems combine modeling and planning aspects. On the one hand, we have proposed in the previous section a technique to model the static and dynamic features of the scenario. On the other hand, the planning aspect in these systems usually combines tactical planning with obstacle avoidance.⁴ In this section we outline the tools used in our system (Montesano et al. 2006) and we describe the interactions of the modeling with the rest of the modules in a general framework. We address next the tactical planning and the obstacle avoidance modules.

⁴ These systems are usually referred to as systems of tactical planning (Ratering and Gini 1993; Ulrich and Borenstein 2000; Brock and Khatib 1999; Minguez and Montano 2005; Stachniss and Burgard 2002; Philipsen and Siegwart 2003).

• Tactical planning: computation of the main cruise to drive the vehicle (used to avoid the cyclical motions and trap situations). This module uses the D* Lite planner (Koenig and Likhachev 2002) to compute a path to the goal and to extract the main cruise. The principle of this planner is to locally modify the previous path (available from the previous step) using the changes in the scenario. The module has two different parts: (i) the computation of the obstacle changes in configuration space (notice that the grid represents the workspace), and (ii) the usage of the D* Lite planner over the changes to recompute a path (if necessary). The planner avoids the local minima and is computationally very efficient for real-time implementations.

Fig. 5 Overview of the local sensor-based motion system that combines modeling and planning aspects

• Obstacle avoidance: computation of the collision-free motion. We chose the Nearness Diagram Navigation (Minguez and Montano 2004). This technique employs a divide-and-conquer strategy based on situations to simplify the difficulty of the navigation. At each time, a situation is selected and the corresponding action computes the motion for the vehicle. This method has been shown to perform well in scenarios that remain troublesome for many existing methods. Furthermore, a technique that takes into account the shape, kinematics and dynamic constraints of the vehicle is used to address its local issues (Minguez and Montano 2002).

We next describe the interaction between the modules of the system (Fig. 5), focusing on the modeling module. Recall that this last module computes both a map of the static structure of the scenario and a map of dynamic obstacles with their locations and velocities.

• Modeling—tactical planning: The map of the static structure⁵ is the input data to the planner. This is because the role of the tactical planner is to determine at each cycle the main cruise to direct the vehicle. A cruise depends on the permanent structure of the scenario (e.g. walls and doors) and not on the dynamic objects moving around (e.g. people). Notice that using only the static structure overcomes situations that other systems would interpret as trap situations or blocked passages due to the temporary presence of moving objects.

⁵ The modeling module also includes in the map of static structure passed to the planner those dynamic objects with zero velocity for a predefined period of time.

• Modeling—obstacle avoidance: Both maps are the inputs of the obstacle avoidance technique. While the map of the static structure is directly used as computed by the modeling module, the map of dynamic obstacles is processed to compute an alternative map based on the predicted collision point (Foisy et al. 1990). This new map of obstacles is the other input of the obstacle avoidance. We use a linear model for the velocities of the robot and the obstacle. For each obstacle $i$ the collision point $p_c^i = (p_{cx}^i, p_{cy}^i)$ is computed by

$$p_c^i = p_o^i + v_o^i t_c^i \quad (22)$$

where $p_o^i$ is the obstacle location in the robot reference system⁶ and $v_o^i$ is the velocity vector of the obstacle. The collision time $t_c^i$ represents the time when the robot and obstacle $i$ intersect along the motion direction of the robot,

$$t_c^i = \frac{p_{ox}^i - (R_r + R_o^i)}{v_{rx} - v_{ox}^i} \quad (23)$$

where $v_{rx}$ is the linear velocity of the robot and $R_r$ and $R_o^i$ are the radii of the robot and the obstacle, respectively. Figure 6 illustrates the computation of the predicted collision. Notice that the obstacle avoidance receives a map of predicted collision locations $p_c^i$. Let us remark that the predicted location of the obstacle depends on the current obstacle location but also on the relative velocities of the vehicle and the obstacle. Furthermore, if the obstacle moves away from the robot ($t_c^i < 0$), it is not taken into account. This approach to avoiding the moving obstacles implies: (i) if there is a potential collision, the obstacle avoidance method starts the avoidance motion earlier than if the obstacle were considered static; and (ii) if there is no potential collision, the obstacle is not taken into account (see the sketch after this list).

⁶ The X-axis of the robot reference system is aligned with the instantaneous robot velocity $v_r$.

• Tactical planning—obstacle avoidance: In the systems of tactical planning, the obstacle avoidance module generates the collision-free motion to align the vehicle toward the cruise computed by the planner. More specifically, the cruise is computed as a subgoal using the direction of the initial part (predefined distance) of the path.
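The computation of (22)–(23), written out directly; the function name and the handling of the degenerate case with no closing speed are ours:

```python
def predicted_collision_point(p_o, v_o, v_rx, R_r, R_o):
    """Predicted collision point fed to the obstacle avoidance.

    Works in the robot reference frame whose X-axis is aligned with the
    instantaneous robot velocity (footnote 6).
    p_o: obstacle location (x, y);  v_o: obstacle velocity (vx, vy)
    v_rx: robot linear velocity;    R_r, R_o: robot and obstacle radii
    Returns None when the obstacle is ignored (moving away, t_c < 0).
    """
    closing = v_rx - v_o[0]
    if abs(closing) < 1e-9:
        return None                       # no closing speed along X: no collision
    t_c = (p_o[0] - (R_r + R_o)) / closing            # collision time, Eq. (23)
    if t_c < 0:
        return None                       # moving away: not taken into account
    return (p_o[0] + v_o[0] * t_c,                    # p_c = p_o + v_o t_c, Eq. (22)
            p_o[1] + v_o[1] * t_c)
```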

Fig. 6 The object location used by the obstacle avoidance is $p_c$, which is the predicted collision point according to the current vehicle and object velocities

Globally, the system works as follows (Fig. 5): given a laser scan and the odometry of the vehicle, the model builder incorporates this information into the existing model. Next, the static and dynamic information of the obstacles in the model is selectively used by the planner module to compute the cruise to reach the goal (tactical information). Finally, the obstacle avoidance module uses the planner's tactical information together with the information of the obstacles (static and dynamic) to generate the target-oriented collision-free motion. The vehicle controller executes the motion and the process restarts with a new sensor measurement. It is important to stress that the three modules work synchronously within the perception-action cycle.

5 Experimental results

This section describes some of the tests that we have carried out to validate the modeling technique (Sect. 3) and its integration within the motion layer (Sect. 4). The robot is a commercial wheelchair equipped with two on-board computers and a SICK laser. The vehicle is rectangular (1.2 × 0.7 meters) with two tractor wheels that work in differential-drive mode. We set the maximum operational velocities to $(v_{max}, w_{max}) = (0.3\ \mathrm{m/s}, 0.7\ \mathrm{rad/s})$ due to the application context (human transportation). All the modules work synchronously on the on-board Pentium III 850 MHz at the frequency of the laser sensor (5 Hz).

We have intensively tested the system with our mobile robot in and out of our laboratory, with engineers and with the intended final users of the wheelchair. In this paper, we describe two experiments that we understand will give insight into the performance and benefits of the proposed approach. Firstly, we describe a real run where a child explored our department building; here, we focus on the properties of the modeling module. Secondly, we outline a controlled experiment in our laboratory to illustrate the performance of the local sensor-based motion system.

5.1 Experiment 1: user guided experiment

We describe next a test where a cognitively disabled child drove the vehicle during rush hour at the University of Zaragoza. By using voice commands, he placed goal locations that the motion system autonomously attained.


Fig. 7 Two snapshots of Experiment 1: (a) moving in an office-like scenario and (b) traveling along a corridor

Notice that in this case the user is responsible for the global aspects of the motion task while the motion system locally generates the motion.

In the experiment, the child drove the vehicle out of the laboratory (Fig. 7a), explored the department (long corridor, Fig. 7b) and came back to the initial location without collisions. The duration of the experiment was around 20 minutes (including a break of five minutes to calm and relax the child) and the traveled distance was 110 meters. From the motion generation point of view, the performance was very good, since the vehicle achieved all the goal locations without collisions. Notice that navigation under these circumstances is not easy, since the scenario was unknown and not prepared for moving a wheelchair (in many places there was little room to maneuver). In addition, people turned the scenario into a dynamic and unpredictable place and sometimes modified the structure of the environment, creating difficult motion situations.

Let us focus on the performance of the modeling module. We implemented the map of static features as a 20 m × 20 m grid map with a 5 cm cell resolution, centered at the robot location. This spatial domain is large enough to include the goal locations required for the motion. We selected a 5 cm map resolution since we experimentally observed that it is enough for obstacle avoidance. The size of the map, 400 × 400 cells, is close to the limit for the planner to comply with the (worst-case) real-time requirements.

Figure 8 shows the raw data of the experiment. From this data, at each point in time, the modeling module computed a map of dynamic objects, including their velocities, and a map of static obstacles. Figure 9 shows the blueprint of the scenario, the trajectory of the vehicle and the models (static and dynamic) computed at two given times in two different places.

Fig. 8 This figure shows the raw laser data integrated using the vehicle odometry, and the trajectory of the vehicle

Notice how, despite the odometry errors, the local maps of the static structure were correct for the purposes of motion generation. Furthermore, notice how the dynamic objects do not appear in the static map and vice versa. In other words, the static and dynamic structures are separated. There were more than 100 moving objects (people) detected by the system during this experiment. The majority of them corresponded to people moving; however, some of them were false moving objects. These rare situations occurred when the laser beams were almost parallel to the reflecting surface and produced specular reflections, or when the beams missed an object because its height was similar to the height at which the laser scanner is placed (around 70 cm). In the latter case, due to the motion of the wheelchair, the laser oscillates and sometimes misses the obstacle.



Fig. 9 The right part of the figure shows the trajectory of the experiment within the blueprint of the building. The left part shows the map of moving objects and the map of static obstacles at two different points in time. In the dynamic maps, the rectangles represent the estimated locations of the moving objects being tracked, containing the observations associated to each of them. The straight lines represent the estimated velocities of the moving objects. In the static grid maps, white represents free space, black the obstacles and gray unknown space


Fig. 10 (a) Number of iterations until convergence for each step of Experiment 1 and (b) the corresponding computation time

As a result, these objects were located in the free-space area, classified as dynamic and tracked accordingly.

For real-time operation, it is worth mentioning the low computational requirements of the proposed technique. Figures 10a and b show the number of iterations and the computation time at each step. Despite the clock resolution of the computer (10 ms), the figures reflect that the cost per iteration is constant. This cost depends on the number of points of the scan, the grid resolution (number of static points) and the number of moving objects. The mean values for the whole process are 14.1 iterations and 21 ms.

The time spent in the update of the map and in the prediction and update of the filters is negligible, below the clock resolution. The proposed method provides a fast solution to the local modeling problem, which is important for integrating it with the other modules of the architecture for real-time operation.

Although it is beyond the application and context of the paper, we understand that it is interesting to discuss the performance of the technique facing larger scenarios. To test this situation, we processed offline the previous dataset (Fig. 8), but using a grid map of 80 m × 80 m to represent the whole area covered by the experiment.



Fig. 11 (a) Map of the static structure, the robot trajectory and the blueprint of the building (notice that the architect's original plan was modified during construction). (b) Trajectories of the dynamic obstacles and the blueprint. (c) Effect of moving obstacles on the map. The figure shows the map obtained considering all the obstacles as static. Note how the dynamic objects affect the displacement estimation along the corridor. As a result, the estimated final vehicle location is far from the original one and the map is corrupted


In this mapping context, this experiment is not easy, since the laboratory (the first room) is very unstructured due to the presence of chairs, tables and other people, and the corridor is very long and does not contain much information along its own direction (which makes it difficult to correct the robot displacement in the presence of moving people).

The performance of the mapping technique was very satisfactory, since it was still able to separate the 100 dynamic obstacles from the static structure (map of static parts), and thus they did not affect the estimated vehicle pose. Figures 11a and b show the final map of the static structure and the trajectories of the dynamic objects tracked, with the blueprint as reference. Notice that the map of static objects fits the blueprint very well, which is an indicator of the quality of the map. In the bottom part of the blueprint, the trajectories go through a wall; this is because the final building was modified after the architect's plan was drawn. Furthermore, the map of the dynamic structure shows 100 trajectories that correspond to the people who moved around. Notice how all the trajectories lie in free space, which is also a good indicator of the quality of the dynamic map.

To check the influence of the dynamic objects, we ran our method without considering the moving objects (i.e., treating all the measurements as static). The moving objects strongly corrupted the resulting map, not only in the motion along the corridor but also in the other directions. For instance, the corridor of Fig. 11c is slightly curved due to orientation errors accumulated when exploring it, and the final location error is large. This result is consistent with the difficulty that many researchers have reported for map building in the presence of dynamic obstacles (Hähnel et al. 2002) and stresses the advantage of the proposed technique.

5.2 Experiment 2: fully autonomous experiment

We next describe a more academic experiment to derive conclusions for the local sensor-based motion system, focusing on navigation performance. The objective of the experiment was to get the wheelchair out of the laboratory (Fig. 12a). The whole scenario was initially unknown, and only the target location was given in advance to the system. Initially, the vehicle proceeded toward Passage 1, avoiding collisions with the people moving around. Then, we blocked the passage, creating a global trap situation that was detected by the system. The vehicle moved backwards through Passage 2 and then traversed the Door, exiting the room and reaching the goal location without collisions. The time of the experiment was 200 s and the distance traveled around 18 m.

As in the previous example, the performance of the modeling module was good enough for the other modules of the architecture. Figure 12b shows the raw laser data using the odometry readings, and Fig. 12c shows the final map produced by the modeling module when the vehicle reached the goal location. The trajectories of the moving objects tracked during the experiment are shown in Fig. 12d. Most of them correspond to people walking in the free space of the office. There were also some false positives due to misclassification, which occurred mainly in the same situations as in Experiment 1. Regarding navigation, the motion system reliably drove the vehicle to the final location without collisions. Recall that the maps generated by the modeling module are the basis for planning and obstacle avoidance. In general, a rough model or odometry readings alone are not enough and would likely lead to navigation mission failures.





Fig. 12 (a) Snapshot of Experiment 2. The objective was to drive the vehicle out of the office through the Door. (b) Real laser data and trajectory of Experiment 2 using the raw odometry readings. (c) The map built during the experiment and the vehicle trajectory. The map shows the occupancy probability of each cell: white corresponds to a probability of zero (free cell) and black to a probability of one (occupied cell). (d) The trajectories of the detected moving objects




Fig. 13 (a) A moving obstacle placed in the area of the passage and (b) the robot avoiding a trap situation. The figures show the tracked moving objects (rectangles), the dynamic observations associated to them and the estimated velocities. The two arrows on the vehicle show the cruise computed by the planner module and the direction of motion computed by the obstacle avoidance one

Fig. 14 (a) Moving obstacle going toward the robot. The figure shows the tracked moving objects (rectangles), the dynamic observations associated to them and the estimated velocities. The two arrows on the vehicle show the cruise computed by the planner module and the direction of motion computed by the obstacle avoidance one. (b) Detail of the robot maneuver to cross the Door


The quality of the model and of the localization is especially relevant to avoid obstacles no longer perceived by the sensor due to visibility constraints; to deal with narrow passages, where accumulated errors can block the passage even if there is enough space to maneuver; and to approach the vehicle to the desired final position with enough precision. All these situations were correctly managed thanks to the quality of the models generated with the proposed modeling technique.

We next describe several situations where the selective use of the static and dynamic information improved the motion generation.

The planner computed at every point in time the tactical information needed to guide the vehicle out of trap situations (the cruise), using only the static information. The most representative situations happened in Passage 1.



While the vehicle was heading along this passage, people were crossing it. However, since the humans were tracked and labeled as dynamic, they were not used by the planner; thus the cruise pointed toward this passage (Fig. 13a) and the vehicle aligned with this direction. Notice that systems that do not model dynamic obstacles would consider the human static and the vehicle trapped within a U-shaped obstacle.7

Next, a human placed an obstacle in the passage when the vehicle was about to reach it. The vehicle was trapped in a large U-shaped obstacle. After a given period, the modeling module included this obstacle in the static map passed to the planner. Immediately, the planner computed a cruise that pointed toward Passage 2. The vehicle was driven toward this passage, avoiding the trap situation (Fig. 13b).

The obstacle avoidance module computed the motion taking into account the geometric, kinematic and dynamic constraints of the vehicle (Minguez and Montano 2002; Minguez et al. 2006a). The method used the static information included in the map and also the predicted collision locations of the objects, computed using the obstacle velocities. Figure 14a depicts an object moving toward the robot and how the predicted collision creates an avoidance maneuver. Note that, although the Nearness Diagram does not consider dynamic objects, the predicted collision location allows it to anticipate the maneuver. Furthermore, obstacles that move away from the robot are not considered: in Fig. 13b the two dynamic obstacles were not included in the avoidance step (whereas systems that do not model the dynamic objects would consider them).
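To make the collision prediction concept concrete, the sketch below rolls the robot and an obstacle forward under a constant-velocity assumption and returns the obstacle position at the first predicted collision, which can then be fed to the avoidance method as if it were a static obstacle. The function and parameter names are hypothetical; this is a minimal illustration of the idea, not the system's code.

    import numpy as np

    def predicted_collision_point(robot_pos, robot_vel, obs_pos, obs_vel,
                                  safety_radius, horizon=5.0, dt=0.1):
        """Simulate both trajectories under a constant-velocity model and
        return the obstacle position at the first predicted collision,
        or None if no collision occurs within the time horizon."""
        for t in np.arange(0.0, horizon, dt):
            r = robot_pos + robot_vel * t
            o = obs_pos + obs_vel * t
            if np.linalg.norm(r - o) < safety_radius:
                return o  # treated as a static obstacle by the avoidance step
        return None  # obstacle moving away or passing far: ignore it

    # A person crossing the robot's path about 2 m ahead
    p = predicted_collision_point(np.array([0.0, 0.0]), np.array([0.5, 0.0]),
                                  np.array([2.0, -1.0]), np.array([0.0, 0.3]),
                                  safety_radius=0.6)
    print(p)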

The performance of the obstacle avoidance module was decisive in some circumstances, especially when the vehicle was driven through very narrow zones. For example, when it crossed the door (Fig. 14b), there was less than 0.1 m at each side of the robot. The motion computed by the obstacle avoidance module was free of oscillations, even when it was directed toward zones with a high density of obstacles or away from the final position. All the robot constraints were considered by the obstacle avoidance method, generating feasible motions in the different situations. That is, the method achieved robust navigation in difficult and realistic scenarios.

In summary, the modeling module was able to model the static and dynamic parts of the environment. The selective use of this information allows the planning and obstacle avoidance modules to avoid the undesirable situations that arise from false trap situations and improves the obstacle avoidance task. Furthermore, the integration within the architecture makes it possible to fully exploit the advantages of hybrid sensor-based navigation systems, which perform well in difficult scenarios while avoiding typical problems such as trap situations.

7 This situation is similar to the one depicted in Fig. 1b.

6 Discussion and conclusions

In this paper we have addressed two issues of great relevance in local sensor-based motion: how to model the static and dynamic parts of the scenario, and how to integrate this information with a local sensor-based motion planning system.

Regarding the modeling aspect, most previous works (Schulz et al. 2003; Wang and Thorpe 2002; Wang et al. 2003; Hähnel et al. 2002) assume a known classification within the optimization process; that is, the classification is done prior to the estimation of the vehicle location. They focus on the reliable tracking of the moving objects or on the construction of accurate maps. The algorithm proposed in Hähnel et al. (2003) iteratively improves the classification via an EM algorithm; it is a batch technique that focuses on the detection of spurious measurements to improve the quality of the map. The approach presented in Modayil and Kuipers (2004a, 2004b) applies learning techniques, but does not improve the vehicle localization and does not use probabilistic techniques to track the moving objects. Our contribution in the modeling aspect is to incorporate the information about the moving objects within a maximum likelihood formulation of the scan matching process. In this way, the nature of the observations is included in the estimation process. The result is an improved classification of the observations, which increases the robustness of the algorithm and improves the robot pose estimate, the map and the moving objects' locations.

However, the drawback of this type of technique is that it does not consider the uncertainties and the corresponding correlations of the robot poses, the map and the moving objects. Moreover, the set of poses is fixed and cannot be modified in subsequent steps, which is a problem when closing loops if the accumulated error is large. Also, although the algorithm has some tolerance to wrong classifications, these will produce errors that accumulate in the local models. In the case of sensor-based navigation, however, the spatial domain of the problem is small and thus allows us to obtain sufficiently accurate models for real-time operation.

In any case, all the mapping methods assume a hard classification between static and dynamic objects. This is clearly a simplification of the real world, since there are objects that can act as static or dynamic: for instance, doors, chairs, tables, cars, etc. Although there exists some preliminary work on the estimation of the state of some of these objects (Stachniss and Burgard 2005; Biber and Duckett 2005), we believe a prior on the behavior of the objects would greatly simplify the problem. This could be obtained using other types of sensors, such as cameras or 3D range sensors.



The second issue is the integration of the modeling module in a local sensor-based motion system that takes advantage of the dynamic and static information. The system selectively uses this information in the planning and obstacle avoidance modules. As a result, many problems of existing techniques (which only address static information) are avoided without sacrificing the advantages of full hybrid sensor-based motion schemes. Notice that the planning and obstacle avoidance strategy relies on the information provided by the modeling module. Since our model assumes a constant linear velocity, the predicted collision may be incorrect when this assumption does not hold. However, this effect is mitigated since the system works at a high rate, rapidly reacting to changes in the moving obstacles' velocities.

One thing to remark is that the local planning strategy is an approximation of the full motion-planning problem with dynamic obstacles (recall that this problem is NP-hard in nature). In this paper we have addressed it with a hybrid system made up of a tactical planning module and an obstacle avoidance technique (a simplification). Although there exist reactive techniques designed to explicitly deal with dynamic obstacles (Fiorini and Shiller 1998; Fraichard and Asama 2004; Owen and Montano 2005), we have selected one that does not account for this information (this is why we use the collision prediction concept). The selection of the reactive technique is a trade-off between performance in very dynamic scenarios and performance in places where it is very difficult to maneuver. This is because it is well known that the techniques that address motion planning under dynamic obstacles are conservative in the motion search space (losing maneuverability in constrained spaces). In our case, due to the wheelchair application, we used a method designed to maneuver in environments with little room (such as doors or narrow corridors), and we improved its behavior in dynamic situations with the collision prediction concept. However, for other applications, nothing prohibits the use of another method within the proposed framework.

The experimental results confirm that the modeling method is able to deal with dynamic environments and to provide sufficiently accurate models for sensor-based navigation. The integration with a sensor-based planning system makes it possible to drive the vehicle in unknown, dynamic scenarios with little space to maneuver. The system avoids the typical trap situations found under realistic operation and, in particular, those created by moving obstacles.

Acknowledgements This work was supported in part by the FCT Programa Operacional Sociedade de Informação (POSC) in the frame of QCA III and the PTDC/EEA-ACR/70174/2006 project, by the Spanish projects DPI2006-07928a and DPI2006-15630-C02-02, and by the European project IST-1-045062.

Appendix A: Likelihood models

This appendix describes the computation of the mean and covariance of the Gaussian likelihood models $p_s(\cdot)$ and $p_d(\cdot)$ of Sect. 3.3.1. So as to obtain an analytical expression, we assume Gaussian uncertainties in the robot pose and in the locations of the static and dynamic objects, and Gaussian noise in the measurement process,

$$x \sim N(x_{true}, P), \qquad (A.1)$$

$$q \sim N(q_{true}, Q), \qquad (A.2)$$

$$z \sim N(z_{true}, R). \qquad (A.3)$$

Note that here we use $q$ as a generic correspondence point for the measurement $z$. Although the computations are the same for static and dynamic correspondences, the covariance matrix for each type of association is different: we use a fixed covariance matrix for grid cells and the uncertainty of the Kalman filter prediction for moving objects.

The function $f(x, q)$ is the transformation of the point $q = (q_x \; q_y)^T$ through the relative location $x = (t_x, t_y, \theta)^T$:

$$f(x, q) = \begin{pmatrix} \cos\theta\, q_x - \sin\theta\, q_y + t_x \\ \sin\theta\, q_x + \cos\theta\, q_y + t_y \end{pmatrix}. \qquad (A.4)$$

Linearizing the function $f(x, q)$ using a first-order Taylor series approximation, we define the likelihood term as

$$p(z \mid x, q) = \int\!\!\int \underbrace{p(z \mid x, q)}_{N(f(x,q),\,R)}\, \underbrace{p(x)}_{N(x,\,P)}\, \underbrace{p(q)}_{N(q,\,Q)}\; dx\, dq = N(z; f(x, q), C) \qquad (A.5)$$

where the covariance matrix $C$ is

$$C = R + J_x P J_x^T + J_q Q J_q^T. \qquad (A.6)$$

The matrices $J_x$ and $J_q$ are the Jacobians of $f(x, q)$ with respect to $x$ and $q$, evaluated at the current estimates:

$$J_x \equiv \left.\frac{\partial f(x, q)}{\partial x}\right|_{x,q} = \begin{pmatrix} 1 & 0 & -q_x \sin\theta - q_y \cos\theta \\ 0 & 1 & q_x \cos\theta - q_y \sin\theta \end{pmatrix},$$

$$J_q \equiv \left.\frac{\partial f(x, q)}{\partial q}\right|_{x,q} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$

In addition, using the function $f(\cdot, \cdot)$ we can define the Mahalanobis distance between $z$ and $q$, which is used to select the appropriate static obstacle from the grid map:

$$D^2_M(z, q) = [f(x, q) - z]^T C^{-1} [f(x, q) - z]. \qquad (A.7)$$
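As a numerical illustration of (A.4)-(A.7), the sketch below evaluates the transformation, its Jacobians, the composite covariance $C$ and the squared Mahalanobis distance for one measurement-correspondence pair. All names and example covariances are hypothetical; the grid-cell covariance is merely tied, for illustration, to the 5 cm resolution used in the experiments.

    import numpy as np

    def f(x, q):
        """Transformation of point q through relative location x (A.4)."""
        tx, ty, th = x
        qx, qy = q
        return np.array([np.cos(th) * qx - np.sin(th) * qy + tx,
                         np.sin(th) * qx + np.cos(th) * qy + ty])

    def jacobians(x, q):
        """Jacobians of f with respect to x and q at the current estimates."""
        _, _, th = x
        qx, qy = q
        Jx = np.array([[1.0, 0.0, -qx * np.sin(th) - qy * np.cos(th)],
                       [0.0, 1.0,  qx * np.cos(th) - qy * np.sin(th)]])
        Jq = np.array([[np.cos(th), -np.sin(th)],
                       [np.sin(th),  np.cos(th)]])
        return Jx, Jq

    def mahalanobis2(z, x, q, P, Q, R):
        """Squared Mahalanobis distance (A.7) with C built as in (A.6)."""
        Jx, Jq = jacobians(x, q)
        C = R + Jx @ P @ Jx.T + Jq @ Q @ Jq.T
        e = f(x, q) - z
        return float(e @ np.linalg.solve(C, e))

    # Hypothetical uncertainties: pose covariance P, cell covariance Q,
    # sensor noise R; gate the association with a chi-square threshold.
    x = np.array([0.10, 0.00, 0.05])
    P = np.diag([1e-4, 1e-4, 1e-5])
    Q = np.eye(2) * 0.025**2          # roughly half a 5 cm cell
    R = np.eye(2) * 0.01**2
    d2 = mahalanobis2(np.array([1.05, 0.55]), x, np.array([1.0, 0.5]), P, Q, R)
    print(d2, d2 < 5.99)              # 5.99 ~ 95% gate for 2 DOF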



Appendix B: M-Step minimization

This appendix addresses the minimization of the function

$$Q(x_k, x_k^{(t)}) = \sum_{i=1}^{N_z} \Big[\, c_{i0}\, (f(x_k, q_i) - z_{k,i})^T P_{i0}^{-1} (f(x_k, q_i) - z_{k,i}) + \sum_{j=1}^{N_O} c_{ij}\, (f(x_k, o_{k,j}) - z_{k,i})^T P_{ij}^{-1} (f(x_k, o_{k,j}) - z_{k,i}) \,\Big]. \qquad (B.1)$$

Due to the nonlinear function $f(\cdot, \cdot)$, one should use an iterative method to minimize $Q(x_k, x_k^{(t)})$. However, since the correspondences change at each iteration of the EM algorithm, we rather use a single iteration to improve the current estimate. This is known as generalized EM (McLachlan and Krishnan 1997), and the algorithm still converges to a local minimum.

Based on the linearization of $f(\cdot, \cdot)$ presented in Appendix A, the estimate of the parameter vector $x_{LS}$ is

$$x_{LS} = [H^T C^{-1} H]^{-1} H^T C^{-1} E \qquad (B.2)$$

where the matrices $E$ and $H$ are formed by the contributions of each measurement $z_{k,i}$ to the function $Q(x_k, x_k^{(t)})$:

$$H = \begin{bmatrix} H_1 \\ \vdots \\ H_{N_z} \end{bmatrix}, \qquad E = \begin{bmatrix} E_1 \\ \vdots \\ E_{N_z} \end{bmatrix}, \qquad C = \begin{bmatrix} C_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & C_{N_z} \end{bmatrix}$$

with

$$H_i = \begin{bmatrix} J_x(x^{(t)}, q_i) \\ J_x(x^{(t)}, o_{k,1}) \\ \vdots \\ J_x(x^{(t)}, o_{k,N_O}) \end{bmatrix}; \qquad E_i = \begin{bmatrix} -(f(x_k^{(t)}, q_i) - z_{k,i}) + J_x(x^{(t)}, q_i)\, x^{(t)} \\ -(f(x_k^{(t)}, o_{k,1}) - z_{k,i}) + J_x(x^{(t)}, o_{k,1})\, x^{(t)} \\ \vdots \\ -(f(x_k^{(t)}, o_{k,N_O}) - z_{k,i}) + J_x(x^{(t)}, o_{k,N_O})\, x^{(t)} \end{bmatrix}, \qquad (B.3)$$

$$C_i = \begin{bmatrix} \frac{1}{c_{i0}} P_{i0} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{1}{c_{i N_O}} P_{i N_O} \end{bmatrix} \qquad (B.4)$$

where $N_z$ is the number of measurements and $N_O$ is the number of moving objects.
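The M-step in (B.2) is a standard weighted least-squares solve. The following sketch (hypothetical names; shapes assumed from the stacked definitions above) performs one such generalized-EM update of the pose estimate.

    import numpy as np

    def gem_pose_update(H, C, E):
        """One generalized-EM M-step, i.e. the weighted least squares of (B.2).
        H: (m, 3) stacked Jacobian blocks, C: (m, m) block-diagonal weight
        matrix, E: (m,) stacked residual terms. Returns x_LS with shape (3,)."""
        Cinv = np.linalg.inv(C)
        A = H.T @ Cinv @ H               # normal-equation matrix H^T C^-1 H
        b = H.T @ Cinv @ E               # right-hand side H^T C^-1 E
        return np.linalg.solve(A, b)

    # Toy usage: two measurements, one 2-row Jacobian block each
    H = np.vstack([[[1.0, 0.0, -0.2], [0.0, 1.0, 0.8]],
                   [[1.0, 0.0, -0.5], [0.0, 1.0, 0.3]]])
    C = np.eye(4) * 0.01                 # equal weights for the toy example
    E = np.array([0.02, -0.01, 0.00, 0.03])
    print(gem_pose_update(H, C, E))      # updated (tx, ty, theta) estimate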

References

Andrade-Cetto, J., & Sanfeliu, A. (2002). Concurrent map building and localization on indoor dynamic environments. International Journal on Pattern Recognition and Artificial Intelligence, 16, 361–374.

Arkin, R. C. (1999). Behavior-based robotics. Cambridge: MIT Press.

Bar-Shalom, Y., & Fortmann, T. E. (1988). Tracking and data association. Mathematics in science and engineering. New York: Academic Press.

Bar-Shalom, Y., Li, X. R., & Kirubarajan, T. (2001). Estimation with applications to tracking and navigation. New York: Wiley.

Bengtsson, O., & Baerveldt, A.-J. (1999). Localization in changing environments by matching laser range scans. In EURobot (pp. 169–176).

Besl, P. J., & McKay, N. D. (1992). A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 239–256.

Biber, P., & Duckett, T. (2005). Dynamic maps for long-term operation of mobile service robots. In Robotics: science and systems, 8–10 June 2005.

Biber, P., & Straßer, W. (2003). The normal distributions transform: a new approach to laser scan matching. In IEEE international conference on intelligent robots and systems, Las Vegas, USA.

Biswas, R., Limketkai, B., Sanner, S., & Thrun, S. (2002). Towards object mapping in dynamic environments with mobile robots. In Proceedings of the conference on intelligent robots and systems (IROS), Lausanne, Switzerland.

Blackman, S., & Popoli, R. (1999). Design and analysis of modern tracking systems. Norwood: Artech House.

Brock, O., & Khatib, O. (1999). High-speed navigation using the global dynamic window approach. In IEEE international conference on robotics and automation (pp. 341–346), Detroit, MI.

Canny, J., & Reif, J. (1987). New lower bound techniques for robot motion planning problems. In Proceedings of the 27th annual IEEE symposium on the foundations of computer science (pp. 49–60).

Castellanos, J. A., & Tardós, J. D. (1999). Mobile robot localization and map building: a multisensor fusion approach. Boston: Kluwer Academic.

Cheeseman, P., & Smith, P. (1986). On the representation and estimation of spatial uncertainty. International Journal of Robotics, 5, 56–68.

Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Part B, 39, 1–38.

Doucet, A., Godsill, S. J., & Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3), 197–208.

Elfes, A. (1989). Occupancy grids: a probabilistic framework for robot perception. PhD thesis.

Fiorini, P., & Shiller, Z. (1998). Motion planning in dynamic environments using velocity obstacles. International Journal of Robotics Research, 17(7), 760–772.

Foisy, A., Hayward, V., & Aubry, S. (1990). The use of awareness in collision prediction. In IEEE international conference on robotics and automation.

Fraichard, T., & Asama, H. (2004). Inevitable collision states. A step towards safer robots? Advanced Robotics, 18(10), 1001–1024.

Gutmann, J.-S., & Konolige, K. (1999). Incremental mapping of large cyclic environments. In Conference on intelligent robots and applications (CIRA), Monterey, CA.

Hähnel, D. (2004). Mapping with mobile robots. PhD thesis, University of Freiburg.

Hähnel, D., Schulz, D., & Burgard, W. (2002). Map building with mobile robots in populated environments. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems.

Hähnel, D., Triebel, R., Burgard, W., & Thrun, S. (2003). Map building with mobile robots in dynamic environments. In Proceedings of the IEEE international conference on robotics and automation (ICRA).

Jensen, B., & Siegwart, R. (2004). Scan alignment with probabilistic distance metric. In Proceedings of the IEEE-RSJ international conference on intelligent robots and systems, Sendai, Japan.

Koenig, S., & Likhachev, M. (2002). Improved fast replanning for robot navigation in unknown terrain. In International conference on robotics and automation, Washington, USA.

Lu, F., & Milios, E. (1997a). Globally consistent range scan alignment for environment mapping. Autonomous Robots, 4, 333–349.

Lu, F., & Milios, E. (1997b). Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems, 18, 249–275.

Martinez-Cantin, R., de Freitas, N., & Castellanos, J. A. (2007). Analysis of particle methods for simultaneous robot localization and mapping and a new algorithm: Marginal-SLAM. In Proceedings of the IEEE international conference on robotics and automation (ICRA), Rome, Italy.

McLachlan, G., & Krishnan, T. (1997). The EM algorithm and extensions. Wiley series in probability and statistics. New York: Wiley.

Minguez, J., & Montano, L. (2002). Robot navigation in very complex dense and cluttered indoor/outdoor environments. In 15th IFAC world congress, Barcelona, Spain.

Minguez, J., & Montano, L. (2004). Nearness diagram (ND) navigation: collision avoidance in troublesome scenarios. IEEE Transactions on Robotics and Automation, 20(1), 45–59.

Minguez, J., & Montano, L. (2005). Sensor-based robot motion generation in unknown, dynamic and troublesome scenarios. Robotics and Autonomous Systems, 52(4), 290–311.

Minguez, J., Montano, L., & Santos-Victor, J. (2006a). Abstracting the vehicle shape and kinematic constraints from the obstacle avoidance methods. Autonomous Robots, 20, 43–59.

Minguez, J., Montesano, L., & Lamiraux, F. (2006b). Metric-based iterative closest point scan matching for sensor displacement estimation. IEEE Transactions on Robotics, 22(5), 1047–1054.

Modayil, J., & Kuipers, B. (2004a). Bootstrap learning for object discovery. In IEEE/RSJ international conference on intelligent robots and systems (IROS-04), Sendai, Japan.

Modayil, J., & Kuipers, B. (2004b). Towards bootstrap learning for object discovery. In AAAI-2004 workshop on anchoring symbols to sensor data.

Montesano, L. (2006). Detection and tracking of moving objects from a mobile platform. Application to navigation and multi-robot localization. PhD thesis, Universidad de Zaragoza.

Montesano, L., Minguez, J., & Montano, L. (2005). Probabilistic scan matching for motion estimation in unstructured environments. In IEEE international conference on intelligent robots and systems (IROS).

Montesano, L., Minguez, J., & Montano, L. (2006). Lessons learned in integration for sensor-based robot navigation systems. International Journal of Advanced Robotic Systems, 3(1), 85–91.

Moravec, H. P., & Elfes, A. (1985). High resolution maps from wide angle sonar. In IEEE international conference on robotics and automation (pp. 116–121), March 1985.

Murphy, K. (2002). Dynamic Bayesian networks: representation, inference and learning. PhD thesis, UC Berkeley, Computer Science Division.

Owen, E., & Montano, L. (2005). Motion planning in dynamic environments using the velocity space. In Proceedings of the IEEE-RSJ international conference on intelligent robots and systems, Edmonton, AB (CA), August 2005.

Philipsen, R., & Siegwart, R. (2003). Smooth and efficient obstacle avoidance for a tour guide robot. In IEEE international conference on robotics and automation, Taipei, Taiwan.

Ratering, S., & Gini, M. (1993). Robot navigation in a known environment with unknown moving obstacles. In International conference on robotics and automation (pp. 25–30), Atlanta, USA.

Schlegel, C. (2004). Navigation and execution for mobile robots in dynamic environments: an integrated approach. PhD thesis, University of Ulm.

Schulz, D., Burgard, W., Fox, D., & Cremers, A. (2001). Tracking multiple moving targets with a mobile robot using particle filters and statistical data association. In IEEE international conference on robotics and automation, Seoul, Korea.

Schulz, D., Burgard, W., Fox, D., & Cremers, A. B. (2003). People tracking with a mobile robot using sample-based joint probabilistic data association filters. International Journal of Robotics Research (IJRR), 22(2), 99–116.

Stachniss, C., & Burgard, W. (2002). An integrated approach to goal-directed obstacle avoidance under dynamic constraints for dynamic environments. In IEEE-RSJ international conference on intelligent robots and systems (pp. 508–513), Switzerland.

Stachniss, C., & Burgard, W. (2005). Mobile robot mapping and localization in non-static environments. In Proceedings of the national conference on artificial intelligence (AAAI), Pittsburgh, PA, USA.

Thrun, S., Burgard, W., & Fox, D. (2000). A real time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. In IEEE international conference on robotics and automation, San Francisco, USA.

Thrun, S., Burgard, W., & Fox, D. (2005). Probabilistic robotics. Cambridge: MIT Press.

Ulrich, I., & Borenstein, J. (2000). VFH*: local obstacle avoidance with look-ahead verification. In IEEE international conference on robotics and automation (pp. 2505–2511), San Francisco, USA.

Wang, C., & Thorpe, C. (2002). Simultaneous localization and mapping with detection and tracking of moving objects. In IEEE international conference on robotics and automation, Washington, USA.

Wang, C.-C., Thorpe, C., & Thrun, S. (2003). Online simultaneous localization and mapping with detection and tracking of moving objects: theory and results from a ground vehicle in crowded urban areas. In Proceedings of the IEEE international conference on robotics and automation (ICRA).

Wang, C., Thorpe, C., Thrun, S., Hebert, M., & Durrant-Whyte, H. (2007). Simultaneous localization, mapping and moving object tracking. International Journal of Robotics Research, 26(9), 889–916.

Wolf, D. F., & Sukhatme, G. S. (2005). Mobile robot simultaneous localization and mapping in dynamic environments. Autonomous Robots, 19(1), 53–65.



Luis Montesano received his degree in Computer Science in 1999 and the Ph.D. degree in computer science and systems engineering in 2006 from the University of Zaragoza, Spain. From 1999 to 2002 he worked as a research engineer in several aerospace and telecommunication projects. He was a member of the Robotics group of the University of Zaragoza from 2002 to 2006. Since 2006, he has been a full-time researcher at the Instituto de Sistemas e Robotica of the Instituto Superior Tecnico, Portugal. His research interests include cognitive robotics, machine learning and advanced Monte Carlo methods.

Javier Minguez received the physics science degree in 1996 from the Universidad Complutense de Madrid, Madrid, Spain, and the Ph.D. degree in computer science and systems engineering in 2002 from the University of Zaragoza, Zaragoza, Spain. He is currently a full-time Researcher in the Robot, Vision, and Real Time Group, University of Zaragoza. His research interests are techniques for reactive navigation and sensor-based motion planning for mobile robots, robot localization and rehabilitation robotics.

Luis Montano was born on September 6, 1958 in Huesca, Spain. He received the industrial engineering degree in 1981 and the PhD degree in 1987 from the University of Zaragoza, Spain. He is a Full Professor of Systems Engineering and Automatic Control at the University of Zaragoza, and has been Head of the Computer Science and Systems Engineering Department of the University of Zaragoza. Currently he is the coordinator of the Ambient Intelligence Division in the Aragon Institute of Engineering Research and of the Robotics, Perception and Real Time group of the University of Zaragoza. He is the principal researcher in several robotics projects, and his major research interests are mobile robot navigation and cooperative robots.