

IEEE TRANSACTIONS ON ROBOTICS, VOL. 23, NO. 4, AUGUST 2007 753

Modelless Guidance for the Docking of Autonomous Vehicles

Goldie Nejat, Member, IEEE, and Beno Benhabib

Abstract—A novel line-of-sight sensing-based modelless guidance strategy is presented for the autonomous docking of robotic vehicles. The novelty of the proposed guidance strategy is twofold: 1) applicability to situations that do not allow for direct proximity measurement of the vehicle and 2) ability to generate short-range docking motion commands without a need for a global sensing-system (calibration) model. Two guidance-based motion-planning methods were developed to provide the vehicle controller with online corrective motion commands: a passive-sensing-based and an active-sensing-based scheme, respectively. The objective of both proposed guidance methods is to minimize the accumulated systematic errors of the vehicle as a result of the long-range travel, while allowing it to converge to its desired pose within random-noise limits. Both techniques were successfully tested via simulations and experiments, and are discussed herein in terms of convergence rate and accuracy, in addition to the types of localization problems for which each method could be specifically more suitable.

Index Terms—Docking, gradient descent, line-of-sight guidance, precision localization.

I. INTRODUCTION

A FUNDAMENTAL requirement for increased autonomy of robotic vehicles is the ability to achieve precise real-time localization without external intervention. Recent research in the field has resulted in the development of several online motion-guidance systems for the docking of land [e.g., industrial automated guided vehicles (AGVs)], sea (e.g., underwater), and air (e.g., spacecraft and satellites) vehicles to stationary or moving piers [1]–[11]. Two-stage motion-planning methods have been frequently advocated: in the first stage, an "approximate" movement is generated toward the desired goal (long-range positioning), whereas during the second movement (docking), a "precise" corrective motion is executed based on task-space feedback.

An effective real-time (corrective) fine-motion control can only be achieved using online motion planning. The use of external noncontact task-space sensors, for passive or active sensing, has often been advocated for this purpose. Research in the field has resulted in the proposed use of either 1) proximity

Manuscript received September 6, 2006; revised January 8, 2007. This paper was recommended for publication by Associate Editor P. Rives and Editor H. Arai upon evaluation of the reviewers' comments. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

G. Nejat is with the Department of Mechanical Engineering at the State University of New York at Stony Brook, Stony Brook, NY 11794-2300 USA (e-mail: [email protected]).

B. Benhabib is with the Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada.

Digital Object Identifier 10.1109/TRO.2007.900634

sensors for navigation-based path planning [3], [10], or 2) high-speed cameras for visual servoing [12], [13].

Passive-sensing techniques require the sensory-system configuration to remain static during the measurement process. Active sensing, on the other hand, refers to the improvement of parameter-measurement accuracy by controlling the dynamic placement of the sensor(s) with respect to the object-of-interest, possibly using an iterative technique [14]. The key difference between the two methods is that, for the latter, the sensors are continuously relocated to obtain accurate data.

Although cameras provide several significant advantages over other types of noncontact sensors, acquired images tend to be influenced by numerous external parameters. Low accuracy and unacceptably slow image-acquisition and processing rates also act as significant limitations for real-time applications. In response, several types of proximity sensors have been used for docking applications. In [1], lateral distances of the front and back of a bus to the edge of a platform are measured using two laser sensors placed on the bus and retroreflectors attached to the platform. In [4], a stationary rotating laser slit beam (LSB) is used to measure the time difference of when the LSB light passes between symmetric pairs of detectors on the vehicle to determine its position. Alternatively, in [6], IR emitters/receivers are used for guiding two modules of a self-reconfigurable robot to align for docking based on distance and orientation information.

The majority of docking methods rely on the direct measurement of the relative or absolute vehicle pose (position and orientation). However, frequently, even in the presence of task-space sensors, a vehicle's pose cannot be determined accurately due to the inability of the proximity sensors to measure orientation as precisely as position. This can cause a vehicle to undershoot/overshoot its desired pose and, hence, possibly never converge to it. This drawback may be overcome by utilizing a guidance-based method to guide the vehicle to its desired docking pose, within required tolerances, using indirect proximity measurements [15], [16].

The abovementioned guidance-based methods, however, require the existence and use of an accurate mathematical calibration model of the sensing system, which relates the sensory information measured at the vehicle's actual pose to position errors in world or relative coordinates, in order to guide the autonomous vehicle. The accuracy of the sensing-system calibration model is application-dependent and primarily depends on how precisely the vehicle needs to achieve its desired pose (i.e., within millimeters or micrometers).

In practice, situations may arise where a sensing-system calibration model may not be available to the guidance system. In such cases, the need for a modelless guidance method

1552-3098/$25.00 © 2007 IEEE


Fig. 1 Utilization of LOS concept. (a) Desired versus (b) actual vehicle pose.

arises. Herein, we address the localization problem by proposing a novel LOS-based guidance strategy that solely uses indirect proximity measurements to determine vehicle-motion commands in an online mode. Two different motion-planning methods are proposed: the first uses a passive sensing scheme, while the second uses active sensing.

II. LINE-OF-SIGHT BASED TASK-SPACE SENSING

The line-of-sight (LOS) concept is implemented in our work to address two main limitations inherent to robotic task-space sensors: 1) inability to measure orientation with comparable accuracy to position and 2) limited working range. In our proposed system, an individual LOS sensing module comprises a laser source, a galvanometer mirror, and a detector [a position-sensitive diode (PSD) in our setup], Fig. 1. The detector is attached to the vehicle (i.e., target), while the laser beam is aligned using the galvanometer mirror to hit the center of the detector when the vehicle is at its desired pose. The laser beam, hence, defines the desired LOS.

The LOS sensing module can be optimally configured into a spatial multi-LOS system, where the number and types of LOSs in configuring a multi-LOS sensing system would depend on: 1) the mobility requirement for the docking problem at hand and 2) the motion range (work space) of the vehicle [15]. For example, a planar vehicle moving along a straight line that needs to dock at a particular point on the line would require only one LOS with a 1-DOF galvanometer mirror providing a rotation about a single axis. A satellite docking in 3-D space, on the other hand, would require at least three spatial LOSs, or more if there are possible obstructions in the workspace or redundancy is required for increased certainty (i.e., sensor fusion). A spatial LOS can be achieved with two 1-DOF mirrors providing two rotations about two orthogonal axes, respectively.

It is assumed that the autonomous vehicle is initially commanded to move to its desired pose during long-range positioning by using closed-loop feedback from the actuator-level sensors. However, since such internal sensors only provide direct measurement of the motion of the actuators of the vehicle (not of the vehicle itself), they cannot effectively compensate for systematic errors. Thus, external, noncontact LOS sensors need to be used to improve the vehicle's accuracy (i.e., minimize systematic errors), without undermining its performance, via effective guidance during the final docking phase. In practice, regardless of the distance traveled, after a vehicle is positioned at its desired pose via long-range positioning, the LOS laser beam would hit the detector (placed on the vehicle) at an offset from its center due to systematic errors and random noise, Fig. 1. These offsets represent the acquired external proximity-sensing data. They are defined in the individual detectors' frames of reference and not in a global world frame; therefore, they are considered "indirect" measurements. They may, however, be used effectively as feedback to an iterative guidance method, which would reduce their values, through corrective motion of the vehicle, to the level of the random noise.

III. MODELLESS GUIDANCE

Both modelless guidance methods presented in this section incorporate multi-LOS sensing with corrective motion actions that an autonomous vehicle would need to undertake in order to achieve its desired pose within required tolerances. The proposed guidance strategy, used by both methods, does not require the sensing system's calibration model in determining the necessary optimal corrective vehicle movements, but rather solely uses offset information from the LOS modules to provide guidance.

The motion-optimization problem minimizes the detector offset f subject to a set of constraints, where f is defined as a function of the corrective actions of the vehicle, i.e., f(∆x, ∆y, ∆z, ∆γ, ∆β, ∆α), or as a function of the scanner angles, f(ζ):

$$\text{Minimize } f \quad \text{subject to} \quad g_i \le (\text{random-noise limits}), \quad i = 1, \dots, m. \tag{1}$$

A. Passive Sensing-Based Guidance

The passive sensing-based guidance method requires that the LOSs be initially aligned to hit the centers of the detectors placed on the vehicle when it is at its desired pose, and that they remain locked throughout the docking of the vehicle. Namely, only the vehicle is moved via corrective actions, while the LOSs remain static at their desired docking angles, ζdesired, pretaught by a teaching-by-demonstration method.

During operation, the vehicle is commanded to move to its desired pose, ${}^{w}T_{dv}(x_d, y_d, z_d, \gamma_d, \beta_d, \alpha_d)$, defined with respect to the world coordinate frame $F_w$ (i.e., long-range positioning), while the galvanometer mirrors simultaneously align the three LOSs to their pretaught desired angles. However, due to both systematic errors and random noise in vehicle motion, the vehicle only achieves an actual pose defined by ${}^{w}T_{av}(x_a, y_a, z_a, \gamma_a, \beta_a, \alpha_a)$, Fig. 2, and the LOS beams hit the detectors at offsets from their centers. These offsets are measured in each detector's frame $F_{di}$, Fig. 2, and are used as feedback for the guidance of the vehicle to its desired pose via the gradient-descent method.

The overall guidance procedure of localization (i.e., docking) is iterative in nature, and stops when the vehicle is deemed to have converged to its desired pose, within acceptable tolerance levels.

Fig. 2 Measurement of PSD offsets.

1) The Gradient-Descent Method: A gradient-descent-type method is used to determine the necessary incremental corrective motions of the vehicle in all degrees of freedom (i.e., ∆x, ∆y, ∆z, ∆γ, ∆β, and ∆α) based on the detected PSD offsets. The rotational and translational motion commands are calculated separately. The corrective rotations (∆γ, ∆β, and ∆α) follow the world-coordinate notation and are rotations about the same axes in the current frame of interest, i.e., $F_c$.

a) Rotational-motion commands: PSD1: As can be noted in Fig. 2, $F_{d1}$ is perpendicular to the x-axis of the vehicle's (actual-pose) frame ${}^{a}F_c$ and, hence, only rotational errors about the other two axes (β and α) can be detected here. Utilizing the relationship sin(θ) ≅ θ for small angles in radians,

$$\beta_1 = \frac{e_{1z}}{r_{1z}} \quad \text{and} \quad \alpha_1 = \frac{e_{1y}}{r_{1y}}, \tag{2}$$

where $e_{1y}$ and $e_{1z}$ are the y- and z-projections of $e_1$ on $F_{d1}$, respectively, and $r_{1y}$ and $r_{1z}$ are the distances from the projections of the measured detector offsets to the center of the vehicle at its actual pose ${}^{a}F_c$, which can further be defined as

$$r_{1x} = h_1, \quad r_{1y} = \sqrt{e_{1y}^2 + h_1^2}, \quad r_{1z} = \sqrt{e_{1z}^2 + h_1^2}, \tag{3}$$

where $h_1$ is the distance from the center of ${}^{a}F_c$ to the center of $F_{d1}$.

PSD2: As can be noted in Fig. 2, $F_{d2}$ is perpendicular to the y-axis of the vehicle's (actual-pose) frame ${}^{a}F_c$ and, hence, only rotational errors about the other two axes (γ and α) can be detected here:

$$\gamma_2 = \frac{e_{2z}}{r_{2z}} \quad \text{and} \quad \alpha_2 = \frac{e_{2x}}{r_{2x}}, \tag{4}$$

where

$$r_{2x} = \sqrt{e_{2x}^2 + h_2^2}, \quad r_{2y} = h_2, \quad r_{2z} = \sqrt{e_{2z}^2 + h_2^2}. \tag{5}$$

PSD3: As can be noted in Fig. 2, $F_{d3}$ is perpendicular to the z-axis of the vehicle's (actual-pose) frame ${}^{a}F_c$ and, hence, only rotational errors about the other two axes (γ and β) can be detected here:

$$\gamma_3 = \frac{e_{3y}}{r_{3y}} \quad \text{and} \quad \beta_3 = \frac{e_{3x}}{r_{3x}}, \tag{6}$$

where

$$r_{3x} = \sqrt{e_{3x}^2 + h_3^2}, \quad r_{3y} = \sqrt{e_{3y}^2 + h_3^2}, \quad r_{3z} = h_3. \tag{7}$$

The above rotational angles are averaged to determine the overall rotational movement required in (γ, β, and α):

$$\Delta\gamma' = \frac{\gamma_2 + \gamma_3}{2}, \quad \Delta\beta' = \frac{\beta_1 + \beta_3}{2}, \quad \Delta\alpha' = \frac{\alpha_1 + \alpha_2}{2}. \tag{8}$$
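As a minimal sketch, (2)–(8) can be coded directly from measured PSD offsets. The dictionary interface, unit conventions, and the sample values in the usage note are illustrative assumptions, not the paper's data; the offset layout (e.g., $e_1$ having only y- and z-projections on PSD1) follows Fig. 2.

```python
import math

def rotational_commands(e, h):
    """Raw corrective rotations (d_gamma, d_beta, d_alpha) per (2)-(8).

    e: 2-D offsets per detector (illustrative layout, per Fig. 2):
       e[1] = (e1y, e1z) on PSD1, e[2] = (e2x, e2z) on PSD2,
       e[3] = (e3x, e3y) on PSD3.
    h: distances h_i from the vehicle center to each detector center.
    """
    e1y, e1z = e[1]
    e2x, e2z = e[2]
    e3x, e3y = e[3]

    # Lever arms from the offset projections to the vehicle center, (3), (5), (7)
    r1y = math.sqrt(e1y**2 + h[1]**2)
    r1z = math.sqrt(e1z**2 + h[1]**2)
    r2x = math.sqrt(e2x**2 + h[2]**2)
    r2z = math.sqrt(e2z**2 + h[2]**2)
    r3x = math.sqrt(e3x**2 + h[3]**2)
    r3y = math.sqrt(e3y**2 + h[3]**2)

    # Small-angle estimates from each detector, (2), (4), (6)
    beta1, alpha1 = e1z / r1z, e1y / r1y
    gamma2, alpha2 = e2z / r2z, e2x / r2x
    gamma3, beta3 = e3y / r3y, e3x / r3x

    # Averaged rotational commands, (8)
    d_gamma = (gamma2 + gamma3) / 2
    d_beta = (beta1 + beta3) / 2
    d_alpha = (alpha1 + alpha2) / 2
    return d_gamma, d_beta, d_alpha
```

For example, with all offsets zero the commands vanish, and a positive $e_{1z}$ alone yields a positive ∆β′, consistent with (2) and (8).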

b) Translational-motion commands: The translational-motion commands are determined by minimizing the following sum-of-squared-errors (i.e., detector offsets) cost function:

$$\text{trans}_{\text{error}} = \|e_1\|^2 + \|e_2\|^2 + \|e_3\|^2. \tag{9}$$

The motion commands are determined as follows:

$$\Delta x' = 2(e_{2x} + e_{3x}), \quad \Delta y' = 2(e_{1y} + e_{3y}), \quad \Delta z' = 2(e_{1z} + e_{2z}). \tag{10}$$

Since the detectors are two-dimensional, each offset e has only two coordinate projections. Equation (10) was derived by minimizing the cost function in (9), creating a direct relationship between the PSD offsets and the translational errors. The factor of 2 arises from this minimization, and the formulation has been shown to work effectively when both rotational and translational errors are present in the system. For example, when rotational errors are present, utilizing an averaging and weighting scheme for the translational motion commands, as opposed to the proposed cost function, would result in motion commands one to two orders of magnitude smaller than the true translational errors in the majority of cases and, hence, slower convergence. If only translational errors were present, however, an averaging scheme would be sufficient, and one has been incorporated into the method to account for such scenarios. Since we are dealing with 6-DOF cases, rotational errors will be present in the majority of cases. In order to minimize overshooting in both scenarios, we have introduced the following weighting scheme.
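The translational part, (10), can be sketched as follows; the per-detector offset layout mirrors Fig. 2 and the function interface is our own illustrative assumption.

```python
def translational_commands(e):
    """Raw corrective translations (dx, dy, dz) per (10).

    e: 2-D offsets per detector (illustrative layout, per Fig. 2):
       e[1] = (e1y, e1z), e[2] = (e2x, e2z), e[3] = (e3x, e3y).
    """
    e1y, e1z = e[1]
    e2x, e2z = e[2]
    e3x, e3y = e[3]
    # Each axis command sums the two detectors that observe that axis,
    # scaled by the factor of 2 from minimizing the cost function (9).
    dx = 2 * (e2x + e3x)
    dy = 2 * (e1y + e3y)
    dz = 2 * (e1z + e2z)
    return dx, dy, dz
```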

2) Weighting Scheme: In order to prevent excessive overshooting in the vehicle's motion and improve the rate of convergence, the vehicle may be commanded to move by weighted amounts:

$$\Delta x = \eta_x \Delta x', \quad \Delta y = \eta_y \Delta y', \quad \Delta z = \eta_z \Delta z',$$
$$\Delta\gamma = \eta_\gamma \Delta\gamma', \quad \Delta\beta = \eta_\beta \Delta\beta', \quad \Delta\alpha = \eta_\alpha \Delta\alpha'. \tag{11}$$

Herein, these weights η are determined by first comparing the motion commands generated in each degree of mobility with respect to the repeatability of the vehicle in each degree of freedom, e.g.,

$$x_{\text{ratio}} = \frac{\text{abs}(\Delta x')}{\text{Vehicle repeatability}_x}, \quad \gamma_{\text{ratio}} = \frac{\text{abs}(\Delta\gamma')}{\text{Vehicle repeatability}_\gamma}. \tag{12}$$

The y_ratio, z_ratio, β_ratio, and α_ratio can be calculated similarly. Then, each movement is compared to the total movement of the vehicle:

$$\eta_x = \frac{x_{\text{ratio}}}{\text{total}_{\text{ratio}}}, \quad \eta_y = \frac{y_{\text{ratio}}}{\text{total}_{\text{ratio}}}, \quad \eta_z = \frac{z_{\text{ratio}}}{\text{total}_{\text{ratio}}},$$
$$\eta_\gamma = \frac{\gamma_{\text{ratio}}}{\text{total}_{\text{ratio}}}, \quad \eta_\beta = \frac{\beta_{\text{ratio}}}{\text{total}_{\text{ratio}}}, \quad \eta_\alpha = \frac{\alpha_{\text{ratio}}}{\text{total}_{\text{ratio}}}, \tag{13}$$

where

$$\text{total}_{\text{ratio}} = x_{\text{ratio}} + y_{\text{ratio}} + z_{\text{ratio}} + \gamma_{\text{ratio}} + \beta_{\text{ratio}} + \alpha_{\text{ratio}}. \tag{14}$$
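The weighting scheme (11)–(14) amounts to scaling each raw command by its share of the total repeatability-normalized motion. A sketch, with a zero-motion guard that is our addition (the paper does not discuss the fully converged case):

```python
def weighted_commands(raw, repeatability):
    """Weighted motion commands per (11)-(14).

    raw: raw commands per degree of freedom, e.g. {'x': dx, ..., 'alpha': da}.
    repeatability: vehicle repeatability per degree of freedom (same keys).
    """
    # Repeatability-normalized ratios, (12)
    ratios = {k: abs(raw[k]) / repeatability[k] for k in raw}
    total = sum(ratios.values())            # (14)
    if total == 0.0:
        # Already converged: no motion commanded (our guard, not in the paper)
        return {k: 0.0 for k in raw}
    eta = {k: r / total for k, r in ratios.items()}   # (13)
    return {k: eta[k] * raw[k] for k in raw}          # (11)
```

Note that the weights sum to one, so the dominant error direction receives the largest share of the commanded motion.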

The above algorithm is used when both rotational and translational errors exist. Although the offsets are coupled, the translational and rotational errors can be calculated separately. Once ∆x, ∆y, ∆z, ∆γ, ∆β, and ∆α are determined, they are represented in transformation-matrix form ${}^{dv}T_{\Delta}$, where the transformation is an estimate of ${}^{dv}T_{av}$, i.e., ${}^{dv}T_{\Delta} \cong {}^{dv}T_{av}$. The implemented motion command of the vehicle is defined to be the inverse matrix transformation, ${}^{dv}T_{\Delta}^{-1}$.

B. Active Sensing-Based Guidance

The objective of the active sensing-based, three-stage guidance method is to actively use the LOSs to move the vehicle along a guidance trajectory to its desired pose.

Stage 1: The scanner angles corresponding to the vehicle's current location are determined by LOS scanning. Each of the three LOSs is defined by two orthogonal angles, ζij,actual (i = 1 to 3 and j = 1 to 2). The flexible-polyhedron search method [17] was used in our work to determine these angles (within noise limits). Namely, an optimization search is carried out that minimizes the absolute distance between the actual point where the LOS hits the PSD and the PSD's center point. Scanner realignments are carried out independently and concurrently for each LOS. This optimization search is necessary because the detector offsets only provide the magnitude of the error of where the LOSs hit; the directions in which to move the two angles are unknown, since information about the sensing-system calibration model is not available.
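The paper uses the flexible-polyhedron (simplex) search [17] for this step; as a hedged sketch, any derivative-free search over the two angles that relies only on the measured offset magnitude will do. Below, a simple compass (pattern) search stands in for the flexible-polyhedron method, and the callable `offset_norm` is a hypothetical stand-in for physically reading the PSD at a given pair of scanner angles.

```python
def find_scanner_angles(offset_norm, z0, step=1e-3, tol=1e-6, max_iter=500):
    """Derivative-free search for the two scanner angles minimizing |offset|.

    offset_norm: callable (z1, z2) -> offset magnitude (stand-in for a PSD
                 reading at those angles); z0: initial angle pair.
    This compass search is used here in place of the flexible-polyhedron
    method; both require only offset magnitudes, no calibration model.
    """
    z = list(z0)
    f = offset_norm(*z)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for j in (0, 1):              # probe each angle axis
            for d in (+step, -step):  # in both directions
                trial = list(z)
                trial[j] += d
                ft = offset_norm(*trial)
                if ft < f:            # keep any improving move
                    z, f = trial, ft
                    improved = True
        if not improved:
            step /= 2                 # shrink the pattern (cf. simplex contraction)
        it += 1
    return tuple(z), f
```

With a synthetic quadratic offset function centered at known angles, the search recovers those angles to well within the step tolerance.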

Stage 2: The actual LOS angles, ζactual, determined in Stage 1, and the desired angles, ζdesired, are used to derive a set of continuous guidance trajectories for the active movement of the LOSs. These trajectories are used to effectively and efficiently guide the vehicle from its current actual pose to its desired pose. As the vehicle is actively guided, sensory data are continuously sampled and used to update the LOS guidance trajectories. The trajectories are defined to be straight lines in the LOS angular space, along which LOS movements are uniformly distributed.

In order to minimize the detrimental effect of systematic errors during the vehicle motion, the number of sensory-data samples taken along the above path can be optimized in terms of the smallest allowable vehicle motion. Herein, the number of samples is determined by dividing the initial detector offsets measured along the three PSDs, i = 1 to 3 (after long-range positioning), by n (n > 1) multiples of the repeatability of the vehicle and, then, setting the number of samples equal to the maximum value:

$$\#\,\text{Samples} = \max_i \left( \frac{\text{PSD}_{\text{offset},i}}{n \times \text{Vehicle repeatability}} \right). \tag{15}$$

This stage is purely a calculation stage, during which neither the scanners nor the vehicle is physically moved.
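The sample-count rule (15) can be sketched as below. Rounding the ratio up to an integer count is our assumption; (15) itself leaves the rounding unspecified.

```python
import math

def num_samples(psd_offsets, n, vehicle_repeatability):
    """Number of trajectory samples per (15).

    psd_offsets: initial offsets on the three PSDs (same units as
                 vehicle_repeatability); n > 1 sets the smallest allowable
                 vehicle motion as a multiple of the repeatability.
    Rounding up to an integer is our assumption, not stated in (15).
    """
    return max(
        math.ceil(offset / (n * vehicle_repeatability))
        for offset in psd_offsets
    )
```

For instance, with the simulation's initial offsets of 65.2, 47.4, and 35.4 µm, n = 4, and a hypothetical 1-µm repeatability, the largest ratio (65.2/4) rounds up to 17 samples.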

Stage 3: The LOS trajectories determined in Stage 2 above are used to actively guide the vehicle along the planned trajectory until it reaches its desired pose. Namely, first, the scanner angles (i.e., the LOSs) are aligned to their values as dictated by the sampling rate, $\zeta_{ij,\text{step}_k} = \zeta_{ij,\text{step}_{k-1}} + \Delta\zeta_{ij}$, and the PSD offsets $e_i$ are measured, where k = 1 to # Steps and $\Delta\zeta_{ij} = (\zeta_{ij,\text{desired}} - \zeta_{ij,\text{actual}})/\#\,\text{Steps}$. These offsets are then used to guide the vehicle to its corresponding pose at this particular sample via the gradient-descent method, (8) and (10).
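The Stage-3 stepping can be sketched as a generator over the uniformly spaced scanner angles; reading the PSDs and issuing the gradient-descent commands at each yielded step is left abstract, since those operations are hardware-dependent.

```python
def los_trajectory(z_actual, z_desired, n_steps):
    """Uniform straight-line LOS trajectories in angular space (Stage 3).

    z_actual, z_desired: lists of (zi1, zi2) angle pairs, one per LOS.
    Yields (k, angles) for k = 1 .. n_steps; at each step the PSD offsets
    would be read and fed to the gradient-descent method, (8) and (10).
    """
    # Per-LOS angular increments: (desired - actual) / # Steps
    deltas = [
        ((zd1 - za1) / n_steps, (zd2 - za2) / n_steps)
        for (za1, za2), (zd1, zd2) in zip(z_actual, z_desired)
    ]
    z = [list(pair) for pair in z_actual]
    for k in range(1, n_steps + 1):
        for i, (d1, d2) in enumerate(deltas):
            z[i][0] += d1
            z[i][1] += d2
        yield k, [tuple(pair) for pair in z]
```

By construction, the final yielded angles coincide with ζdesired, so the last sample corresponds to the desired docking pose.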

IV. A SIMULATION EXAMPLE: LOCALIZATION OF A 6-DOF VEHICLE

Numerous simulations were conducted to test the behavior of a high-precision vehicle with 6-DOF mobility under the command of the proposed guidance methods.

A. Setup and Procedure

The vehicle's shape is a 0.1 m × 0.1 m × 0.1 m cube. The three spatial LOS sources are placed symmetrically around the perimeter of the (docking) workspace of the vehicle. Each source uses two 1-DOF galvanometer mirrors to provide a spatial LOS [Fig. 1(a)]. The three array detectors (50 mm × 50 mm) are placed centrally on three faces of the vehicle, respectively.

The inaccuracy of the motion of the vehicle is represented by both systematic and random errors. The combined effect of the systematic errors is represented herein as a function of the overall motion of the vehicle:

$$\text{Systematic error} = (\text{inaccuracy}/\text{full range}) \times \text{Displacement of vehicle}, \tag{16}$$

where (inaccuracy/full range) was chosen as 0.06 mm/m and 0.56 millidegree/degree for translation and rotation, respectively, based on typical errors present in linear drives and rotary joints. Random noise was represented by a normal distribution: N(µ = 0.0, σ = 0.125 µm) for translation and N(µ = 0.0, σ = 26 µdeg) for orientation, based on typical encoder-reading errors.
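The error model (16) plus Gaussian noise can be sketched per pose component as follows. The fixed positive sign of the systematic term and the function interface are our illustrative assumptions; the paper only specifies the magnitude model and the noise distributions.

```python
import random

def noisy_pose_component(commanded, inaccuracy_per_range, noise_sigma,
                         rng=random):
    """One achieved pose component after a commanded motion.

    Systematic error per (16): proportional to the commanded displacement
    (sign fixed positive here for illustration), plus zero-mean Gaussian
    noise with the given sigma. Units must be consistent, e.g. mm and
    mm/mm for translation.
    """
    systematic = inaccuracy_per_range * commanded   # (16)
    noise = rng.gauss(0.0, noise_sigma)
    return commanded + systematic + noise
```

For example, a 1000-mm commanded translation with the simulation's 0.06 mm/m (= 0.06e-3 mm/mm) inaccuracy accrues a 0.06-mm systematic error before noise is added.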

In all simulation tests, the vehicle's home pose (i.e., its center) was defined in the world coordinate frame Fw by (xhome = 0.0 mm, yhome = 0.0 mm, zhome = 0.0 mm, γhome = 0.0°, βhome = 0.0°, and αhome = 0.0°). In the specific simulation test presented herein, the vehicle was commanded to move to home from another pose in its workspace. Due to systematic and random errors, the vehicle's actual pose after this initial (uncorrected) motion was (x = 20 µm, y = 30 µm, z = 50 µm, γ = 28.6 millidegree, β = 17.2 millidegree, and α = 40.1 millidegree). The corresponding initial PSD offsets are 65.2, 47.4, and 35.4 µm, which are on average 50 times larger than the random-noise limits that the offsets should fall within in order for the vehicle to achieve its desired pose. It is important to note that the two methods presented herein can work at any environment scale. As the ratio between the initial and final desired PSD offsets increases, however, the vehicle could take longer to achieve its desired pose.

Fig. 3 PSD offsets for passive sensing method.

B. A Passive Sensing-Based Simulation Example

Fig. 3 shows the offsets measured along each of the three PSDs versus the number of samples (i.e., iterations) of the passive sensing-based algorithm, while Fig. 4 shows the cumulative motion of the vehicle during the guidance phase.

C. An Active Sensing-Based Simulation Example

The simulation results for the active sensing-based method are shown in Figs. 5 and 6. For this simulation, n was set to four, so that incremental vehicle movements are on the order of four times the vehicle's repeatability.

Fig. 4 Cumulative vehicle motion for passive sensing method. (a) Translation. (b) Rotation.

In this simulation example, the passive and active methods took 20 and 18 iterations, respectively, to allow the vehicle to converge to its desired pose within random-noise limits. However, one notes that overshoots and jerky motions are typically less present in the active method due to its relatively small vehicle-motion corrections. Oscillations and overshoot may occur in the passive method because it deals with overshoots by forcing the vehicle to move in the opposite direction; due to systematic and random errors, the vehicle may then overshoot in this new direction, which the algorithm again counters by commanding motion in the opposite direction. The active method forces movement in only one direction, incorporating overshoots from intermediate points to determine more effective movements for the next consecutive sample. This active-localization algorithm, however, does require continuous control over the scanners, i.e., the LOSs. Thus, in situations where the scanners cannot be controlled in real time, one would have to use the passive sensing-based guidance method, which solely places emphasis on the magnitude and direction of PSD-offset measurements to determine corrective vehicle motion. A detailed discussion of the convergence of the methods is presented in the Appendix.

The work presented deals specifically with the performance of holonomic vehicles. The proposed guidance principles, however, can be extended to nonholonomic vehicles by implementing optimal motion trajectories that utilize the kinematics of the vehicle, for example, to generate a velocity-vector field and a steering law based on the measurements of the PSD offsets.

V. AN EXPERIMENTAL EXAMPLE: LOCALIZATION OF A 3-DOF PLATFORM

The two proposed modelless guidance methods were implemented in a controlled physical environment.


Fig. 5 PSD offsets for active sensing method.

Fig. 6 Cumulative vehicle motion for active sensing method. (a) Translation. (b) Rotation.

A. Setup and Procedure

Fig. 7 Experimental setup.

A high-precision (x, y, φ) table was used to provide the necessary 3-DOF planar motion of a triangular-like platform (i.e., a simulated autonomous vehicle), Fig. 7. The LOS sensing system included three 1-DOF LOS sensing modules: a combination of beam-splitters, flat mirrors, and a filter divided the single laser beam into three and redirected the beams to their corresponding galvanometer mirrors.

1) Passive Sensing-Based Guidance Procedure: For a desired platform pose (x, y, φ)d, the corresponding LOS angles (within detection-noise limits), (ζd1, ζd2, ζd3), are predetermined by a teaching-by-demonstration method. The following procedure is then executed during each experiment in order to implement the passive sensing-based guidance algorithm:

1. Rotate the galvanometers to (ζd1, ζd2, ζd3) while the plat-form is moving towards its desired pose.

2. Once the platform has stopped, read the offsets on all three PSDs.

3. Implement the iterative gradient-descent method until the PSD offsets are minimized.
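Step 3 can be sketched as follows; a minimal sketch under stated assumptions, not the paper's implementation. `read_psd_offsets` and `move_platform` are hypothetical interfaces to the sensing system and the (x, y, φ) stage, and, since no calibration model is available, the gradient is estimated here by small probe motions of the platform itself.

```python
import numpy as np

def passive_guidance(read_psd_offsets, move_platform,
                     step=0.4, probe=0.001, delta=0.004, max_iter=100):
    """Iterative gradient descent on the summed squared PSD offsets.

    read_psd_offsets(): returns the current PSD offset readings (array)
    move_platform(dq):  commands an incremental (dx, dy, dphi) motion
    delta:              random-noise bound on the offsets
    """
    def error():
        return float(np.sum(np.square(read_psd_offsets())))

    for _ in range(max_iter):
        if np.all(np.abs(read_psd_offsets()) < delta):   # within noise limits
            return True
        e0 = error()
        grad = np.zeros(3)
        for j in range(3):                               # probe x, y, phi
            dq = np.zeros(3)
            dq[j] = probe
            move_platform(dq)                            # forward probe motion
            grad[j] = (error() - e0) / probe             # finite difference
            move_platform(-dq)                           # undo the probe
        move_platform(-step * grad)                      # descend the error
    return False
```

The probe-and-undo structure is the price of a modelless environment: without a sensing-system model, the local error gradient can only be observed through motion.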

2) Active Sensing-Based Guidance Procedure: For a desired platform pose (x, y, φ)d, the corresponding LOS angles are predetermined (ζd1, ζd2, ζd3). The following procedure is, then, executed during each experiment in order to implement the active sensing-based modelless guidance algorithm:

1. Rotate the galvanometers to (ζd1, ζd2, ζd3) while the platform is moving towards its desired pose.

2. Once the platform has stopped, read the offsets on all three PSDs.

3. Implement the active sensing guidance procedure: implement LOS scanning to determine the actual scanner angles (ζa1, ζa2, ζa3), determine the LOS guidance trajectories, and move the platform via the gradient-descent method.
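The active procedure can be sketched as below; again a sketch under assumptions, not the paper's exact formulation. `scan_pose_error` is a hypothetical interface that performs LOS scanning and converts the actual-vs-desired scanner-angle discrepancy into an estimate of the remaining pose error; the remaining travel is split into equal increments so the platform follows a smooth trajectory over a predefined number of samples.

```python
import numpy as np

def active_guidance(scan_pose_error, read_psd_offsets, move_platform,
                    n_samples=22, delta=0.004):
    """Active sensing sketch: re-scan at every sample, then command a small
    increment of the remaining estimated travel so the motion stays smooth.
    """
    for i in range(n_samples):
        if np.all(np.abs(read_psd_offsets()) < delta):   # within noise limits
            return True
        err = scan_pose_error()                          # LOS scanning result
        move_platform(np.asarray(err) / (n_samples - i)) # equal increments
    return bool(np.all(np.abs(read_psd_offsets()) < delta))
```

Dividing the remaining error by the number of samples left is one way to realize the small, roughly constant travel distance per sampling period described in the experiments.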

B. Results and Discussion

Both the passive sensing-based and the active sensing-based guidance methods were tested for several distinct poses. Results for one particular pose (x = −50.000 mm, y = 25.000 mm, and φ = 15.00◦) are presented herein. The home position of the platform was defined by (x = 0.00 mm, y = 0.00 mm, and φ = 0.00◦).

The systematic errors (inaccuracy/full range) were arbitrarily set to 37 µm/mm and 3.08 milli-deg/deg for the linear and rotational stages. The noise level of the system was determined to be approximately ±4 µm in terms of PSD-offset readings.

Fig. 8. Experimental results for the passive sensing method.

Fig. 9. Experimental results for the active sensing method.

The results for the passive and active sensing-based guidance methods are shown in Figs. 8 and 9, respectively. As noted, both methods allowed the platform to converge to its desired pose within the predefined limits, in a similar manner to what was depicted via simulations, Section IV. The convergence rates of the two methods are comparable, i.e., five steps for the passive sensing-based method and six for the active sensing-based method. However, the latter guidance method provided a smoother trajectory for the platform motion.

These experimental results show that both methods enable the vehicle to converge to its desired pose in situations in which the signal (i.e., magnitude of command) to noise ratio is large. The initial errors on the PSDs depend mainly on the dimensions of the detectors and on the angles of impingement of the LOSs for the various vehicle poses. Neither method exhibited any significant change in convergence properties due merely to a change in the magnitude of the initial errors (Appendix A).

The primary difference between the implementations of the two methods is that the active sensing guidance method has control over the scanner angles. For the passive sensing method, the control over the scanners is a hard-limit control, where preset scanner angles are used, whether it is one vehicle docking at several different poses, or several different vehicles docking at one particular pose. This hard-limit control does not allow for the scanners to be moved elsewhere. For active sensing, a variable control over the scanner angles exists.

VI. CONCLUSION

In practice, situations may arise when a task-space sensing system's calibration model may be unknown or unreliable for precision motion control. In such cases, a need for a modelless online motion-planning method exists. The objective of the modelless guidance methodology proposed in this paper is to provide the controller of an autonomous vehicle with effective and accurate corrective motion commands for efficient docking. The novelty of the two guidance algorithms, developed within the framework of the proposed methodology, is that they solely use indirect proximity measurements in order to generate corrective motion commands. The desired spatial lines-of-sight, corresponding to the vehicle's desired pose, are predefined by a teaching-by-demonstration method. Extensive simulated and experimental testing of the proposed guidance methodology (via the two algorithms) verified its ability to generate effective and accurate corrective motion commands for an autonomous vehicle in docking applications.

APPENDIX

CONVERGENCE DISCUSSION

Our proposed system could be considered stable if the pose errors, starting from some initial values: 1) do not grow as corrective actions increase, and 2) at the time of final convergence, satisfy ‖qdi − qai‖ < Req, where qi = x, y, z, and ‖rdi − rai‖ < Rer, where ri = γ, β, α. Re represents the bounds of the system's random noise. During the implementation of the proposed guidance algorithms, convergence is defined solely in terms of the PSD readings: ‖PSDkOffset‖ < δ, k = 1, . . . , p, where p represents the number of LOS sensing modules and δ is the absolute value of the random error expressed in PSD offset readings. For the purpose of this discussion, it is assumed that the sensing system's repeatability is at least one order of magnitude better than that of the vehicle.
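The convergence criterion used throughout the Appendix can be written as a small predicate. The function name and array-based interface are illustrative choices.

```python
import numpy as np

def has_converged(psd_offsets, delta):
    """Appendix convergence test: every one of the p PSD-offset readings
    must fall below delta, the absolute value of the random error expressed
    in PSD-offset units.
    """
    offsets = np.asarray(psd_offsets, dtype=float)
    return bool(np.all(np.abs(offsets) < delta))
```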

A. Convergence Conjecture

Let xi be the vehicle pose generated by the gradient method, and

xi+1 = xi + υi∇E + Se + Re (A1)


Fig. 10. Case 1 results for the passive sensing-based modelless method.

Fig. 11. Case 1 results for the active sensing-based modelless method.

where υi is defined as the positive step size, which satisfies

∑_{i=0}^{∞} υi → ∞ and υi ≪ 1 (A2)

∇E represents the motion commands determined by the gradient method, and Se represents the systematic errors of the vehicle. It needs to be shown that if the descent direction is correct, then the gradient method guides the vehicle to its desired pose within its random-noise limits as the number of corrective actions increases:

lim_{i→∞} PSDiOffset ≤ δ (A3)

⇒ lim_{i→∞} ∆x = 0 (A4)

and

⇒ lim_{i→∞} xi → xdesired or lim_{i→∞} xi = xd + Re (A5)

where ∆x = υ∇E.

From (A3), the PSD offsets should be decreasing: since vehicle motion commands are directly determined from the magnitudes of the corresponding PSD offsets, as the offsets decrease in value and enter the random-noise limits, the corrective actions that are generated would also diminish in value and approach zero. This minimization of the PSD offsets also implies that the vehicle is approaching its desired pose, since, theoretically, if the offset readings are zero, i.e., the LOSs are locked onto the centers of their corresponding PSDs, then the vehicle is at its desired pose.

Fig. 12. Case 2 results for the passive sensing method.

Fig. 13. Case 2 results for the active sensing method.

1) Case 1: No Motion Errors: After the vehicle's long-range motion with corresponding errors, which prevent it from perfect docking, the vehicle is no longer subjected to any motion errors during the guidance procedure. The overall vehicle short-range motion is, thus, determined to be

xi+1 = xi + ∆x, where lim_{i→∞} ∆x = 0 (A6)

Since there are no errors, as the corrective actions are applied, the vehicle should achieve its desired pose at an accelerated rate. However, since it is a modelless environment, the guidance process would still require a number of iterations to guide the vehicle via the gradient algorithm. The localization rate depends on the initial magnitude and direction of the PSD offsets. The results for passive and active sensing-based guidance are shown in Figs. 10 and 11. In Fig. 11, the number of samples is predefined as 22 to allow for an incremental travel distance of about 2–4 µm during each sampling period. Thus, in the absence of short-range motion errors, it should take exactly 22 steps to achieve the vehicle's desired pose.

Fig. 14. Case 3 results for the passive sensing method.

2) Case 2: Systematic Motion Errors Only: Herein, convergence is discussed with respect to systematic errors introduced during the corrective (short-range) motion of the vehicle, which are directly related to the distance traveled during each sampling period. These errors are usually a small percentage of the overall motion of the vehicle. Their effect on the vehicle motion can be defined as

xi+1 = xi + ∆x + Se. (A7)

If lim_{i→∞} ∆x = 0 is true, as discussed in Case 1, then the property

lim_{i→∞} Se = 0 (A8)

is also true, due to the fact that the errors are directly related to the magnitude of the motion.

Fig. 15. Case 3 results for the active sensing method.

The results for the passive and active methods are shown in Figs. 12 and 13, respectively. The systematic errors are defined to be 5% of the vehicle motion. The number of iterations required for the vehicle to reach its desired pose is higher than that of Case 1; however, the vehicle still successfully achieves this pose. The effects of the systematic errors on the vehicle motion are not as pronounced for the active method as they are for the passive method, due to the relatively small travel distance the vehicle covers between two consecutive samples along the trajectory.
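The Case 2 behavior can be reproduced in a toy simulation. The quadratic cost E = ||x||², the gain, and the tolerance are illustrative assumptions, not the paper's setup; only the update form x_{i+1} = x_i + ∆x + Se and the 5% systematic-error figure come from the text.

```python
import numpy as np

def simulate_case2(x0, step=0.5, se_frac=0.05, tol=1e-4, max_iter=200):
    """Case 2 sketch: gradient updates with a systematic error Se equal to
    5% of each commanded motion. Since Se shrinks along with the commanded
    motion, the pose error still converges.
    """
    x = np.asarray(x0, dtype=float)          # pose error w.r.t. desired pose
    for i in range(max_iter):
        if np.all(np.abs(x) < tol):
            return x, i                      # converged despite Se
        dx = -step * 2.0 * x                 # gradient step on E = ||x||^2
        se = se_frac * dx                    # systematic error, 5% of motion
        x = x + dx + se
    return x, max_iter
```

Because Se is proportional to ∆x, the error contracts geometrically, matching the observation that Se → 0 as ∆x → 0.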

3) Case 3: Systematic and Random Motion Errors: When both systematic and random errors are present in the system, the vehicle motion can be defined as

xi+1 = xi + ∆x + Se + Re. (A9)

Since random noise can never be eliminated from the vehicle motion, the vehicle can achieve its desired pose only as closely as the random-noise limits allow. Figs. 14 and 15 present the results for the addition of both systematic and random errors to the vehicle motion while under the command of the two methods. Figs. 14(b) and 15(b) are close-ups of the results for the later iterations. As noted, the PSD offsets may oscillate within the random-noise limits, defined as 0.8 µm, but stay within these limits.
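Case 3 can be sketched by adding a bounded random term to the Case 2 update. The uniform noise model and quadratic cost are illustrative assumptions; the ±0.8 µm band is the figure quoted above.

```python
import numpy as np

def simulate_case3(x0, step=0.5, se_frac=0.05, noise=0.0008,
                   n_iter=60, seed=0):
    """Case 3 sketch: x_{i+1} = x_i + dx + Se + Re, with Re drawn uniformly
    from a +/-0.8 um band (in mm). The pose error cannot fall below the
    noise floor, but it stays bounded by it.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)          # pose error (mm)
    for _ in range(n_iter):
        dx = -step * 2.0 * x                 # gradient step on E = ||x||^2
        se = se_frac * dx                    # systematic error, 5% of motion
        re = rng.uniform(-noise, noise, size=x.shape)
        x = x + dx + se + re                 # random noise never vanishes
    return x
```

After the transient dies out, the residual error oscillates within a band on the order of the noise amplitude, mirroring the oscillation seen in the close-up plots.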

ACKNOWLEDGMENT

The authors would like to thank Shichun Zhu and Joseph Wong for their contributions to this work.

REFERENCES

[1] P. Debay, V. Eude, S. Hayat, and M. Edel, “Fuzzy control for the future automatic guidance near the bus stations,” in Proc. 5th IEEE Int. Conf. Fuzzy Syst., New Orleans, LA, Sep. 1996, pp. 660–666.

[2] G. Ortega and J. M. Giron-Sierra, “Genetic algorithms for fuzzy control of automatic docking with a space station,” in Proc. IEEE Int. Conf. Evol. Comput., Perth, Australia, Dec. 1995, pp. 157–161.

[3] P. Mira-Vaz, R. Ferreira, V. Grossmann, and M. I. Ribeiro, “Docking of a mobile platform based on infrared sensors,” in Proc. IEEE Int. Symp. Ind. Electron., Guimaraes, Portugal, Jul. 1997, pp. 735–740.

[4] S. H. Kim, K. K. Park, K. T. Park, and J. H. Ahn, “Pose detection of moving vehicle using rotating LSB (laser slit beam),” in Proc. IEEE Int. Symp. Ind. Electron., Pusan, Korea, Jun. 2001, pp. 83–88.

[5] M. D. Feezor, P. R. Blankinship, J. G. Bellingham, and F. Y. Sorrell, “Autonomous underwater vehicle homing/docking via electromagnetic guidance,” in Proc. IEEE/MTS ’97 Oceans Conf. Exhib., Halifax, Canada, Oct. 1997, pp. 1137–1142.

[6] W. Shen and P. Will, “Docking in self-reconfigurable robots,” in Proc. IEEE Int. Conf. Intell. Robots Syst., Maui, HI, Nov. 2001, pp. 1049–1054.

[7] S. L. Laubach and J. W. Burdick, “An autonomous sensor-based path-planner for planetary microrovers,” in Proc. IEEE Conf. Robot. Autom., Detroit, MI, May 1999, pp. 347–354.

[8] R. Waarsing, M. Nuttin, and H. Van Brussel, “Introducing robots into a human-centred environment,” in Proc. 4th Int. Conf. Climbing Walking Robots From Biol. Ind. Appl., Karlsruhe, Germany, 2001, pp. 465–470.

[9] R. C. Arkin and R. R. Murphy, “Autonomous navigation in a manufacturing environment,” IEEE Trans. Robot. Autom., vol. 6, no. 4, pp. 445–454, Aug. 1990.

[10] H. Roth and K. Schilling, “Navigation and docking manoeuvres of mobile robots in industrial environments,” in Proc. IEEE Conf. Ind. Electron. Soc., Aachen, Germany, Aug. 1998, pp. 2458–2462.

[11] A. G. Ledebuhr, L. C. Ng, M. S. Jones, B. A. Wilson, R. J. Gaughan, E. F. Breitfeller, and W. G. Taylor, “Micro-satellite ground test vehicle for proximity and docking operations development,” in Proc. IEEE Aerosp. Conf., Big Sky, MT, Mar. 2001, pp. 2493–2504.

[12] P. Roessler, S. A. Stoeter, P. E. Rybski, M. Gini, and N. Papanikolopoulos, “Visual servoing of a miniature robot toward a marked target,” in Proc. IEEE Int. Conf. Digital Signal Process., Santorini, Greece, Jul. 2002, pp. 1015–1018.

[13] D. Kragic and H. Christensen, “Cue integration for visual servoing,” IEEE Trans. Robot. Autom., vol. 17, no. 1, pp. 18–27, Feb. 2001.

[14] P. Mowforth, “Active sensing for mobile robots,” in Proc. IEEE Int. Conf. Control, Edinburgh, U.K., Mar. 1991, pp. 1141–1146.

[15] G. Nejat and B. Benhabib, “A guidance-based motion-planning methodology for the docking of autonomous vehicles,” J. Robot. Syst., vol. 22, no. 12, pp. 779–793, 2005.

[16] G. Nejat, A. Membre, and B. Benhabib, “Active task-space sensing and localization of autonomous vehicles,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Barcelona, Spain, Apr. 2005, pp. 3781–3786.

[17] J. A. Nelder and R. Mead, “A simplex method for function minimization,” Comput. J., vol. 7, pp. 308–313, 1964.

Goldie Nejat (S’03–M’06) received the B.A.Sc. and Ph.D. degrees in mechanical engineering from the University of Toronto, Toronto, Canada, in 2001 and 2005, respectively.

In 2005, she joined the Department of Mechanical Engineering as an Assistant Professor at the State University of New York at Stony Brook, Stony Brook, NY. She is currently the Director of the Autonomous Systems Lab and Deputy Director of the Rockwell Automation Anorad Mechatronics Lab. Her current research interests include autonomous sensing, motion-planning, and control for micro localization and manipulation of miniature robotic systems or micro-manipulating robots, and for autonomous vehicles for search and rescue, surveillance, reconnaissance, planetary surface exploration, rehabilitation and biomedical, and manufacturing applications, as well as sensor and sensor-network design.

Dr. Nejat is a member of the IEEE Robotics and Automation Society (RAS) and the American Society of Mechanical Engineers (ASME). She is also the recipient of several awards from the Natural Sciences and Engineering Research Council of Canada (NSERC).

Beno Benhabib received the B.Sc. and M.Sc. degrees in mechanical engineering from Bogazici University, Istanbul, Turkey, and the Technion–Israel Institute of Technology, Haifa, Israel, in 1980 and 1982, respectively, and the Ph.D. degree in mechanical engineering from the University of Toronto, Toronto, Canada, in 1985.

Since 1986, he has been a Professor of mechanical and industrial engineering, and electrical and computer engineering at the University of Toronto. His current research interests include the development of autonomous robotic systems for manufacturing and service industries.