
Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems

October 9 - 15, 2006, Beijing, China

Localization of Autonomous Robotic Vehicles Using A Neural-Network Approach

Joseph Wong¹, Goldie Nejat², Robert G. Fenton¹, and Beno Benhabib¹

¹Computer Integrated Manufacturing Laboratory
Department of Mechanical and Industrial Engineering
University of Toronto
Toronto, ON, M5S 3G8, Canada
wongjo@mie.utoronto.ca

²Department of Mechanical Engineering
State University of New York at Stony Brook
Stony Brook, NY, 11794-2300, USA
Goldie.Nejat@stonybrook.edu

Abstract - In this paper, a neural-network-based guidance methodology that utilizes line-of-sight based task-space sensory feedback is proposed for the localization of autonomous robotic vehicles. The novelty of the overall system is its applicability to cases that do not allow for the direct proximity measurement of the vehicle's pose (position and orientation).

Herein, the proposed neural-network (NN) based guidance methodology is implemented on-line during the final stage of the vehicle's motion (i.e., docking). The systematic motion errors of the vehicle are reduced iteratively by executing the corrective motion commands, generated by the NN, until the vehicle achieves its desired pose within random-noise limits.

The guidance methodology developed was successfully tested via simulations for a 6-dof (degree-of-freedom) vehicle and via experiments for a 3-dof high-precision planar platform.

Index Terms - Line-of-sight sensing, high-precision localization, neural networks.

I. INTRODUCTION

Localization of an autonomous vehicle refers to its on-line guidance for the purpose of achieving a desired docking pose (position and orientation) within required tolerances [1]. Commonly, after an initial gross but fast movement of the autonomous vehicle toward its desired goal, external task-space sensory feedback is utilized for high-precision corrective motion (i.e., localization or docking). Numerous researchers, thus, have suggested the use of (i) high-precision proximity sensors for navigation-based path planning [2-6], or (ii) high-speed cameras for visual servoing [7-9]. There have also been several early attempts in using laser tracking systems, consisting of retroreflectors placed on the end-effector of an articulated robot or on the bodies of mobile robots, for direct task-space robot-motion perception [10-13].

All abovementioned methods, however, rely on relative or absolute proximity measurements. Frequently, even in the presence of external task-space sensors, a vehicle's pose may not be determined accurately due to our inability to measure orientation as precisely as position. In our earlier work, we have proposed several guidance algorithms that utilize indirect proximity measurements from a Line-of-Sight (LOS) based task-space sensing system. These methods can successfully guide the vehicle to its desired docking pose within required tolerances; namely, they eliminate the effects of systematic motion errors of the vehicle. These techniques, however, either require the use of the analytical calibration model of the sensing system [1, 14], or the implementation of an active task-space sensing strategy [15].

A LOS sensing-system model can be obtained via a calibration procedure that employs high-precision external sensors (e.g., interferometers) to assist in identifying the geometric parameters of the system. In situations where such a sensing-system model is not available, a model-independent docking methodology is needed [16]. In this paper, we address the localization problem by presenting a generic neural-network-based LOS guidance approach that does not employ the sensing-system's calibration model and successfully copes with the drawbacks of the (gradient-descent type) methodology proposed in [16], for example, slow and unstable convergence.

Neural Networks (NNs) have been used in the past for the motion planning of autonomous mobile vehicles. For example, sonar-ring sensors and a multi-layer NN were used (i) to generate a path for a robotic vehicle while avoiding obstacles [17], and (ii) to localize the pose of an autonomous vehicle by comparing a set of range readings with an a priori known environment map [18]. NNs have also been used to learn the systematic errors of a robot by performing off-line calibration using a Coordinate-Measurement Machine (CMM) [19-20]. Despite intensive efforts, most methods cannot compensate for unpredictable errors, such as wheel slippage and floor irregularities, in an autonomous vehicle application, unless some real-time sensory information is employed.

In the above context, the objective of this paper is to present a generic, sensing-system calibration-model-independent, LOS-based motion-guidance technique that utilizes background NN learning from previously off-line recorded data and on-line sensory information during the course of guidance.

In Section II, we first present an overview of the task-space LOS sensing system and modelling of sensory information using NNs. In Section III, the proposed sensing-system model is explained in greater detail. In Section IV, the implementation architecture is outlined. The results of motion-planning simulations and experiments, conducted on a 6-dof (degree-of-freedom) vehicle and 3-dof high-precision platform, utilizing our proposed approach, are presented and discussed in Sections V and VI, respectively.

II. SYSTEM OVERVIEW

A. Line-of-Sight Sensing System

The LOS concept is utilized herein for autonomous vehicle guidance. In our proposed system, an individual LOS sensing module consists of a laser source, a galvanometer mirror, and a detector (e.g., PSD). The detector is mounted on the vehicle, while the laser beam, which defines the desired LOS, is aligned using a galvanometer mirror to hit the center of the detector when the vehicle is at its desired pose.

A multi-LOS system can be configured using several LOS sensing modules to provide sufficient and accurate sensory data for guidance-based motion planning of vehicles/robots translating and/or rotating freely in multi-dimensional space. The minimum necessary number and the types of LOS in configuring such a system would depend on (i) the mobility requirement for the localization problem at hand and (ii) the motion range of the vehicle, Table 1.

Table 1: Configuration of LOS sensing systems.

Mobility (docking dof)       Minimum No. of LOS    LOS Type    Detector Type
2 (x, y)                     2                     Planar      Linear
3 (x, y, θ)                  3                     Planar      Linear
6 (x, y, z, γ, β, α)         3                     Spatial     Array

B. Sensing-System Model

In two-stage vehicle motion, it is assumed that the autonomous vehicle is initially moved to its desired pose by using closed-loop feedback from the actuator-level sensors. This action is regarded as long-range positioning. Task-space sensors are, then, used to improve the vehicle's accuracy (i.e., minimize systematic errors) via effective guidance during the second, short-range positioning phase.

In our sensing system, the mirrors are aligned so that the LOSs would hit the centers of their respective PSDs at the vehicle's desired pose. Inaccurate long-range positioning of the vehicle, due to systematic and random motion errors, would, however, result in offsets on the PSDs. These offsets are defined in their own individual detector's coordinate frame and not in the global world frame of reference. The offsets are utilized herein as sensory feedback to guide the vehicle to its desired pose during the short-range positioning stage.

For (short-range) vehicle guidance, the use of a Neural Network (NN) is proposed herein in lieu of an analytically determined sensing-system model. The off-line trained NN is used to predict the necessary corrective vehicle motion, during docking, in an on-line manner based on the measured PSD offsets and the LOS angles that correspond to the desired pose. During run-time, the NN sensing-system model is improved through on-line training.

Off-line training is achieved in our work via discrete measurement points at various possible docking poses of the vehicle within its workspace. Namely, in order to emulate the effects of systematic errors and random noise after long-range positioning of the vehicle, these data points are obtained by a teaching-by-demonstration method, as follows: the LOSs are aligned and pre-taught the desired angles, φ_desired, corresponding to the vehicle's desired pose; then, the vehicle is commanded to move a small incremental motion to emulate imperfect docking, and the resultant PSD readings are measured.

On-line refinement of the sensing-system model is achieved by incremental adaptation of the NN parameters. In each step of the implementation of the sensing-system model, the performed incremental corrective motions and the resultant changes in the PSD readings are recorded and, subsequently, used to update the network parameters after the learning sample is presented.

C. Neural-Network Architecture and Learning

The NN presented in this paper has a standard multi-layer feed-forward architecture, with one hidden layer of tangent-sigmoidal neurons and an output layer of linear neurons.
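For concreteness, the following minimal NumPy sketch (our illustration, not the authors' implementation) shows the architecture just described: a single tangent-sigmoid hidden layer feeding a linear output layer. The layer sizes are assumptions drawn from the 6-dof case of Sections III and V: 12 inputs (the six PSD offsets and six LOS angles defined in Section III) and 6 outputs (the incremental pose corrections), with 150 hidden neurons as reported in the simulations.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HIDDEN, N_OUT = 12, 150, 6   # assumed sizes: offsets + LOS angles in, pose corrections out

# Small random initial weights; biases start at zero.
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_IN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HIDDEN))
b2 = np.zeros(N_OUT)

def forward(x):
    """Return the predicted corrective motion and the hidden activations."""
    h = np.tanh(W1 @ x + b1)   # tangent-sigmoid hidden layer
    y = W2 @ h + b2            # linear output layer
    return y, h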

Conventional batch learning is used for off-line modelling. A regularization approach is essential for the batch training of a NN in order to determine a good balance between bias and variance. The three most common regularization methods are early stopping, weight decay, and Bayesian regularization.

Early stopping, as used herein, requires the dataset to be separated into three subsets: training, validation, and testing sets. The validation set is used to monitor when the model should cease training. Weight decay adds a sum-of-squares of the network weights to the error function of the optimization in order to control the growth in value of the interconnection weights in the network. However, weight decay requires finding suitable preset error-function parameters by trial and error. The Gauss-Newton approximation to the Bayesian Regularization (GNBR) algorithm [21] automatically optimizes the error-function parameters in order to find a suitable compromise between bias and variance. The Bayesian approach does not require a validation set, as early stopping does, and its benefit prevails when data are scarce. However, GNBR suffers from excessive computational overhead. In this paper, the scaled conjugate gradient with early stopping [22] and the GNBR algorithm are employed in the off-line modelling of the sensing system for the 6-dof and 3-dof vehicle docking problems, respectively.
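As a hedged, runnable stand-in for the off-line batch training with early stopping described above (the paper's training was done in MATLAB; scikit-learn's MLPRegressor is substituted here and offers neither scaled conjugate gradient nor GNBR, so the 'adam' solver is used instead), the sketch below trains a tanh-hidden-layer regressor that holds out a validation fraction and stops when the validation score stagnates. The data arrays are random placeholders, not the paper's measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(600, 12))   # placeholder inputs: PSD offsets + LOS angles
Y = rng.uniform(-1.0, 1.0, size=(600, 6))    # placeholder targets: corrective motions

model = MLPRegressor(hidden_layer_sizes=(150,),
                     activation="tanh",       # tangent-sigmoid hidden layer, linear output
                     solver="adam",           # stand-in optimizer (not SCG/GNBR)
                     early_stopping=True,     # hold out a validation fraction internally
                     validation_fraction=0.15,
                     n_iter_no_change=20,     # patience before stopping
                     max_iter=2000)
model.fit(X, Y)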

For on-line incremental learning, the choice of the learning rate is more important than in batch training, since each learning sample is only presented once. Advanced optimization techniques, which can be used in batch-mode learning, are not applicable for incremental-mode learning [23]. Hence, a standard gradient-descent algorithm is employed herein to optimize the interconnection weights for on-line learning.
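Continuing the NumPy sketch above (it reuses forward(), W1, b1, W2, and b2 from that block), a single-sample gradient-descent update on a squared-error loss might look as follows; the learning rate of 0.01 is the value reported for the 6-dof simulations in Section V.

def sgd_update(x, target, lr=0.01):
    """One incremental (single-pass) gradient-descent step on 0.5*||y - target||^2."""
    global W1, b1, W2, b2
    y, h = forward(x)
    err = y - target                      # gradient of the loss w.r.t. the linear outputs
    dW2 = np.outer(err, h)                # output-layer weight gradient
    db2 = err
    dh = (W2.T @ err) * (1.0 - h ** 2)    # back-propagate through the tanh layer
    dW1 = np.outer(dh, x)
    db1 = dh
    W1 -= lr * dW1; b1 -= lr * db1        # single gradient-descent step
    W2 -= lr * dW2; b2 -= lr * db2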


III. A NEURAL-NETWORK SENSING-SYSTEM MODEL

The objective of the off-line trained NN sensing-system model is to predict the vehicle's necessary corrective docking motion after long-range positioning, given the sensory feedback from the PSD detectors and the desired-pose information from the LOS angles. It is also expected to provide a start-up model that can be improved at run-time to enhance accuracy and convergence rate. The development of the sensing-system model is separated into three stages, with the assumption that the vehicle has the most generalized 6-dof mobility (x, y, z, γ, β, α). The vehicle's shape is assumed to be a cube, with array-type detectors on at least three of its orthogonal faces, Figure 1.

Figure 1: (a) Desired vehicle location and (b) actual vehicle relocation, respectively.

Stage-1, Network Definition: The network model requires knowledge of the desired pose and the deviations between the vehicle's actual and desired poses, in the form of sensory feedback, in order to determine the required corrective motion. The inputs, thus, are the detector offsets e_ij (i = 1 to 3 and j = y, z), and the LOS angles φ_mn (m = 1 to 3 and n = 1 to 2) that correspond to the desired pose. The outputs are the incremental corrective motion commands that guide the vehicle to its desired pose (Δx, Δy, Δz, Δγ, Δβ, and Δα).

Stage-2, Data Collection: Discrete measurement data are required to train and test the network, as described in the following steps and summarized in Figure 2. Each point contains the tuple (e_ij, φ_mn, Δx, Δy, Δz, Δγ, Δβ, Δα):

Figure 2: Data collection framework.

(i) Move the vehicle to a desired docking pose within the workspace and, concurrently, align the individual scanners so that each LOS would hit as near its respective PSD's center as possible at the vehicle's desired pose.

(ii) After the vehicle reaches its actual pose, realign the LOSs, using an optimization method (e.g., [24]), to search for the optimal scanner angles such that they correspond to minimal offset values registered on the detectors, e_ij. The search for each scanner angle occurs independently and concurrently. These angles directly correspond to the actual vehicle pose (within noise limits).

(iii) Command the vehicle to move a small random incremental displacement and measure the PSD offsets, while keeping the LOSs locked.

Stage-3, Training/Testing: The collected data set is split into training and testing subsets (a validation subset is required if early stopping is used). An additional set of constraining data, equal in number to the training data, is appended to the training subset to train the NN. The constraining set has LOS angles identical to those of the training set, but zero PSD offsets and zero incremental motion: {φ_mn (m = 1 to 3 and n = 1 to 2), e_pq = 0 (p = 1 to 3 and q = y, z), Δx = 0, Δy = 0, Δz = 0, Δγ = 0, Δβ = 0, Δα = 0}_r, r = 1 to R, where R is the total number of training points. The purpose of this constraining set is to compel the NN regression model to output a zero incremental corrective motion when the LOSs hit the centers of their respective PSDs. A short sketch of this data-set assembly is given below.
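The following sketch (our assumption about the data layout, not the authors' code) illustrates the Stage-3 assembly of the training set: the recorded offsets and LOS angles are concatenated into network inputs, and a constraining set with identical LOS angles but zero offsets and zero corrective motion is appended. The arrays phi_train, e_train, and d_train are random placeholders standing in for the Stage-2 measurements.

import numpy as np

R = 300                                           # number of training points (6-dof case, Section V)
rng = np.random.default_rng(2)
phi_train = rng.uniform(-0.5, 0.5, size=(R, 6))   # placeholder LOS angles, phi_mn
e_train = rng.uniform(-2.0, 2.0, size=(R, 6))     # placeholder PSD offsets, e_ij
d_train = rng.uniform(-1.75, 1.75, size=(R, 6))   # placeholder incremental corrective motions

X_train = np.hstack([e_train, phi_train])         # network inputs: offsets + LOS angles
Y_train = d_train                                 # network targets: corrective motions

# Constraining set: same LOS angles, zero offsets, zero corrective motion, so the
# model outputs no correction when every LOS hits its PSD center.
X_constrain = np.hstack([np.zeros_like(e_train), phi_train])
Y_constrain = np.zeros_like(d_train)

X_all = np.vstack([X_train, X_constrain])         # 2R training samples in total
Y_all = np.vstack([Y_train, Y_constrain])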

IV. A NEURAL-NETWORK IMPLEMENTATION FOR AUTONOMOUS GUIDANCE

The proposed localization (docking) methodology is based on the estimation of the current pose of the vehicle, after its long-range motion, in order to estimate the necessary corrective motions via the use of the NN. The desired vehicle pose and its corresponding LOSs are pre-defined. Once the vehicle has reached its actual pose upon completion of the long-range positioning stage, the PSD offsets are measured and fed into the temporary NN. The temporary NN is initialized with the identical architecture and interconnection weights as the original model. The NN, then, predicts the required corrective motions to minimize the PSD offsets.

As the vehicle-motion commands are predicted, the motion commands and the resultant changes in the PSD offsets, due to the execution of the corrective actions, are used as training samples for the temporary NN in an on-line manner: each learning sample contains the tuple (Δe_ij, φ_mn, Δx, Δy, Δz, Δγ, Δβ, Δα). Although the training tuples do not provide exact information about the necessary corrective motion commands, they do provide an adequate approximation, whose goodness progressively improves as the vehicle approaches its desired pose. Due to the nature of a connectionist learning system, the succession of local interconnection-weight corrections may not improve global performance. Hence, the temporary NN is only employed for the docking task at hand and discarded upon its completion.

The effects of systematic errors, random noise, and the imperfections of the trained network necessitate the use of an iterative procedure to achieve the desired accuracy. Docking is considered to be successful if all of the PSD offsets are smaller than a pre-defined threshold, ε, Figure 3.
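One plausible reading of this iterative procedure is sketched below (an illustration under stated assumptions, not the authors' implementation). It reuses forward() and sgd_update() from the sketches in Section II.C; execute_motion() and read_psd_offsets() are hypothetical stand-ins for the vehicle controller and the PSD read-out, and both the threshold value and the construction of the on-line learning sample (the executed motion paired with the offsets it removed) are our interpretation of the description above.

def dock(execute_motion, read_psd_offsets, phi_desired, eps=0.012, max_steps=15):
    """Iteratively correct the pose until all PSD offsets fall below eps (assumed units: mm)."""
    global W1, b1, W2, b2
    saved = (W1.copy(), b1.copy(), W2.copy(), b2.copy())   # keep the off-line trained model
    offsets = np.asarray(read_psd_offsets())               # six measured PSD offsets
    steps = 0
    while np.max(np.abs(offsets)) > eps and steps < max_steps:
        x = np.concatenate([offsets, phi_desired])          # offsets + desired LOS angles
        motion, _ = forward(x)                              # predicted corrective motion
        execute_motion(motion)                              # apply the correction
        new_offsets = np.asarray(read_psd_offsets())
        # On-line sample: the executed motion paired with the offset change it produced.
        x_sample = np.concatenate([offsets - new_offsets, phi_desired])
        sgd_update(x_sample, motion)                        # adapt the temporary network
        offsets = new_offsets
        steps += 1
    W1, b1, W2, b2 = saved                                  # discard the temporary network
    return bool(np.max(np.abs(offsets)) <= eps)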

V. SIMULATIONS

In order to illustrate the workings of the NN-based approach in minimizing the systematic errors and converging within the random noise of the autonomous system, a computer simulation environment was developed and numerous simulations were implemented for a vehicle with 6-dof mobility.

A. Set-up and Procedure

The vehicle's shape is a 0.1×0.1×0.1 m cube. The three spatial LOS sources are placed symmetrically around the perimeter of the (docking) workspace of the vehicle. Each source uses two 1-dof galvanometer mirrors to provide a spatial LOS. The three array detectors, 50×50 mm, are placed centrally on three faces of the vehicle, respectively.

The inaccuracy of the motion of the vehicle is represented twofold: systematic errors and random errors. Instead of modelling each source of systematic errors separately, their combined effect is represented herein as a function of the overall motion of the vehicle:

Systematic error = (Inaccuracy/full range) × Displacement of vehicle,   (1)

where, in our simulations, (inaccuracy/full range) was chosen as 5.8 µm/mm for translation and 5.6 milli-deg/deg for rotation. Random noise was represented by a normal distribution: N(µ = 0.0, σ = 0.5 µm) for translation and N(µ = 0.0, σ = 0.262 milli-deg) for rotation. The PSD readings were also modelled with random noise N(µ = 0.0, σ = 1.0 µm) due to imperfect A/D signal conversions. The vehicle is assumed to have a motion range of [-150 mm, 150 mm] and [-15°, 15°] for each of the translational and rotational dof, respectively. The data collection range for the incremental motion of each dof is calculated as:

Δmax,min = ±(Vehicle range × Systematic error),   (2)

which was computed to be ±1.75 mm for translation and ±0.17° for rotation.
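A quick arithmetic check of Eqs. (1)-(2) with the values quoted above (assuming "vehicle range" means the full 300 mm translational span and the full 30° rotational span) reproduces the reported data-collection ranges:

trans_range_mm = 300.0        # full span of [-150 mm, 150 mm]
rot_range_deg = 30.0          # full span of [-15 deg, 15 deg]
inaccuracy_trans = 5.8e-3     # 5.8 um/mm expressed in mm per mm
inaccuracy_rot = 5.6e-3       # 5.6 milli-deg/deg expressed in deg per deg

delta_trans_mm = trans_range_mm * inaccuracy_trans   # 1.74 mm, i.e. roughly the quoted +/-1.75 mm
delta_rot_deg = rot_range_deg * inaccuracy_rot       # 0.168 deg, i.e. roughly the quoted +/-0.17 deg
print(delta_trans_mm, delta_rot_deg)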

Figure 3: Implementation of neural-network model.

The NN was trained off-line with 150 tangent-sigmoidal neurons in its hidden layer. For the construction of the sensing-system model, 400 discrete data points were taken (300 data for the training subset, and 50 data for each of the validation and testing subsets, respectively). By incorporating 300 data from the constraining set, a total of 600 data points were used to train the network. The implemented network ceased training at 3602 epochs. The training took about 648 seconds in MATLAB on a Pentium IV 2.0 GHz, 512 MB RAM system.

In the on-line incremental learning, each learning sample (i.e., a combination of LOS angles, corrective motion commands, and the resultant differences in PSD readings) was passed once at a learning rate of 0.01, and each pass required approximately 0.17 seconds.

The aim of the simulation was to demonstrate that the proposed NN-based methodology has the ability to lessen the impact of the vehicle's systematic errors, by predicting the corrective vehicle-motion commands, and allow it to converge to the desired pose within the predefined tolerance. Furthermore, it is to be shown that this convergence is not a result of a statistical coincidence and is maintainable.

B. Simulation Results

In order to illustrate that the proposed NN-based docking strategy can minimize systematic errors and that the vehicle can converge within the random noise of the autonomous system, the results of 100 random simulation runs are presented in Figure 4. The 100 desired poses were uniformly distributed within the workspace of the vehicle and were approached randomly from different locations. The noise level of the system was calculated to be approximately ±12 µm in terms of PSD-offset readings. For comparison, the results for solely utilizing an off-line trained NN model (without on-line improvement) are also shown. It is clearly noted that the effectiveness of the system tangibly increases by combining off-line training and on-line incremental learning during the implementation stage. In all 100 simulation runs, the vehicle converged to the desired pose within the random noise level within 11 steps when on-line sequential learning was employed. This was not the case for the NN without on-line learning.

Figure 4: Results for 100 simulation runs.

A specific simulation test point is also presented herein. The vehicle was asked to move to a desired pose defined by x = 90.0 mm, y = 70.0 mm, z = -100.0 mm, γ = -15.00°, β = -15.00°, and α = 15.00°. Due to systematic and random errors, the vehicle's actual pose after this initial motion was noted to be x = 88.5 mm, y = 71.5 mm, z = -101.5 mm, γ = -14.84°, β = -14.83°, and α = 14.85°. The corresponding PSDi (i = 1 to 3) offsets were "measured" as 0.865 mm, 2.075 mm, and 2.089 mm, respectively. The NN model was invoked iteratively to guide the vehicle to its desired pose within 8 steps. The results are shown in Figure 5. In order to illustrate that convergence is maintainable, the PSD readings for several additional corrective actions are also shown (Corrective Actions #9 to #11).

Figure 5: Convergence chart for a simulation test point.

VI. EXPERIMENTS

The proposed guidance method was implemented and tested in a controlled physical environment for the on-line docking of a planar high-precision 3-dof (x, y, θ) platform, Figure 6. The experimental set-up included three 1-dof LOS sensing modules, and a combination of beam-splitters, flat mirrors, and a filter to split the beam from the laser source into three beams and, then, redirect them to their corresponding galvanometer mirrors. The systematic errors were set to values of 5.8 µm/mm for the linear stages and 6.7 milli-deg/deg for the rotational stage. The linear and rotary stages have motion ranges of [-80 mm, 80 mm] and [-25°, 25°], respectively.


Figure 6: Experimental set-up.

The off-line network was trained using 30 hidden neurons, with 25 and 10 measurement points for the training and testing subsets, respectively. With the inclusion of 25 data points from the constraining set, the network ceased training in 124 seconds at 737 epochs. Each learning sample is passed once with the learning rate set to 0.0025 and takes around 0.10 second to complete.

The guidance methodology was tested for several distinct poses within the platform's workspace. The result for one desired pose is presented herein: x = -45.342 mm, y = 20.596 mm, θ = -18.723°, Figure 7. The proposed NN-based guidance methodology needed 4 steps to guide the platform to the desired pose within the random noise limits of the PSDs (±10 µm). In order to ensure convergence, an extra corrective action was performed for all tests.

Figure 7: Convergence graphs for the experimental result. (Initial offsets, not shown in the plot: 944.509, -1226.632, and -444.233 for PSD1, PSD2, and PSD3, respectively.)

VII. CONCLUSIONS

A generic NN-based task-space multi-LOS guidance method has been proposed in this paper for use in the localization of autonomous robotic vehicles. LOS-based sensory data and PSD offsets are utilized by the off-line trained and on-line updated NN for on-line docking during the final short-range positioning stage, namely, minimizing the systematic errors of the vehicle and allowing it to converge within the random noise level of the system, independent of the sensing system's calibration model. The proposed method was verified successfully in both simulations and experiments.

ACKNOWLEDGEMENTS

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES

[1] G. Nejat, I. Heerah, and B. Benhabib, "Line-of-Sight Task-Space Sensing for the Localization of Autonomous Mobile Devices," IEEE Conference on Intelligent Robots and Systems, Las Vegas, NV, pp. 968-973, 2003.

[2] R.C. Arkin and R.R. Murphy, "Autonomous Navigation in a Manufacturing Environment," IEEE International Conference on Robotics and Automation, Cincinnati, OH, pp. 445-454, 1990.

[3] H. Roth and K. Schilling, "Navigation and Docking Manoeuvres of Mobile Robots in Industrial Environments," IEEE Conference of the Industrial Electronics Society, Aachen, Germany, pp. 2458-2462, 1998.

[4] P. Mira, R. Ferreira, V. Grossmann, and M.I. Ribeiro, "Docking of a Mobile Platform Based on Infrared Sensors," IEEE Int. Symp. on Industrial Electronics, Guimaraes, Portugal, pp. 735-740, 1997.

[5] R.T. Howard, T.C. Bryan, and M.L. Book, "An Advanced Video Sensor for Automated Docking," NASA Scientific and Aerospace Technical Report, pp. 8.B.6-1-8.B.6-7, 2001.

[6] J. Vaganay, P. Baccou, and B. Jouvencel, "Homing by Acoustic Ranging to a Single Beacon," MTS/IEEE Oceans Conference, Providence, RI, pp. 1457-1462, 2000.

[7] P. Roessler, S.A. Stoeter, P.E. Rybski, M. Gini, and N. Papanikolopoulos, "Visual Servoing of a Miniature Robot Toward a Marked Target," IEEE Conference on Digital Signal Processing, Santorini, Greece, pp. 1015-1018, 2002.

[8] C.J. Taylor and J.P. Ostrowski, "Robust Visual Servoing Based on Relative Orientation," IEEE Computer Vision and Pattern Recognition, Fort Collins, CO, pp. 574-580, 1999.

[9] D. Kragic and H. Christensen, "Cue Integration for Visual Servoing," IEEE Trans. on Robotics and Automation, Vol. 17, pp. 18-27, 2001.

[10] T.A.G. Heeren and F.E. Veldpaus, "An Optical System to Measure the End-Effector Position for On-line Control Purposes," The International Journal of Robotics Research, Vol. 11, No. 1, pp. 53-63, 1992.

[11] J.R.R. Mayer and G.A. Parker, "A Portable Instrument for 3-D Dynamic Robot Measurements Using Triangulation and Laser Tracking," IEEE Trans. on Robotics and Automation, Vol. 10, No. 4, pp. 504-516, 1994.

[12] Y. Koseki, T. Arai, K. Sugimoto, T. Takatuji, and M. Goto, "Design and Accuracy Evaluation of High-Speed and High-Precision Parallel Mechanism," IEEE Int. Conference on Robotics and Automation, Leuven, Belgium, pp. 1340-1345, 1998.

[13] C.J. Leigh-Lancaster, B. Shirinzadeh, and Y.L. Koh, "Development of Laser Tracking System," IEEE Conference on Mechatronics and Machine Vision in Practice, Toowoomba, Australia, pp. 163-168, 1997.

[14] G. Nejat and B. Benhabib, "A Guidance-Based Motion-Planning Methodology for the Docking of Autonomous Vehicles," Journal of Robotic Systems, Vol. 22, Iss. 12, pp. 779-793, 2005.

[15] G. Nejat and B. Benhabib, "Guidance-Based Short-Range Docking of Autonomous Vehicles: A Comparative Study," Robotics and Computer-Integrated Manufacturing Journal, Vol. 21, Iss. 4-5, pp. 401-411, 2005.

[16] G. Nejat and B. Benhabib, "Docking of Autonomous Vehicles: A Comparison of Model-Independent Guidance Methods," Virtual International Conference on Intelligent Production Machines and Systems, 2005.

[17] I.K. Jung, K.B. Hong, S.K. Hong, and S.C. Hong, "Path Planning of Mobile Robot Using Neural Network," IEEE International Conference on Fuzzy Systems, New Orleans, LA, Vol. 3, pp. 2208-2214, 1996.

[18] I.K. Sethi and G. Yu, "A Neural Network Approach to Robot Localization Using Ultrasonic Sensors," IEEE International Symposium on Intelligent Control, Philadelphia, PA, Vol. 1, pp. 513-517, 1990.

[19] W.L. Xu, K.H. Wurst, T. Watanabe, and S.Q. Yang, "Calibrating a Modular Robotic Joint Using a Neural Network Approach," IEEE World Congress on Computational Intelligence, Orlando, FL, Vol. 5, pp. 2720-2725, 1994.

[20] X.L. Zhong and J.M. Lewis, "A New Method for Autonomous Robot Calibration," IEEE International Conference on Robotics and Automation, Nagoya, Japan, pp. 1790-1795, 1995.

[21] F.D. Foresee and M.T. Hagan, "Gauss-Newton Approximation to Bayesian Regularization," IEEE International Joint Conference on Neural Networks, Houston, TX, pp. 1930-1935, 1997.

[22] M.F. Moller, "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning," Neural Networks, Vol. 6, pp. 525-533, 1993.

[23] D. Saad, On-line Learning in Neural Networks, Cambridge University Press, 1998.

[24] J.A. Nelder and R. Mead, "A Simplex Method for Function Minimization," Computer Journal, Vol. 7, pp. 308-313, 1964.
