ICGST Conference on Computer Science and Engineering, Dubai, UAE, 16-18 July, 2012

Fast Vision-Aided Inertial Navigation Using Low-Cost Sensors


Fast Vision-Aided Inertial Navigation Using Low-Cost Sensors

Ahmed Zakaria¹, Mohsen Mahroos², and Amin Nassar²

¹ SAS-Automation, Maadi, Cairo, Egypt
² Cairo University, Giza, Egypt
[email protected], [email protected], [email protected]

Abstract

The main challenge in using low-cost inertial sensors in an Inertial Navigation System (INS) to estimate the orientation, position, and velocity of an Unmanned Aerial Vehicle (UAV) is the decreased accuracy of such sensors due to their inherent noises and biases, which leads to large drift in the navigation system. This paper investigates building an efficient Vision-Aided INS (VAINS) based on integrating a camera mounted on the vehicle with the low-cost inertial sensors available in a MEMS-based Inertial Measurement Unit (IMU).

VAINS relies on real-time processing of image frames captured from the camera to extract visual features that can be compared to those extracted and pre-stored in the system as a map of the land under the vehicle. The estimation procedure is based on an Extended Kalman Filter (EKF) that integrates the camera measurements with the IMU measurements, but is prone to divergence due to large propagation and linearization errors. To reduce these errors, two approaches are considered: adding dynamic model constraints to the propagation process, and speeding up the iterated cycles by executing the image processing algorithms on a General-Purpose Graphics Processing Unit (GPGPU) hardware accelerator from NVIDIA using the CUDA C language.

Keywords: Feature tracking, Kalman filter, vision-based inertial navigation, GPU, CUDA C.

1 Introduction

UAV applications with high-precision navigation are increasing rapidly. The GPS-Aided INS (GAINS) used in contemporary UAVs integrates a GPS receiver with an IMU for reliable and accurate navigation with small errors [8, 3]. The IMU provides fast measurements whose error bound decreases only at the expense of increasing its cost, while the GPS receiver provides slow measurements with high precision to correct the drift of the system due to the IMU errors.

In some cases, however, GAINS cannot be used, e.g. when the GPS signal is absent such as indoors, underwater, and in space when landing on other planets, or when the GPS signal is weak due to multipath fading or jamming effects, or during fast maneuvers. In such cases, it is unreliable to use GAINS alone, and another kind of sensor has to be integrated with the INS to build an autonomous navigation system.

VAINS is an alternative to GAINS, in which a mounted camera is used to acquire images of the land surface under the aerial vehicle. VAINS is divided into two parts: Image Processing (IP) and the EKF. IP is used for feature detection and matching. The EKF is used for propagating the system state, and for updating this state to the optimal expected value based on the propagation results and the measurements received from the IMU and the camera. The state vector of the system includes the position, orientation, velocity, and the biases of the inertial sensors when measuring the acceleration and angular velocity.

VAINS attempts to extract visual features from the acquired images, and to search for these features in the stored map of the land under the vehicle. The camera in VAINS replaces the GPS receiver in GAINS to provide low-rate measurements with the high precision needed to correct the drift of the inertial system. The EKF is used to estimate the state of the UAV from the two measurement units. The work presented in this paper investigates the operability of VAINS using low-cost inertial sensors with high noise and decreased accuracy.

The remainder of the paper is organized as follows. Section (2) presents related work. Section (3) describes the image processing algorithms used for feature extraction and matching. Section (4) presents the mathematical model of the EKF. Section (5) discusses challenges in using low-cost inertial sensors in VAINS, and the approaches used to overcome these challenges. Section (6) presents experimental results illustrating the reliability of the proposed system. The paper concludes with a few remarks and directions for future research.


2 Related Work

Simultaneous Localization and Mapping (SLAM) [4] is among the most common vision-based localization methods used to estimate the position and orientation of UAVs. The basic SLAM algorithm is a vision-only method in which the camera position and orientation, in addition to the 3D positions of all visual landmarks, are jointly estimated.

The main advantage of SLAM is its capability to compute the optimal estimate of both the camera poses and the 3D positions of automatically-detected features. Its main limitation is the high computational complexity, which is quadratic in the number of feature matches between successive images. Several variations extend the basic SLAM algorithm to achieve more accurate localization and reduce the computational requirements. For example, methods to integrate IMU inputs with SLAM are reported in [10, 14, 5].

Other vision-based localization methods estimate the camera position without simultaneously estimating the feature positions [12]. The idea, based on a multi-state constraint Kalman filter, is to use the change in the position of the extracted features in two consecutive images to obtain an estimate of the displacement that is subsequently fused with the inertial measurements. A window of the previous states (camera position and orientation) is included in the state vector of the filter instead of the feature positions. As with SLAM, the optimal position estimate is attained.

The main advantage of this method is its reduced computational complexity, which makes it suitable for real-time vision-based localization applications. The delayed linearization used in the method increases the robustness of the system against linearization error. Furthermore, the method avoids the loss of information associated with pairwise displacement estimation methods by including a large number of poses in the state vector.

Constraints between the current and previous images, computed using epipolar geometry and combined with IMU measurements in an EKF, are reported in [19, 1]. Epipolar geometry relates 3D points to the locations of their 2D projections in two consecutive images. The epipolar geometry method is used in [15] after adding constraints based on the dynamic model of an airplane.

A localization method suitable for space missions landing on planets or asteroids is reported in [11, 17]. VAINS is used to control the vehicle maneuvers during entry, descent, and landing (EDL). The presented algorithm aims at achieving pin-point accuracy at the landing point, with an error bound within 10 meters. The algorithm assumes that a pre-stored map of the land under the aerial vehicle is available.

After a feature-extraction algorithm is applied to images acquired by a mounted camera in this method, a template-matching algorithm is used to find the corresponding locations of these features in the map. The extracted features can be craters or points with high gradient in the image, such as corners. The global position of each matching point is found using the map.

A complete autonomous localization system using a window of camera poses stored in the state vector is presented in [13]. An algorithm is also presented to estimate the initial position of the vehicle using FFT template matching; the estimated initial point is used as the initial state of the EKF. A further algorithm tracks features in a sequence of images in the presence of modeling uncertainties, outliers, and nonlinearities. This algorithm estimates the optimal position and orientation of the vehicle with a complexity that is linear in the number of features tracked in each cycle.

3 Image Processing

The objective of the image processing algorithm is to generate a list of the 2D projected locations of feature points in the acquired images, along with their equivalent 3D global positions obtained from the map. It is therefore required to use algorithms for feature detection and extraction, feature warping to match the scale and orientation of the map, and image correlation to match the extracted features with the map.

3.1 Feature detection and extraction

Feature points need to be extracted from the image in order to search for them on the map. These features must be distinctive points, e.g. points with very high intensity gradients such as corners and edges, in order to reduce the probability of mismatch. Many feature extraction algorithms can be used, such as the Harris corner detector [7], the Kanade-Lucas-Tomasi (KLT) tracker [9], and the Hough transform. There are also more complex feature extraction techniques such as Scale-Invariant Features (SIF) and Affine-Invariant Features (AIF) [18].

In this work, the system has an initial estimate of the relative position and orientation between the camera and the features from the IMU readings. Therefore, there is no need to use methods with high computational demands such as SIF and AIF. Both the Harris corner detector and the KLT tracker are suitable for this system, as both can be implemented using parallel computing on a GPU to reduce the processing time. In this work, the Harris corner detection method is used for its moderate computational requirements.
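To make the detector concrete, the minimal NumPy/SciPy sketch below computes the Harris response and picks the strongest corners. It is illustrative only, not the paper's CUDA implementation; the function names and parameter values (`sigma`, `k`, the border margin) are assumptions chosen for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.5, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel."""
    Iy, Ix = np.gradient(img.astype(np.float64))   # image gradients
    # Structure-tensor entries, smoothed over a Gaussian window.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det_M = Sxx * Syy - Sxy ** 2
    trace_M = Sxx + Syy
    return det_M - k * trace_M ** 2

def top_corners(img, n=50, border=8):
    """Return (row, col) coordinates of the n strongest responses."""
    R = harris_response(img)
    R[:border, :] = R[-border:, :] = 0             # suppress image borders
    R[:, :border] = R[:, -border:] = 0
    idx = np.argsort(R, axis=None)[-n:]            # n largest, flattened
    return np.column_stack(np.unravel_index(idx, R.shape))
```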


3.2 Feature warping

After the feature extraction operation, the area around each extracted feature point is warped to match the scale and orientation of the feature points in the pre-stored map. The Affine transformation is used for this purpose as a linear transform for shifting, rotating, and scaling images which preserves the parallelism of straight lines and the ratios of distances between points lying on any straight line in the image.

Using the Affine transformation matrix $T_{2\times3}$, an $(x, y)$ pixel location in the original image is mapped to a pixel location $(u, v)$ in the output image computed as $[u, v]^T = T_{2\times3}\,[x, y, 1]^T$. At least three pixel locations in the original image, along with their corresponding locations in the output image, are needed to calculate the transformation matrix.

To perform this calculation, the output features are assumed to have a constant size, e.g. 5 × 5 square meters. The four corners of each output square feature and their equivalent locations in the map can then be computed as a function of the camera position and orientation, the camera lens angle, and the map scale. Once $T_{2\times3}$ is known, interpolation is used to compute intensity values at the integer $(u, v)$ locations of the output image.
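As a sketch of this step, the 2×3 matrix $T$ can be recovered from three point correspondences by solving a small linear system. The helper names below (`affine_from_points`, `apply_affine`) are illustrative assumptions, and the intensity interpolation at the output pixels is omitted.

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve for the 2x3 affine matrix T mapping src -> dst.

    src, dst: arrays of shape (3, 2) holding three (x, y) correspondences,
    so that [u, v]^T = T @ [x, y, 1]^T for each pair.
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    A = np.hstack([src, np.ones((3, 1))])   # rows are [x, y, 1]
    # Solve A @ X = dst; the two columns of X are the two rows of T.
    return np.linalg.solve(A, dst).T        # shape (2, 3)

def apply_affine(T, pts):
    """Map (N, 2) pixel locations through the affine transform T."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ T.T
```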

3.3 Feature matching

Normalized image correlation is used to maximize the invariance of the feature matching operation to the illumination conditions. This method is reliable and commonly used for template matching and feature tracking [16]. To reduce the complexity of the image correlation operation, the feature matching process is divided into two steps: FFT matching and Map Landmarks (ML) matching, as follows.

First, the Fast Fourier Transform (FFT) matching algorithm shown in Figure (1) is applied once at the beginning of the localization process.

Figure 1: FFT matching algorithm

Second, the ML matching algorithm shown in Figure (2) is applied to match a large number of small features (e.g. 16 × 16 pixel squares) with windows in the map. Each window in the map represents the area where the feature can be found. The location of this window inside the map is determined based on the expected orientation computed from the inertial sensors, and the coarse estimate of the camera location.

Figure 2: ML matching algorithm

Dividing the feature matching process into two steps aims to reduce its processing time, as image correlation is a time-consuming operation due to the large number of arithmetic operations required. The FFT matching algorithm aims to compute a coarse estimate of the camera location by matching a single large feature (e.g. a 64 × 64 pixel square) with the entire map. This matching is done in the frequency domain as a multiplication operation to reduce the processing time.

Two-dimensional normalized correlation in the spatial domain is expressed as follows:

$$c(m,n) = \sum_{i=m}^{m+M-1} \; \sum_{j=n}^{n+N-1} t(i-m,\, j-n) \cdot r(i,j) \qquad (1)$$

where $t$ represents a feature block of size $N \times M$, and $r$ represents the map.

Two-dimensional normalized correlation in the frequency domain is expressed as follows [6]:

$$C(k,l) = T(k,l)\, R^{*}(k,l) \qquad (2)$$

where $T$ and $R$ denote the discrete Fourier transforms of $t$ and $r$, respectively, after zero-padding $t$ to the same size as $r$.
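A minimal NumPy sketch of this frequency-domain matching, following Eq. (2), is given below. The per-window energy normalization of full normalized correlation is omitted for brevity, and the function name `fft_match` is an assumption for illustration.

```python
import numpy as np

def fft_match(template, ref):
    """Locate `template` in `ref` by correlation in the frequency domain.

    Implements C(k,l) = T(k,l) * conj(R(k,l)) after zero-padding the
    template to the reference size (Eq. 2); the peak of the inverse
    transform indicates the best-match shift (modulo circular wrap).
    """
    t = template - template.mean()           # zero-mean for robustness
    r = np.asarray(ref, dtype=np.float64)
    t_pad = np.zeros_like(r)                 # zero padding to map size
    t_pad[:t.shape[0], :t.shape[1]] = t
    C = np.fft.fft2(t_pad) * np.conj(np.fft.fft2(r))
    c = np.fft.ifft2(C)
    return np.unravel_index(np.argmax(np.abs(c)), c.shape)
```

The pointwise product in the frequency domain replaces the per-offset multiply-accumulate loop of Eq. (1) with two forward FFTs, one product, and one inverse FFT, which is the source of the speedup.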

Figure (3) shows the extracted features on the upper image and their equivalent match points on the lower image, which represents the map. The two photos were taken from different heights and orientations, and feature matching was performed using the ML matching algorithm.


Figure 3: Features extracted from the upper image and their matches found in the lower image

4 Extended Kalman Filter

The EKF is used for optimal estimation of the system state in VAINS based on the camera and IMU measurements. The system state vector is expressed as

$$X = \begin{bmatrix} q^e & b_g & V^e & b_a & P^e \end{bmatrix}^T \qquad (3)$$

where $q^e$ is the orientation of the vehicle body coordinate frame, $b_g$ is the bias of the gyroscope, $V^e$ is the velocity vector of the vehicle, $b_a$ is the bias of the accelerometer, and $P^e$ is the 3D position of the vehicle. The orientation, position, and velocity of the vehicle are measured with respect to the Earth Centered Frame (ECF) of coordinates.

An error-state Kalman filter is normally used in navigation problems, as its error state has smaller values, which gives numerical stability to the filter. The error state of this filter is expressed as

$$\tilde{X} = \begin{bmatrix} \delta\theta & \tilde{b}_g & \tilde{V}^e & \tilde{b}_a & \tilde{P}^e \end{bmatrix}^T \qquad (4)$$

where the symbol "∼" denotes an error vector whose value is the difference between the actual state and the estimated state, and $\delta\theta$ denotes the attitude-error vector which gives the error in estimating the orientation of the vehicle body coordinate frame. Each time a new image is captured, the system state and covariance matrix are augmented with the current camera pose estimate, and the captured image is then processed.

4.1 State Propagation

State propagation is performed each time an IMU measurement is received, to predict the system state and its error-state covariance matrix using the 6-degree-of-freedom dynamic model of the vehicle kinematics given in [13, 17]. Propagation of the system state is done by 4th-order Runge-Kutta numerical integration of the state equations [20].

The gyroscope and accelerometer measurements are assumed to be corrupted by zero-mean white Gaussian noise, and their biases are modeled as first-order random processes driven by the measurement noises. A first-order discrete-time model of the continuous-time state propagation is used, and a discrete-time state transition matrix is calculated along with the covariance matrix of the discrete-time system noise.
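A generic sketch of one such propagation step is shown below; the vehicle kinematics `f` (the quaternion, velocity, and position derivatives from [13, 17]) is treated as a black-box function, and the signature is an illustrative assumption.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One 4th-order Runge-Kutta step of the state equations x' = f(x, u).

    f  : vehicle kinematics (derivatives of the state in Eq. 3)
    x  : current state estimate as a NumPy vector
    u  : IMU measurement (gyro rates and accelerations)
    dt : propagation time step (20 ms at the 50 Hz IMU rate)
    """
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```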

4.2 Measurement update

Let $P^e_c$ and $q^c_e$ be the position of the camera frame origin in the global coordinate frame, and the quaternion that represents the orientation of the camera frame relative to the global frame, respectively. Let $P^b_c$ and $q^c_b$ be the position of the camera frame and the orientation of that frame relative to the body frame, respectively. If the position of a landmark $l$ in the global frame is $P^e_l$, then its position in the camera frame is expressed as

$$P^c_l = C^c_b \left( C^b_e \,(P^e_l - P^e_b) - P^b_c \right) = C^c_e \,(P^e_l - P^e_c) \qquad (5)$$

where $C^c_e$ is the Direction Cosine Matrix that represents the transformation between the global frame and the camera frame. The measurement vector $Z_l$, which represents the 2D projection of $l$ in the image plane, is expressed as

$$Z_l = \begin{bmatrix} u \\ v \end{bmatrix} = \frac{1}{z^c_l} \begin{bmatrix} x^c_l \\ y^c_l \end{bmatrix} + \eta \qquad (6)$$

where $\eta$ is a 2D image-noise vector with covariance matrix $R = \sigma^2_{IM} \cdot I_{2\times2}$. The residual $r_l = Z_l - \hat{Z}_l$ is the difference between the actual and expected measurements $Z_l$ and $\hat{Z}_l$, respectively, where $\hat{Z}_l$ is computed from the landmark's global position and the expected camera position and orientation obtained from the propagation process as

$$\hat{Z}_l = \begin{bmatrix} \hat{u}_l \\ \hat{v}_l \end{bmatrix} = \frac{1}{\hat{z}^c_l} \begin{bmatrix} \hat{x}^c_l \\ \hat{y}^c_l \end{bmatrix} \qquad (7)$$

where $r_l$ represents the error in estimating the vehicle position and orientation (the propagation error). The measurement matrix $H$ that relates the magnitude of $r_l$ to the magnitude of the error-state vector is given by

$$H = H_C \cdot C^c_b \begin{bmatrix} \left\lfloor C^b_e \,(P^e_l - P^e_b) \times \right\rfloor & 0_{3\times9} & -C^b_e \end{bmatrix} \qquad (8)$$

where $\lfloor \cdot \times \rfloor$ denotes the skew-symmetric cross-product matrix and

$$H_C = \frac{1}{\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \cdot \hat{P}^c_l} \begin{bmatrix} I_{2\times2} & -\hat{Z}_l \end{bmatrix}$$


In any newly captured image, many map landmark points can be found, and all of these feature points are used in a single update process. Therefore, the expected measurement vector, residual, and measurement matrix computed for each landmark are concatenated into a single vector (and matrix) to perform a batch update of the EKF.
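The following sketch illustrates Eqs. (5)-(7) and the batch stacking of the residuals; the function names and argument conventions are assumptions, and the corresponding stacking of the per-landmark $H$ matrices of Eq. (8) is analogous.

```python
import numpy as np

def predict_measurement(C_ce, P_e_c, P_e_l):
    """Project landmark P_e_l into the camera using Eqs. (5) and (7).

    C_ce : 3x3 direction cosine matrix (global -> camera frame)
    P_e_c: expected camera position in the global frame
    P_e_l: landmark position in the global frame (from the map)
    """
    p_c = C_ce @ (P_e_l - P_e_c)      # Eq. (5): camera-frame position
    return p_c[:2] / p_c[2]           # Eq. (7): normalized 2D projection

def batch_residual(C_ce, P_e_c, landmarks, measurements):
    """Stack the per-landmark residuals r_l = Z_l - Z_hat_l into one
    vector for the batch EKF update (H matrices are stacked likewise)."""
    residuals = [z - predict_measurement(C_ce, P_e_c, p)
                 for p, z in zip(landmarks, measurements)]
    return np.concatenate(residuals)
```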

5 Challenges in Using Low-Cost Sensors

MEMS-based sensors have large background noise and low accuracy. In an INS, the position residual is very sensitive to the orientation residual, as the orientation error leads to double-integrating a component of the gravitational force as if it were a real acceleration.

The orientation residual causes an error in calculating the location of the map window in which to search for a feature, an error in warping features using the Affine transformation, and a linearization error in all EKF matrices. As these problems can cause the EKF to diverge, a few techniques to mitigate them are described as follows.

5.1 Adding propagation constraints

A dynamic constraint check is applied at the end of each state propagation step to reject results with a high error value. The acceptance threshold is based on the dynamic specifications of the vehicle, for example the maximum speed, maximum acceleration, maximum climb rate, and the propagation time period.

Based on the added propagation constraints, a spherical volume centered at the previous location is constructed. This volume represents all possible locations for the propagation result. If the propagation result falls outside this volume, it is replaced by the point on the boundary of the volume lying in the same displacement direction from the previous point.

If the propagation constraint check leads to rejecting the propagation result, then the newly selected point is used only in the image processing procedure (Affine transform and image correlation). In that case, the EKF update process uses the original propagation result, so that the residual does not diverge.
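A minimal sketch of this clamping rule is given below. The radius formula shown is one plausible bound built from the maximum speed and acceleration; the paper derives its threshold from the full dynamic specifications of the vehicle, so treat the details as assumptions.

```python
import numpy as np

def constrain_propagation(p_prev, p_prop, v_max, a_max, dt):
    """Clamp a propagated position to the sphere of dynamically
    reachable locations centered at the previous position."""
    r_max = v_max * dt + 0.5 * a_max * dt ** 2   # assumed reachable radius
    step = p_prop - p_prev
    dist = np.linalg.norm(step)
    if dist <= r_max:
        return p_prop                            # result accepted as-is
    # Project onto the sphere boundary along the same displacement direction.
    return p_prev + step * (r_max / dist)
```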

5.2 Iterated Kalman update

The iterated Kalman filter is used to reduce the effect of the linearization error. In iteration $J$, the expected measurement vector is computed as a function of the state vector of the current iteration, $X^{(J)}_{K+1|K+1}$. The measurement equation is then linearized around the current state iterate to get the measurement matrix.

The residual of the current iteration is computed by subtracting the expected measurement of the current iteration from the real measured vector. The covariances of the residual and the Kalman filter gain are then computed, followed by the state correction vector $\Delta X^{(J)}$. The next state update $X^{(J+1)}_{K+1|K+1}$ is then computed from its previous iteration value $X^{(J)}_{K+1|K+1}$ and the state correction vector $\Delta X^{(J)}$.

These iteration steps are repeated until the state vector converges. The iteration starts with $X^{(0)}_{K+1|K+1} = X_{K+1|K}$. After the iterative procedure converges, the covariance matrix of the current state is updated. Figure 4 shows a block diagram of the iterated Kalman update system.

Figure 4: Block diagram of the system
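The loop below sketches the iterated update as described above: relinearize, compute the gain and the correction, and repeat until the correction is small. The convergence tolerance and the simplified residual form (following the paper's description rather than the full iterated-EKF residual with a relinearization term) are assumptions of the sketch.

```python
import numpy as np

def iterated_update(x_prop, P_prop, z, h, H_jac, R, iters=10, tol=1e-6):
    """Iterated EKF measurement update (Section 5.2).

    h(x) predicts the measurement and H_jac(x) linearizes it around the
    current iterate; both come from the models of Section 4.2.
    """
    x = x_prop.copy()                        # X(0) = X_{K+1|K}
    for _ in range(iters):
        H = H_jac(x)                         # relinearize each iteration
        S = H @ P_prop @ H.T + R             # residual covariance
        K = P_prop @ H.T @ np.linalg.inv(S)  # Kalman gain
        dx = K @ (z - h(x))                  # state correction Delta X(J)
        x = x + dx
        if np.linalg.norm(dx) < tol:         # convergence test
            break
    P = (np.eye(len(x)) - K @ H) @ P_prop    # covariance updated once
    return x, P
```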

5.3 Using CUDA to speed up Kalman filter cycles

The propagation and linearization errors that result when low-cost inertial sensors are used increase rapidly with the propagation time. One technique to reduce the propagation error is to increase the frequency of the Kalman filter cycles.

To speed up the Kalman filter cycles, the CUDA C language [2] is used to implement the FFT, the Harris corner detector, the 2D image correlation, and the Affine transform. CUDA C is a programming language used to implement algorithms with parallel computing on NVIDIA GPUs. This reduces the processing time of the captured images, and leads to a faster implementation of the proposed system.
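The paper's kernels are written in CUDA C. Purely to illustrate the same idea from Python, the sketch below offloads the FFT-matching correlation of Section 3.3 to the GPU using the CuPy library; this substitutes CuPy for the original CUDA C code and is not the paper's implementation.

```python
import numpy as np
import cupy as cp  # NumPy-compatible arrays backed by CUDA

def fft_match_gpu(template, ref):
    """Frequency-domain template matching (Eq. 2) executed on the GPU."""
    r = cp.asarray(ref, dtype=cp.float64)          # host -> device copy
    t = cp.zeros_like(r)                           # zero padding
    t[:template.shape[0], :template.shape[1]] = cp.asarray(template)
    C = cp.fft.fft2(t) * cp.conj(cp.fft.fft2(r))   # pointwise product on GPU
    c = cp.abs(cp.fft.ifft2(C))
    row, col = cp.unravel_index(cp.argmax(c), c.shape)
    return int(row), int(col)                      # device -> host scalars
```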

6 Experimental Results

Indoor experiments were performed to validate the presented system using the Razor IMU_9DOF electronic board, which contains an InvenSense ITG-3200 triple-axis digital-output gyroscope with a maximum angular rate of 300°/s, an Analog Devices ADXL345 triple-axis accelerometer, and a Honeywell HMC5843 triple-axis magnetometer. The software on the board measures the outputs of the three sensors and scales them at a 50 Hz IMU measurement rate.


The camera used in the indoor experiments is a 480 × 640 pixel USB camera configured to work in monochrome mode. The feature size is 15 × 15 pixels, and the number of Kalman iterations is 10. A laptop PC with an Intel Core i7 quad-core processor and 8 GB of DRAM is used, with an NVIDIA GeForce GT 525M GPU that has 96 cores and 1 GB of local DRAM.

To find the initial condition before starting the EKF, the initial orientation is computed using the gravitational force direction and the magnetic field direction, measured by the accelerometer and magnetometer, respectively. The height of the camera/IMU block above the floor in this indoor experiment is known, but an altimeter could be used in outdoor experiments to measure the altitude. A single feature matching is then used to compute the initial condition for the horizontal location of the vehicle. After computing the initial condition, the EKF cycles start.
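One common way to realize this initial-orientation step is a TRIAD-style construction from the two measured directions, sketched below; the frame conventions and the function name are assumptions and may differ from the paper's exact formulation.

```python
import numpy as np

def initial_orientation(accel, mag):
    """Coarse initial attitude from gravity and magnetic-field directions.

    accel: accelerometer reading while static (points opposite gravity)
    mag  : magnetometer reading, both expressed in the body frame
    Returns a DCM whose rows are the North, East, and Down axes expressed
    in body coordinates (maps body-frame vectors to the local frame).
    """
    down = -accel / np.linalg.norm(accel)   # gravity direction in body frame
    east = np.cross(down, mag)              # down x mag points east
    east /= np.linalg.norm(east)
    north = np.cross(east, down)            # complete the right-handed triad
    return np.vstack([north, east, down])
```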

In the first experiment, the camera/IMU block mounted on the vehicle moves along a known trajectory, and the algorithm generates the estimated trajectory using the EKF. The actual trajectory (dashed line) and the EKF-estimated trajectory (solid line) are shown in Figure (5).

Figure 5: The actual and estimated trajectories

The proposed system is capable of estimating the camera trajectory correctly, but there is a large oscillation around the real trajectory due to the large propagation error. This oscillation results from the low accuracy of the IMU used. The experiment also showed that without the dynamic model propagation constraint, the filter diverged due to the low accuracy of the inertial sensors used.

In the second experiment, the EKF cycle time when MATLAB (CPU) is used is compared to the case when MATLAB and CUDA (GPU) are used for parallel computing of the image processing step. The results in both cases are shown in Figure (6), and the speedup achieved by the GPU as a function of the number of features per image is shown in Figure (7).

Figure 6: CPU vs GPU cycle processing time

Figure 7: GPU/CPU speedup ratio

7 Conclusion

The main contribution of this paper is the introduction of modifications to VAINS in order to use low-cost inertial sensors. The experimental results showed that by inserting vehicle dynamic model constraints into the Kalman propagation process, and by increasing the frequency of the Kalman filter cycles using a CUDA implementation on an NVIDIA GPU, the divergence of the Kalman filter can be mitigated when low-cost inertial sensors are used.

Several limitations of the proposed algorithms are considered for future research. First, the Affine transform assumes that the land area under the vehicle is flat. A more complex image processing technique, such as an affine-invariant interest point algorithm, is needed so that navigation over complex terrain becomes feasible. Such a method is effective in tracking any feature from different directions regardless of the relative position between the camera and the interest point, but it is computationally intensive. Parallel computing using GPUs can reduce the processing time of this method.

Second, several additional sensors are used in the design of VAINS, such as the magnetometer and the altimeter. These sensors are used only during the initial phase to get a coarse estimate of the initial position of the vehicle over the map area. However, they could also be used as inputs to the Kalman filter estimator, and be treated as additional observers of the system state, like the camera.


References

[1] D. Bayard and P. Brugarolas. An Estimation Algorithm for Vision-Based Exploration of Small Bodies in Space. In American Control Conference, June 2005.

[2] NVIDIA Co. NVIDIA CUDA C Programming Guide, 2011. Version 4.1.

[3] S. Cooper and H. Durrant-Whyte. A Kalman filter model for GPS navigation of land vehicles. In Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, volume 1, pages 157-163, 1994.

[4] A. Davison and D. Murray. Simultaneous Localization and Map-Building Using Active Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):865-880, July 2002.

[5] E. Eade and T. Drummond. Scalable Monocular SLAM. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2006.

[6] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing. Prentice Hall, 2nd edition, 2002.

[7] C. Harris and M. Stephens. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.

[8] Congwei Hu, Wu Chen, Yongqi Chen, and Dajie Liu. Adaptive Kalman Filtering for Vehicle Navigation. Journal of Global Positioning Systems, 2(1):42-47, October 2003.

[9] Myung Hwangbo, Jun-Sik Kim, and T. Kanade. Inertial-aided KLT feature tracking for a moving camera. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1909-1916, October 2009.

[10] J. Langelaan. State Estimation for Autonomous Flight in Cluttered Environments. PhD thesis, Department of Aeronautics and Astronautics, Stanford University, March 2006.

[11] F. Mirzaei and S. Roumeliotis. A Kalman Filter-Based Algorithm for IMU-Camera Calibration: Observability Analysis and Performance Evaluation. IEEE Transactions on Robotics, 24(5):1143-1156, October 2008.

[12] A. Mourikis and S. Roumeliotis. A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation. In IEEE International Conference on Robotics and Automation, April 2007.

[13] A. Mourikis, N. Trawny, S. Roumeliotis, A. Johnson, A. Ansar, and L. Matthies. Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing. IEEE Transactions on Robotics, 25(2):264-280, April 2009.

[14] L. Ong, M. Ridley, J. Kim, E. Nettleton, and S. Sukkarieh. Six DOF decentralised SLAM. In Australasian Conference on Robotics and Automation (ACRA 2003), 2003.

[15] R. Prazenica, A. Watkins, and A. Kurdila. Vision-Based Kalman Filtering for Aircraft State Estimation and Structure from Motion. In AIAA Guidance, Navigation and Control Conference, 2005.

[16] S. Theodoridis and K. Koutroumbas. Pattern Recognition. Elsevier, 2006.

[17] N. Trawny, A. Mourikis, S. Roumeliotis, A. Johnson, and J. Montgomery. Vision-aided Inertial Navigation for Pin-Point Landing using Observations of Mapped Landmarks. Journal of Field Robotics, 24(5):357-378, May 2007.

[18] Tinne Tuytelaars and Krystian Mikolajczyk. Local Invariant Feature Detectors: A Survey. Foundations and Trends in Computer Graphics and Vision, 3(3):177-280, 2007.

[19] D. Zachariah and M. Jansson. Camera-Aided Inertial Navigation Using Epipolar Points. In IEEE/ION Position Location and Navigation Symposium (PLANS), May 2010.

[20] P. Zarchan and H. Musoff. Fundamentals of Kalman Filtering: A Practical Approach, volume 208 of Progress in Astronautics and Aeronautics. American Institute of Aeronautics and Astronautics, 2nd edition, 2005.

Biographies

Eng. Ahmed Zakaria graduated from the Electronics and Electrical Communications Department at Cairo University, Egypt, in 2006, and is currently an MSc student in the Electronics and Computer Communication division at Cairo University. His research interests include image processing and vision-aided navigation. Ahmed is a Senior Automation Engineer at SAS-Automation, Maadi, Cairo, Egypt.


Dr. Mohsen Mahroos graduated with Honours from the Electronics and Electrical Communications Department at Cairo University in 1987. He received his MSc and PhD degrees from the same Department, with specialization in Electronic Circuits and Systems, in 1990 and 2003, respectively. In 2004, he was appointed as an Assistant Professor in the same Department. Between 2009 and 2012, he was an Assistant Professor in the Electrical Engineering Department, Umm Al-Qura University, Makkah, Saudi Arabia. Between 1993 and 1995, he was a PhD candidate in the Computer Science Department, University of Saskatchewan, Canada, where he became a lecturer in 1995. His main research interests are computer architecture and parallel computing, artificial intelligence, signal and image processing, electronic circuits and systems, and integrated circuit design and fabrication.

Dr. Amin Nassar received his BSc from Cairo University in 1965 in electronics and communication engineering. He received his PhD in electronics and communication engineering in 1973 from Germany. He is an Emeritus Professor in the Electronics and Electrical Communications Department, Faculty of Engineering, Cairo University, Giza, Egypt.
