


Royal Institute of Technology

Key Technologies in Low-cost Integrated Vehicle Navigation Systems

Yueming Zhao

Doctoral Dissertation in Geodesy

Royal Institute of Technology (KTH) Division of Geodesy and Geoinformatics

10044 Stockholm

Sweden

September 2013


Yueming Zhao: Key Technologies in Low-cost Integrated Vehicle Navigation Systems

Supervisors: Professor Lars E. Sjöberg, Docent Milan Horemuž

Faculty Opponent:

Professor Hossein Nahavandchi, Norwegian University of Science and Technology, Department of Civil and Transport Engineering, Trondheim, Norway.

Evaluation committee:

- Professor Magnus Jansson, Signal Processing Lab (SB), KTH, Sweden.
- Professor Martin Vermeer, School of Engineering, Aalto University, Finland.
- Dr. Helen Rost, Blom Sweden AB, Sollentuna, Sweden.

© Yueming Zhao 2013

Printed by Universitetsservice US-AB, Drottning Kristinas väg 53B, 100 44 Stockholm, Sweden

TRITA SoM 2013-11
ISSN 1653-6126
ISRN KTH/SoM/13-11/SE
ISBN 978-91-7501-887-4


To my wife Yue Wang

and my father Qingyu Zhao


Abstract

Vehicle navigation systems incorporate on-board sensors and signal receivers to provide the positioning and guidance information necessary for land, marine, airborne and space vehicles. Among the different navigation solutions, the Global Positioning System (GPS) and the Inertial Navigation System (INS) are the two basic ones. Due to their complementary characteristics in many respects, the GPS/INS integrated navigation system has been a hot research topic in recent decades. Both the advantages and disadvantages of each individual system and of their combination are analysed in this thesis. Micro-Electro-Mechanical Systems (MEMS) inertial sensors have successfully solved the problems of price, size and weight associated with traditional INS, and hence are widely applied in GPS/INS integrated systems. The main problem of MEMS sensors is their large errors, which degrade the navigation performance at an exponential rate. By means of different methods, namely the autoregressive model, the Gauss-Markov process, Power Spectral Density and Allan Variance, we analyse the stochastic errors of the MEMS sensors. The test results show that the different methods give similar estimates of the stochastic error sources. An equivalent model of the coloured noise components (random walk, bias instability and ramp noise) is given. Three levels of GPS/IMU integration structures, i.e. loose, tight and ultra-tight GPS/IMU navigation, are introduced with a brief analysis of the characteristics of each. The principles of loose integration are presented with detailed equations, as are the INS navigation principles. The Extended Kalman Filter (EKF) is introduced as the data fusion algorithm, which is the core of the whole navigation system. Based on the system model, we show the propagation of the position standard errors with the tight integration structure under different scenarios. Even fewer than 4 observable GNSS satellites can contribute to the integrated system, especially for the orientation errors.
A real test with loose integration is carried out, and the EKF performance is analysed in detail.
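To illustrate the Allan variance analysis referred to above, the following Python sketch computes a minimal non-overlapped Allan variance over a set of cluster sizes. It is only an illustration of the standard technique, not the thesis implementation; the function name, the white-noise input and the sampling rate are invented for this example.

```python
import numpy as np

def allan_variance(data, fs, m_list):
    """Non-overlapped Allan variance of a raw sensor record.

    data   : 1-D array of sensor samples (e.g. gyro rates)
    fs     : sampling frequency [Hz]
    m_list : iterable of cluster sizes, in samples
    Returns (taus, avars) as numpy arrays.
    """
    data = np.asarray(data, dtype=float)
    taus, avars = [], []
    for m in m_list:
        k = len(data) // m              # number of full clusters
        if k < 2:
            continue
        # average each cluster of m consecutive samples
        means = data[:k * m].reshape(k, m).mean(axis=1)
        # Allan variance: half the mean squared successive difference
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

# For pure white noise the Allan deviation should fall as tau^(-1/2),
# i.e. the Allan variance as 1/tau.
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
taus, avars = allan_variance(noise, fs=100.0, m_list=[1, 10, 100, 1000])
```

On a real MEMS record, the log-log slope of the Allan deviation curve over different averaging times is what separates the noise terms (angle/velocity random walk, bias instability, rate ramp) discussed in Chapter 2.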

Since GPS receivers normally work together with a digital map, the map matching principle and its link-choosing problem are briefly introduced. We propose to solve this problem by lane detection from real-time images. The procedures for lane detection based on image processing are presented. Tests on highways, city streets and pathways are successfully carried out, and analyses with possible solutions are given for some special failure situations. To overcome the large error drift of the IMU, we propose to support the IMU orientation with camera motion estimation from image pairs. First, estimation theory and computer vision principles are briefly introduced. Then both point-match and line-match algorithms are given. Finally, the L1-norm estimator with balanced adjustment is proposed to deal with possible mismatches (outliers). Tests and comparisons with the RANSAC algorithm are also presented. Reflecting the latest trend towards MEMS chip sensors, the MEMS industry and market are introduced. To evaluate the MEMS navigation performance, we augment the EKF with an equivalent coloured noise model, and a basic observability analysis is given. A realistic simulated navigation test is carried out with single and multiple


MEMS sensors, and a sensor array of 5-10 sensors is recommended according to the test results and analysis. Finally, some suggestions for future research are proposed.

Keywords: GPS, INS, MEMS, error modelling, integration structure, lane detection, camera motion, balanced adjustment, sensor array
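The lane-detection pipeline summarised in the abstract (pre-processing, edge detection, then line detection in polar coordinates) can be sketched with a minimal Hough transform. This is a self-contained illustration of the standard technique using only NumPy, not the thesis code; the function name and the synthetic edge image are invented for the example.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=None):
    """Minimal Hough transform for straight lines.

    edge_img : 2-D boolean array of edge pixels (output of an edge detector)
    Each edge pixel votes for all lines rho = x*cos(theta) + y*sin(theta)
    that pass through it. Returns the accumulator and the rho/theta axes;
    accumulator maxima correspond to detected lines (lane markings).
    """
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))       # max possible |rho|
    if n_rho is None:
        n_rho = 2 * diag + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        # rho for every candidate theta, mapped to an accumulator row
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Synthetic "lane marking": a vertical edge at column 20 of a 50x50 image
img = np.zeros((50, 50), dtype=bool)
img[:, 20] = True
acc, rhos, thetas = hough_lines(img)
r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
# The strongest line should be theta = 0, rho = 20 (the column x = 20)
```

In a real lane detector the input would be the edge map of the region of interest below the horizon, and several accumulator peaks (left and right lane boundaries) would be kept and filtered by slope constraints.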


Acknowledgments

First of all, I would like to express my most sincere gratitude to my supervisors, Professor Lars E. Sjöberg and Docent Milan Horemuž, for their inspiring guidance, patient explanations and constructive suggestions during my studies at KTH. Thanks to the strict requirements and great support of Prof. Sjöberg, I gradually learned how to carry out my research, how to write formal papers and an academic thesis, and how to question and defend my scientific work in many different respects. Docent Horemuž is without doubt one of the nicest and kindest persons I have ever met. I always felt at ease discussing with him whatever detailed problems I encountered, from theoretical methods to code debugging, from results analysis to thesis writing. Without their continuous support and meticulous supervision, this thesis could not have been finished.

I wish to thank all the staff members of our division for their kind help, especially Dr. Huaan Fan. Right after my arrival in Sweden I was quite puzzled by various problems concerning both my studies and my everyday life. Dr. Fan was always warm-hearted and ready to help, both as a faculty member and as an elder friend from the same country. I will always remember his words, “to be a man with integrity”. Thanks to Dr. Masoud Shirazian for the discussions and help on GPS during my study period; thanks to Prof. Mehdi Eshagh, Docent Mohammad Bagherbandi and Mr. Erick Asenjo for their everyday help and informal discussions during the past years; thanks to my successive office mates Dr. Yuriy Reshetyuk, Dr. Prosper Ulotu, Dr. Patric Jansson and Ms. Uliana Danila for their useful advice and everyday company; and thanks to Dr. Johan V. Andersson, Dr. Jonas Ågren, Mr. Ronald Ssengendo, Mr. Majid Abrehdary and others for the discussions and help in everyday life. Special acknowledgement goes to Dr. Krzysztof Gajdamowicz, CEO of Visimind AB and also my Licentiate opponent, for his constructive suggestions on the content and writing of my Licentiate thesis, as well as for the project data used in this thesis. Sincere thanks to Prof. Georg G. Kampmann, the original author of the balanced adjustment method, for his support and comments on it.

I also want to express my sincere thanks to my former supervisor Prof. Bo Wang and the former Dean Prof. Yu Wang at BIT in China for their great support before I came to Sweden; to my Chinese friends Dr. Zhiqiang Zhou, Dr. Zhihui Zheng, Dr. Chengyou Wang, Mr. Zhongping Wu and Mr. Jing Meng, among others; as well as to my Chinese friends at KTH and other universities in Sweden, including but not limited to Dr. Xin Niu, Dr. Bo Mao, Dr. Yuheng Li, Dr. Yikang Ri, Dr. Yuchu Qin, Ms. Jingyi Lin, Ms. Nan Sheng, Ms. Weiwei Jin, Mr. Yajun Wei and Mr. Peng Qi. Thank you all for the happy times during these past years, even so far away from our home country. I will always cherish this invaluable memory and our friendship.


Thanks to the China Scholarship Council (CSC) for providing the scholarship during my PhD study period at KTH.

Finally, I would like to express my most special appreciation to my wife Yue Wang, the “third supervisor”, for her endless love, encouragement and patience during my PhD and working years, and my deepest gratitude to my parents and my dear sister for their limitless love and support throughout my life. It is all your love that has made me who I am today. No matter where I am, and no matter what I am doing, my heart will always belong to you.

Yueming Zhao
September 2013, KTH, Stockholm


Contents

Abstract ................................................................................................................... i
Acknowledgments .................................................................................................. iii
List of tables ............................................................................................................ viii
List of figures .......................................................................................................... ix
List of abbreviations .............................................................................................. xi

Chapter 1 Introduction
1.1 Navigation solutions .................................................................................................................. 1
1.2 GNSS positioning and its features ............................................................................................. 2
1.3 INS and its errors ....................................................................................................................... 4
1.4 Map matching and image-based navigation ............................................................................... 5
1.5 Integrated navigation systems .................................................................................................... 6
1.6 The author’s contribution ........................................................................................................... 8

Chapter 2 MEMS error modelling
2.1 MEMS inertial sensors ............................................................................................................... 11
2.1.1 Basic principles of MEMS inertial sensors ....................................................................... 12
2.1.2 MEMS error sources ......................................................................................................... 14
2.2 Methodology of MEMS error modelling .................................................................................. 14
2.2.1 Autoregressive model ........................................................................................................ 14
2.2.2 Gauss-Markov process ...................................................................................................... 16
2.2.3 Power Spectral Density ..................................................................................................... 18
2.2.4 Allan Variance ................................................................................................................... 19
2.3 Testing results ............................................................................................................................ 24
2.4 Discussion and conclusion on different error models ................................................................ 30
2.5 Equivalent coloured noise modelling ......................................................................................... 30
2.6 Summary .................................................................................................................................... 36


Chapter 3 GPS/IMU integrated navigation
3.1 Three levels of integration structures ......................................................................................... 37
3.2 Principle of loose integration ..................................................................................................... 40

3.2.1 GPS observations and positioning accuracy ..................................................................... 40
3.2.2 INS and navigation principles ........................................................................................... 42
3.2.3 IMU sensors calibration .................................................................................................... 47
3.2.4 INS initialization and alignment ....................................................................................... 48
3.2.5 INS error dynamics in the e-frame .................................................................................... 49

3.3 Extended Kalman filter and implementation ............................................................................. 51
3.3.1 State vector and system dynamic equations ...................................................................... 53
3.3.2 Differenced GPS observation equations ............................................................................ 56
3.3.3 System observation equations ........................................................................................... 59
3.3.4 Implementation structure of loose integration ................................................................... 61

3.4 Tests and analyses ...................................................................................................................... 62
3.4.1 Test setup and detailed steps ............................................................................................. 62
3.4.2 Error of tight integration on simulated observations ......................................................... 64
3.4.3 Positioning of loose integration on real observations ........................................................ 68
3.4.4 Analysis on Kalman filter performance ............................................................................. 71

3.5 Summary .................................................................................................................................... 74

Chapter 4 Vision aided map matching
4.1 Map matching and existing problems ........................................................................................ 75
4.2 Lane detection ............................................................................................................................ 76

4.2.1 Pre-processing ................................................................................................................... 76
4.2.2 Edge detection ................................................................................................................... 79
4.2.3 Line detection .................................................................................................................... 83
4.2.4 Practical constraints and difficulties .................................................................................. 84

4.3 Experiments and analyses .......................................................................................................... 85
4.3.1 Highway tests .................................................................................................................... 85
4.3.2 Pathways around KTH ...................................................................................................... 90

4.4 Existing problems and discussion .............................................................................................. 95
4.5 Lane detection applications for map matching .......................................................................... 98
4.6 Summary .................................................................................................................................... 98

Chapter 5 Camera motion estimation from images
5.1 Introduction ................................................................................................................................ 100
5.2 Optimal and robust estimation ................................................................................................... 100
5.2.1 Different norms and linear programming .......................................................................... 101
5.2.2 Robust estimation with balanced adjustment .................................................................... 104
5.2.3 RANSAC ........................................................................................................................... 107

5.3 Camera motion estimation from image pairs ............................................................................. 110
5.3.1 Geometry of a pinhole camera .......................................................................................... 111
5.3.2 Basic formulas with projective coordinates ...................................................................... 113
5.3.3 Geometry of point/line motion in image pairs .................................................................. 114


5.4 Statement of the problem and the proposed solution ................................................................. 115
5.4.1 Point matches algorithm .................................................................................................... 115
5.4.2 Line matches algorithm ..................................................................................................... 118
5.4.3 Proposed motion estimation solution with detailed equations .......................................... 119

5.5 Support of relative orientation between camera and IMU ......................................................... 122
5.6 Tests with two scenarios ............................................................................................................ 123
5.6.1 Detailed implementation and comparisons ....................................................................... 125
5.6.2 Point matches vs. point/line matches ................................................................................ 125
5.6.3 Non-balanced adjustment vs. balanced adjustment ........................................................... 128
5.6.4 Comparisons with RANSAC ............................................................................................ 129

5.7 Summary .................................................................................................................................... 132

Chapter 6 MEMS sensors and performance evaluation
6.1 Introduction ................................................................................................................................ 133
6.2 MEMS industry and market background ................................................................................... 133
6.3 Augmented Kalman filter with MEMS sensor errors ................................................................ 136
6.4 System observability analysis .................................................................................................... 140
6.4.1 Observability definition and properties ............................................................................. 141
6.4.2 Observability analysis of MEMS coloured noise in integrated system ............................. 142

6.5 Simulation tests for MEMS chip performance evaluation ......................................................... 145
6.5.1 Test set-up and the trajectory ............................................................................................ 146
6.5.2 INS navigation error propagations .................................................................................... 147
6.5.3 Performance evaluation and analysis ................................................................................ 149

6.6 Summary .................................................................................................................................... 155

Chapter 7 Conclusions and proposals on further research
7.1 Summary and conclusions ......................................................................................................... 157
7.2 Proposals on further research ..................................................................................................... 159

References .......................................................................................................................... 161

Appendix
Appendix A ...................................................................................................................................... 171
Appendix B ...................................................................................................................................... 172
Appendix C ...................................................................................................................................... 176
Appendix D ...................................................................................................................................... 177
Appendix E ....................................................................................................................................... 179
Appendix F ....................................................................................................................................... 181


List of tables

Table 1.1. Comparison of existing GNSS systems ........................................................................................ 2
Table 2.1. Summary of PSD and AV ............................................................................................................. 21
Table 2.2. Summary of characteristic curve slopes and coefficient values ................................................... 22
Table 2.3. Error coefficients for gyros ........................................................................................................... 29
Table 2.4. Error coefficients for accelerometers ............................................................................................ 29
Table 5.1. Estimated parameters with point matches and point/vertical line matches (Acceleration, noise std. error: 1 pixel) ......................................................................................... 125
Table 5.2. Estimated parameters with point matches and point/vertical line matches (Acceleration, noise std. error: 2 pixels) ....................................................................................... 126
Table 5.3. Estimated parameters with point matches and point/line matches (Turning, noise std. error: 1 pixel) ................................................................................................. 127
Table 5.4. Estimated parameters with point matches and point/line matches (Turning, noise std. error: 2 pixels) ............................................................................................... 127
Table 5.5. Estimated parameters by non-balanced and balanced adjustments (1 outlier within 10 point matches) ............................................................................................... 128
Table 5.6. Estimated parameters by non-balanced and balanced adjustments (4 outliers within 10 point matches, i.e. maximum number of outliers) ....................................... 128
Table 5.7. Estimated values by non-balanced and balanced adjustments (1 outlier within 10 point matches) ............................................................................................... 130
Table 5.8. Estimated values by non-balanced and balanced adjustments (4 outliers within 10 point matches, i.e. maximum number of outliers) ....................................... 130


List of figures

Fig. 1.1. Benefits of INS/DGPS integration ................................................................................................... 6
Fig. 1.2. Integrated navigation on an airplane ................................................................................................ 8
Fig. 1.3. The thesis structure .......................................................................................................................... 10
Fig. 2.1. The Crista IMU produced by Cloud Cap Technology ..................................................................... 11
Fig. 2.2. The development of MEMS INS ..................................................................................................... 12
Fig. 2.3. Left: Illustration of the Coriolis effect. Right: A tuning-fork angular rate sensor based on the Coriolis effect ................................................................................................................................... 13
Fig. 2.4. Schematic structure of an accelerometer ......................................................................................... 13
Fig. 2.5. ACF of GM process ......................................................................................................................... 17
Fig. 2.6. Hypothetical gyro in single-sided PSD form ................................................................................... 19
Fig. 2.7. Schematic algorithm of the overlapped AV ..................................................................................... 20
Fig. 2.8. Allan variance analysis noise term results ....................................................................................... 23
Fig. 2.9. ISIS-IMU ......................................................................................................................................... 24
Fig. 2.10. ACF of AR(2) residuals for the Y-accelerometer .......................................................................... 25
Fig. 2.11. X-gyro PSD ................................................................................................................................... 26
Fig. 2.12. X-gyro PSD after frequency averaging ......................................................................................... 27
Fig. 2.13. AD plot for 3 gyros ........................................................................................................................ 28
Fig. 2.14. AD plot for Y-gyro with percentage errors (I bar) ........................................................................ 28
Fig. 2.15. AD plot for 3 accelerometers ......................................................................................................... 29
Fig. 3.1. Loose, tight and ultra-tight GPS/INS navigation systems ............................................................... 37
Fig. 3.2. Classic definition of GPS/INS tight integration ............................................................................... 38
Fig. 3.3. Static PPP positioning error ............................................................................................................. 41
Fig. 3.4. Precision of North coordinates in kinematic mode .......................................................................... 42
Fig. 3.5. The e-frame and the n-frame ........................................................................................................... 43
Fig. 3.6. The circular progress of EKF .......................................................................................................... 52
Fig. 3.7. The offset between GPS antenna and IMU ..................................................................................... 59
Fig. 3.8. Structure of loose GPS/INS integration ........................................................................................... 61
Fig. 3.9. Flowchart of GPS/INS integration initialization algorithm ............................................................. 63
Fig. 3.10. Flowchart of GPS/INS integration dynamic process ..................................................................... 64
Fig. 3.11. Test flight path ............................................................................................................................... 65
Fig. 3.12. Position errors of INS + 4 satellites with 40 s GPS outage ............................................................ 65
Fig. 3.13. Orientation errors of INS + 4 satellites with 40 s GPS outage ...................................................... 66
Fig. 3.14. Position errors of INS + 2 satellites with 40 s GPS outage ........................................................... 67
Fig. 3.15. Orientation errors of INS + 2 satellites with 40 s GPS outage ...................................................... 67
Fig. 3.16. Setup of GPS antenna and IMU ..................................................................................................... 68
Fig. 3.17. Position errors with EKF ............................................................................................................... 69
Fig. 3.18. Velocity errors with EKF ............................................................................................................... 70
Fig. 3.19. Orientation errors with EKF .......................................................................................................... 71
Fig. 3.20. Estimated positioning errors within 1 minute ................................................................................ 72
Fig. 3.21. Estimated positioning errors within 150 seconds .......................................................................... 73
Fig. 4.1. The principles of spatial filtering ..................................................................................................... 79
Fig. 4.2. Common edge detector masks and the first-order derivatives ........................................................ 81
Fig. 4.3. 2D convolution ................................................................................................................................ 82
Fig. 4.4. A straight line with polar coordinates .............................................................................................. 83

Page 14: Key Technologies in Low-cost Integrated Vehicle Navigation ...656086/FULLTEXT01.pdf · Dr. Patric Jansson, Ms. Uliana Danila for their useful advices and everyday company; Thanks

x

Fig. 4.5. Test driving on Road 68. .................................................................................................................. 86 Fig. 4.6. A general high way grayscale image. ............................................................................................... 87 Fig. 4.7. ROI of Fig. 4.6. ................................................................................................................................. 87 Fig. 4.8. Edge detection .................................................................................................................................. 87 Fig. 4.9. Hough transform image. ................................................................................................................... 88 Fig. 4.10. Detected lines. ................................................................................................................................ 88 Fig. 4.11. Delete redundant lines. ................................................................................................................... 88 Fig. 4.12. The frame with other vehicle and turning. ...................................................................................... 89 Fig. 4.13. ROI of Fig. 4.12. ............................................................................................................................. 89 Fig. 4.14. Edge detection within ROI ............................................................................................................. 89 Fig. 4.15. Detected lines. ................................................................................................................................ 90 Fig. 4.16. Drottning Kristinas väg at the L-building of KTH. ........................................................................ 90 Fig. 4.17. Edge detection within ROI. ............................................................................................................ 91 Fig. 4.18. 
Detected lines. ................................................................................................................................ 91 Fig. 4.19. Y junction on Körsbärsvägen besides KTH. .................................................................................. 92 Fig. 4.20. Edge detection within ROI. ............................................................................................................ 92 Fig. 4.21. Detected lines. ................................................................................................................................ 92 Fig. 4.22. Delete redundant lines. ................................................................................................................... 93 Fig. 4.23. At the center of Y junction. ............................................................................................................ 93 Fig. 4.24. Edge detection within ROI. ............................................................................................................ 93 Fig. 4.25. Only one road edge detected (one curve approximated with 2 lines). ............................................ 94 Fig. 4.26. After the turning. ............................................................................................................................ 94 Fig. 4.27. Road edge detection (within ROI). ................................................................................................. 95 Fig. 4.28. Large shadows covering the road edge. .......................................................................................... 95 Fig. 4.29. Scene with lines parallel with the lanes. ......................................................................................... 96 Fig. 4.30. Complex scenes beside the city street (Valhallavägen nearby KTH). ............................................ 97 Fig. 5.1. Flowchart of RANSAC algorithm. 
................................................................................................... 107 Fig. 5.2. The relation between outliers’ proportion and the minimum number of iterations. ......................... 109 Fig. 5.3. Principle of a pinhole camera. .......................................................................................................... 110 Fig. 5.4. The geometry of a pinhole camera (positive image)......................................................................... 111 Fig. 5.5. The geometry of a pinhole camera as seen from the X axis. ............................................................ 112 Fig. 5.6. The geometry of line motion. ........................................................................................................... 114 Fig. 5.7. Epipolar geometry between two views. ............................................................................................ 116 Fig. 5.8. Two line segments in correspondence .............................................................................................. 118 Fig. 5.9. The acceleration scene. ..................................................................................................................... 124 Fig. 5.10. The turning scene............................................................................................................................ 124 Fig. 5.11. Yaw angle errors with different number of outliers. ....................................................................... 129 Fig. 5.12. Elapsed time with different number of outliers. ............................................................................. 131 Fig. 6.1. MEMS market forecast per application. ........................................................................................... 134 Fig. 6.2. MEMS market forecast ..................................................................................................................... 135 Fig. 6.3. 
Maturity of MEMS devices in 2012. ................................................................................................ 135 Fig. 6.4. iphone 4 main board top side with the MEMS chips L3G4200D and LIS331DLH ......................... 145 Fig. 6.5. Test driving path. .............................................................................................................................. 147 Fig. 6.6. Position standard errors. ................................................................................................................... 148 Fig. 6.7. Orientation standard errors. .............................................................................................................. 148 Fig. 6.8. Position errors due to algorithm limitations(sampling rate: 10 Hz) .................................................. 150 Fig. 6.9. Position errors due to algorithm limitations(sampling rate: 100 Hz). ............................................... 150 Fig. 6.10. Single MEMS chip positioning errors (one instance). .................................................................... 151 Fig. 6.11. Five MEMS chips positioning errors and their mean values. ......................................................... 152 Fig. 6.12. Single MEMS chip orientation errors (one instance)...................................................................... 153 Fig. 6.13. Orientation errors of 5 MEMS chips and their mean value (one instance). .................................... 154 Fig. 6.14. Standard error of the mean values .................................................................................................. 155


List of abbreviations

ACF  Autocorrelation function
AD  Allan Deviation
AR  Autoregressive
AV  Allan Variance
b-frame  body frame
c-frame  camera frame
CG  center of gravity
DPR  Delta Pseudorange
e-frame  Earth-Centered-Earth-Fixed Cartesian frame
EKF  Extended Kalman Filter
GM process  Gauss-Markov process
GNSS  Global Navigation Satellite System
GPS  Global Positioning System
i-frame  inertial frame
INS  Inertial Navigation System
IMU  Inertial Measurement Unit
LTI  Linear Time-Invariant
LTV  Linear Time-Variant
LVN  Land Vehicle Navigation
MCU  Micro Controller Unit
MEMS  Micro Electrical Mechanical Sensor
M-estimation  Maximum Likelihood Estimation
MM  Map Matching
NED  North-East-Down
n-frame  navigation frame
RMS  Root Mean Square
RTK  Real Time Kinematic
PDOP  Position Dilution of Precision
PPP  Precise Point Positioning
PR  Pseudorange
PSD  Power Spectral Density
SBC  Schwarz's Bayesian Criterion
SINS  Strapdown Inertial Navigation System
TDF  Two-degree of Freedom
ZVUP  Zero Velocity Update


Chapter 1

Introduction

Navigation and guidance are in great demand in both industrial and general civilian applications. GPS is the most widely used navigation tool, but it has several limitations, e.g. the receiver must be in an open area to receive signals from at least 4 satellites. Therefore GPS combined with other navigation sensors is a more reliable solution.

The fields of navigation applications can be divided into four general categories: land, marine, aeronautic and space navigation. Navigation users fall into two groups: civilian and military. Civilian users are concerned with commercial and recreational applications, while military users focus on vehicle and weapon guidance. In this thesis we mainly focus on civilian land vehicle navigation (LVN) applications.

For general civilian navigation applications, conventional inertial sensors are still expensive. This motivates the use of cheap MEMS sensors, especially in consumer electronic products.

1.1 Navigation solutions

Navigation is the research field concerned with monitoring and controlling the movement of a craft or vehicle from one place to another; it deals with trajectory determination and guidance of moving objects in real time. Trajectory determination refers to determining the state vector of an object at any given time; the state vector can include position, velocity, attitude and/or other parameters. Guidance, on the other hand, forces the moving object onto a predetermined route to reach a given destination.

Navigation methods have evolved gradually throughout history. In general, there are several basic navigation solutions: radio navigation (e.g. GPS, terrestrial radio navigation), dead reckoning (e.g. INS), image-based navigation (e.g. camera, laser scanner) and celestial navigation. Each solution has its advantages and disadvantages (and different fields of application). The first two will be introduced in detail in the following sections.

The performance of a navigation system is characterized by a set of quality parameters, including accuracy, continuity, reliability, update rate, etc. Detailed definitions of these quality parameters can be found in the 2001 U.S. Federal Radio Navigation Plan (Department of Defense and Department of Transportation, 2002).


Among the different performance requirements for a navigation system, there are two basic trade-offs: accuracy versus real-time operation and accuracy versus cost (weight, volume). A large amount of theoretical research has been carried out to resolve or balance these two trade-offs in different applications. At the same time, new sensor fabrication techniques, e.g. fibre optic gyroscope (FOG) sensors, have greatly accelerated the modernization and improved the accuracy of navigation systems.

The LVN system is cost-sensitive but has only moderate accuracy requirements. Our motivation is therefore to integrate different low-cost sensors into one navigation system, so that the accuracy and reliability of the whole system are increased at a reasonable cost.

1.2 GNSS positioning and its features

As of April 2013, there are only two operational Global Navigation Satellite Systems (GNSS), namely the American GPS and the Russian GLONASS. In addition, two other systems are under construction, namely the European Galileo and the Chinese Compass systems. A comparison of these four navigation systems is given in Table 1.1.

Table 1.1 Comparison of existing GNSS systems (from Wikipedia, GNSS)

| System | GPS | GLONASS | COMPASS | Galileo |
| Political entity | United States | Russian Federation | China | European Union |
| Coding | CDMA | FDMA/CDMA | CDMA | CDMA |
| Orbital height | 20,180 km (12,540 mi) | 19,130 km (11,890 mi) | 21,150 km (13,140 mi) | 23,220 km (14,430 mi) |
| Period | 11.97 hours (11 h 58 m) | 11.26 hours (11 h 16 m) | 12.63 hours (12 h 38 m) | 14.08 hours (14 h 5 m) |
| Revolutions per sidereal day | 2 | 17/8 | 17/10 | 17/10 |
| Number of satellites | At least 24 | 24 operational, 1 in preparation, 2 in maintenance, 3 in reserve, 1 in tests | 5 geostationary orbit (GEO) satellites, 30 medium Earth orbit (MEO) satellites | 4 test satellites in orbit, 22 operational satellites budgeted |
| Frequency | 1.57542 GHz (L1), 1.2276 GHz (L2) | around 1.602 GHz (SP), around 1.246 GHz (SP) | 1.561098 GHz (B1), 1.589742 GHz (B1-2), 1.20714 GHz (B2), 1.26852 GHz (B3) | 1.164-1.215 GHz (E5a, E5b), 1.260-1.300 GHz (E6), 1.559-1.592 GHz (E2-L1-E1) |
| Status | Operational | Operational, CDMA in preparation | 15 satellites operational, 20 additional satellites planned | In preparation |
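The orbital periods in Table 1.1 follow directly from the orbital heights via Kepler's third law. The short sketch below (Python; the mean Earth radius used here is an assumed round value, and the check against the table is illustrative) verifies the GPS and GLONASS rows:

```python
import math

MU_EARTH = 398600.4418   # Earth's gravitational parameter [km^3/s^2]
R_EARTH = 6371.0         # mean Earth radius [km] (assumed round value)

def orbital_period_hours(height_km):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + height_km                        # semi-major axis [km]
    t_seconds = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    return t_seconds / 3600.0

# Heights from Table 1.1; results land close to the tabulated periods
print(orbital_period_hours(20180.0))   # GPS: close to 11.97 h
print(orbital_period_hours(19130.0))   # GLONASS: close to 11.26 h
```

The small residual differences come from the rounded Earth radius and orbital heights.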

A GNSS is a space-based satellite system that provides reliable positioning and time information in all weather, at all times, anywhere on or near the Earth. The best-known system, GPS, was established in 1973 by the U.S. Department of Defense. It is composed of three segments: the space segment, the control segment and the user segment. The space segment consists of 24 satellites in six orbital planes, each satellite completing two revolutions per sidereal day. Among the different navigation methods, GPS navigation is without doubt the most commonly used one. Therefore we use GPS in this thesis, although in general any other GNSS could serve as well.

The most important advantage of GPS positioning is that the positioning errors are bounded. Once signals from more than 4 satellites are received with a suitable PDOP (Position Dilution of Precision), the quality of the positioning results can be guaranteed, i.e. the errors are limited. Other important advantages are the light weight and low price. As a result, GPS positioning has become more and more popular since its introduction, and nowadays GPS is used almost everywhere: in cell phones, cars, laptops and so on. The use of GPS (or GNSS) is limited only by our imagination. Current GPS receiver chips are reaching a unit price of $5, which is predicted to drop to about $1 in the future.

With a network of fixed, ground-based reference stations, differential GPS (DGPS) techniques can be applied, i.e. simultaneous observations from a pair of GPS receivers are used to perform relative positioning of high accuracy. DGPS can improve the positioning accuracy from the 15-metre nominal GPS accuracy to the centimetre level. Therefore DGPS is adopted in our navigation system, and in open areas the navigation system mainly relies on the positioning results from DGPS.

However, GPS positioning has several inherent shortcomings. The most important problem is signal outage. It is quite common that satellite signals are blocked in urban, forest and mountainous areas, as well as indoors and underground. Furthermore, for highly dynamic movements, signal acquisition and tracking are difficult. The availability of satellite signals must be considered in these situations. Moreover, the security of GPS signals is also a problem: satellite signals are vulnerable to interference and spoofing. The security of wireless signals, including satellite signals, is always a vital issue for electronics engineers, especially in communication and military applications. Theoretically, a 1-W jammer located 100 km from the GPS antenna can prevent the acquisition of the C/A code (Schmidt, 2010).


Another problem related to high-dynamics applications is the low update frequency. The update frequency of GPS receivers is normally 1-10 Hz, which is 1-2 orders of magnitude lower than that of an Inertial Navigation System (INS). For sports cars and airplanes this update frequency is too low to be acceptable. With the fast development of CPUs and ongoing research, this problem will hopefully be solved gradually. In addition, with only one GPS receiver or one antenna available, no attitude information can be obtained from GPS navigation. This is a serious loss of information in many cases, such as airplane navigation.

1.3 INS and its errors

INS is based on Newton's second law and has many advantages as a means of navigation. The most important one is that it does not rely on external information and does not radiate any energy when operating. It is therefore an autonomous or self-contained navigation system, which makes it well suited for military applications.

INS is mainly divided into two types: platform INS and strapdown INS (SINS). SINS is becoming dominant thanks to the powerful computation capability of embedded MCUs (Micro Controller Units), and hence SINS is adopted in our navigation system. The advent of MEMS (Micro-Electro-Mechanical Systems) has significantly reduced both price and weight. Although the accuracy of current MEMS sensors is still low, they have become a hot research topic ever since they were adopted in the navigation field; their accuracy performance is improving rapidly while the price remains very low.

INS is very accurate over short periods and has a high update frequency, usually at least 100 Hz. On the other hand, a highly accurate INS is expensive, heavy and bulky. Furthermore, the fine initial alignment is tricky and time-consuming. The biggest problem of INS is the sensor errors, whose mean value is not zero but keeps increasing with time. This physical phenomenon cannot be completely eliminated, but the errors can be modelled and thereby reduced. To obtain precise noise parameters, the error data usually have to be recorded and tested over a long time beforehand. This step is crucial for reducing the effects of the different error sources.

MEMS technology is a very active development area and has been widely used in consumer electronic products such as cell phones, tablets, cameras and GPS navigators since its early emergence (Nasiri, 2005). However, the accuracy of MEMS sensors degrades fast due to their large error drift characteristics, so stand-alone MEMS accuracy remains at a very low level.

There are different methods to model and estimate the inertial sensor stochastic errors, such as the Autoregressive (AR) model (Babu et al., 2004, 2008; Nassar, 2005), the Gauss-Markov (GM) model (Mohammed and Spiros, 2009) and the Allan Variance (AV) (Kim et al., 2004; El-Sheimy, 2008). Hou (2004) verified these methods for modelling inertial sensor errors. We carry out similar tests on our IMU with these methods and derive a MEMS-based IMU error model. Based on this noise model, the coloured noise components can be augmented into the Kalman filter of the integrated system model.
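To illustrate the Allan variance approach, the sketch below (a minimal non-overlapping Allan deviation in Python/NumPy; the input is synthetic white noise, not data from the thesis IMU) shows the characteristic drop of the deviation with cluster size for a white-noise-dominated sensor:

```python
import numpy as np

def allan_deviation(rates, m):
    """Non-overlapping Allan deviation of a rate series for cluster size m."""
    n = len(rates) // m                                # number of full clusters
    cluster_means = rates[: n * m].reshape(n, m).mean(axis=1)
    avar = 0.5 * np.mean(np.diff(cluster_means) ** 2)  # Allan variance
    return np.sqrt(avar)

# Synthetic gyro output: pure white noise (angle random walk region)
rng = np.random.default_rng(0)
rates = rng.normal(0.0, 0.1, size=200_000)             # [deg/s], sigma = 0.1

# For white noise the Allan deviation falls roughly as 1/sqrt(cluster size)
for m in (1, 100, 10_000):
    print(m, allan_deviation(rates, m))
```

On a real IMU log, plotting the deviation over many cluster sizes on log-log axes separates the noise terms (white noise, bias instability, random walk) by their slopes.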

1.4 Map matching and image-based navigation

Nowadays a digital map is often available to work with GPS receivers in different applications, and therefore map matching (MM) is another important positioning method in addition to GPS and IMU sensors.

MM algorithms use real-time positioning data to determine the location of a vehicle on a road segment. In general, MM can be categorised into four groups: geometric, topological, probabilistic, and other advanced techniques, e.g. fuzzy logic (Quddus et al., 2003 and 2007). The geometric and topological algorithms have a basic problem of choosing the correct link at a junction when the GPS heading information is inaccurate due to low speed. It is common that a vehicle has to decelerate and stop at a street junction, waiting for the traffic light. During this waiting time, MM algorithms may give wrong link information due to IMU drift errors and the inaccurate GPS heading.

To solve the problem of inaccurate heading direction, we integrate the navigation system with a camera, so that reliable rotation information can be provided by the camera motion in real-time navigation. Motion estimation from images is a basic topic of computer vision (Hartley and Zisserman, 2004, pp. 237-260), the discipline of acquiring, processing, analysing and understanding images in order to produce numerical or symbolic information. Computer vision has many application fields, including positioning and navigation, e.g. for the motion of an autonomous vehicle or mobile robot.

We mainly apply models and theories from computer vision on rotation and translation estimation from image sequences to the navigation problem. In real-time applications, outliers in the data set are a common problem. To handle this practical problem, a balanced adjustment is proposed and analysed in detail in Chapter 5 (Kampmann, 2009; Jurisch and Kampmann, 1998).

On the other hand, in LVN the cheap IMU sensors have large drift errors in attitude determination when the car is not accelerating, which greatly degrades the navigation accuracy. Therefore, lane detection from real-time images can be used to estimate the rotation angles between images, and this rotation estimate is an important support for the IMU in the integrated system.

1.5 Integrated navigation systems

The GPS/INS integrated navigation system is without doubt a hot research topic, owing to the complementary error characteristics of the two systems. The INS position errors are small in the short term but grow without bound over time. GPS position errors, on the other hand, are not as small in the short term, but they remain bounded over time. However, GPS signal outage is always a severe problem for GPS positioning; e.g. a 3-minute GPS signal outage can cause a positioning error of up to 1 km for a vehicle travelling at 20 km/h. Moreover, the normal GPS positioning update frequency is 1 Hz, which on its own is not suitable for highly dynamic applications. With an INS, the integrated system can provide at least 100 updates during each GPS update interval, which suits such situations well. The benefits of INS and DGPS integration are summarized in Fig. 1.1.

Fig. 1.1 Benefits of INS/DGPS integration (from Skaloud, 1999)
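The outage figure quoted above can be reproduced with a one-line dead-reckoning calculation: an uncompensated accelerometer bias integrates twice into position, so the error grows quadratically with time. In the sketch below (Python; the 0.06 m/s² bias is an illustrative value for a low-cost sensor, not a measured one), a 3-minute outage already produces close to 1 km of drift:

```python
def drift_error(bias_mps2, t_seconds):
    """Position error from double-integrating a constant accelerometer bias."""
    return 0.5 * bias_mps2 * t_seconds ** 2   # e = (1/2) * b * t^2

# Illustrative low-cost accelerometer bias (assumed): 0.06 m/s^2
print(drift_error(0.06, 180.0))   # 972.0 m after a 3-minute GPS outage
```

The quadratic growth is why even a modest bias dominates the error budget within minutes, independently of the vehicle speed.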


There are several architectures for GPS/INS integration, namely the loose, tight and ultra-tight architectures. Initially, two broad classes of integration structure were developed: loose and tight coupling (Grewal et al., 2007). In recent years a third class has been proposed, i.e. deep or ultra-tight integration (Sun, 2010). The salient difference among these couplings is the level at which the INS and GPS observables are combined. The deeper GPS and INS are integrated, the more information we can extract, but also the more the two subsystems depend on each other. Detailed information about GPS/INS integration architectures is given in Chapter 3.

Traditionally, GPS and INS are coupled through a Kalman filter that processes the raw observables to obtain position, velocity and time. The Kalman filter is the core algorithm of the navigation system: it fuses the input data from GPS and IMU and estimates position, velocity and orientation in real time. It is a powerful tool for error estimation and data fusion, taking advantage of the different characteristics of the individual systems to provide an integrated navigation solution whose performance is superior to that of either subsystem. A key procedure of the Kalman filter is estimating the drifting parameters of the IMU sensors using the statistical information of GNSS and IMU, so that the IMU can provide an inertial navigation solution with better accuracy during GPS signal outages, and the improved position and velocity estimates from the IMU can be used to reacquire the GPS signal much faster when it becomes available again.

As introduced above, a camera can be integrated into the navigation system to track the traffic lanes and determine the camera motion from real-time images. The lanes or road edges detected and tracked in the image sequences can solve the link-choosing problem in MM, and the camera motion estimation can provide reliable rotation information to support the IMU. Therefore, our integrated navigation system is composed of GPS, INS and a camera (images). An explicit example of such an integrated navigation system is shown in Fig. 1.2.


Fig. 1.2 Integrated navigation on an airplane (from Skaloud, 1999)
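The fusion principle described above can be illustrated with a deliberately minimal one-dimensional Kalman filter: the IMU acceleration drives a high-rate prediction of position and velocity, and each (much less frequent) GPS position fix corrects the drifting prediction. All numbers below (rates, noise levels) are illustrative assumptions, not the values used later in the thesis:

```python
import numpy as np

def fuse(accels, gps_fixes, dt, gps_every, q=1e-3, r=4.0):
    """1D position/velocity Kalman filter: predict with IMU, update with GPS."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [p, v]
    B = np.array([0.5 * dt**2, dt])         # acceleration input mapping
    H = np.array([[1.0, 0.0]])              # GPS measures position only
    x = np.zeros(2)
    P = np.eye(2) * 100.0                   # large initial uncertainty
    for k, a in enumerate(accels):
        x = F @ x + B * a                   # IMU-driven prediction
        P = F @ P @ F.T + q * np.eye(2)
        if (k + 1) % gps_every == 0:        # a GPS fix arrives
            z = gps_fixes[(k + 1) // gps_every - 1]
            y = z - H @ x                   # innovation
            S = H @ P @ H.T + r
            K = (P @ H.T) / S               # Kalman gain
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
    return x

# Synthetic run: constant acceleration 0.2 m/s^2, 100 Hz IMU, 1 Hz GPS, 10 s
rng = np.random.default_rng(1)
dt, n = 0.01, 1000
t = np.arange(1, n + 1) * dt
true_p = 0.5 * 0.2 * t**2
accels = 0.2 + rng.normal(0, 0.05, n)            # noisy IMU readings
gps = true_p[99::100] + rng.normal(0, 2.0, 10)   # noisy 1 Hz GPS fixes
x_est = fuse(accels, gps, dt, 100)
print(abs(x_est[0] - true_p[-1]))                # small residual position error
```

Between fixes the filter runs at the IMU rate, which is exactly the "100 updates per GPS interval" property exploited in the integrated system; the real filters in Chapter 3 extend this to full 3D states with bias terms.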

1.6 The author’s contribution

Our goal is to design a low-cost LVN system that can provide reliable, continuous and accurate navigation information under different situations. There are several common problems with existing navigation systems that we aim to solve in this thesis:

- Urban situations where the GPS signal is lost
- Large IMU error drift (in both position and orientation)
- The link-choosing problem of MM at a road junction
- Performance evaluation of MEMS sensors under realistic situations


Accordingly our proposed solution is:

- In open areas, we mainly rely on DGPS for vehicle navigation.
- When the GPS signal is lost or degraded, the navigation system works with the IMU and camera.
- The camera works as a compensatory sensor to support the IMU orientation estimate.
- When the vehicle is turning at a road crossing, the camera tracks the lanes or road edges, so that a reliable rotation estimate can be obtained.
- MEMS sensor error models are built and augmented into the navigation system.
- The navigation performance of single and multiple MEMS sensors is evaluated in realistic driving situations.

The author’s contribution in this thesis can therefore be summarized as:

- MEMS error tests and noise modelling (Chapter 2)
- Loose integration implementation and tight integration simulation (Chapter 3)
- Lane detection and MM improvement (Chapter 4)
- 3D camera motion and robust estimation in real time applications (Chapter 5)
- System augmentation with MEMS sensors and the performance evaluation (Chapter 6)

The content of this thesis follows the sequence shown above, and the overall structure is given in Fig. 1.3. The GPS/INS integration in Chapter 3 is the core of the whole navigation system. Chapters 2 and 6 mainly deal with the combination with MEMS sensors and the performance evaluation. Chapter 4 focuses on lane tracking and solves the link-choosing problem of MM; it is also the foundation for the line-matching algorithm of camera motion estimation in Chapter 5. In turn, the camera rotation estimate supports the IMU orientation in the GPS/INS integration. In short, we design an integrated navigation system with GPS, IMU, camera and digital maps for LVN, which solves the problems listed above while keeping the system cost at a relatively low level.


Fig. 1.3 The thesis structure:

Chapter 1: Introduction
Chapter 2: MEMS and error modelling
Chapter 3: GPS/IMU integrated navigation
Chapter 4: Vision aided map matching
Chapter 5: Camera motion estimate from images
Chapter 6: MEMS sensors and performance evaluation
Chapter 7: Conclusions and proposals on further research


Chapter 2

MEMS and error modelling

Since the gyroscopic behaviours of precession and nutation are well understood, gyroscopes can be used to construct gyrocompasses, which replace magnetic compasses on various vehicles, assist in chassis stability, or serve as part of an INS. An IMU is composed of two parts, i.e. three single-degree-of-freedom gyros and three mutually orthogonal accelerometers based on Newton's second law.

2.1 MEMS inertial sensors

For a long time, the IMUs used in navigation systems were heavy and expensive. Gradually, the Micro Electrical Mechanical Sensor (MEMS, Fig. 2.1), with its great advantages of low price and small volume, has become widely used in GPS/IMU integrated systems. Although the accuracy of MEMS has improved rapidly over the latest years, as shown in Fig. 2.2, stand-alone MEMS navigation during a GPS signal blockage can still maintain a reasonable error only for a very short time, due to quickly growing sensor errors.

Fig. 2.1 The Crista IMU produced by Cloud Cap Technology (from Brown and Lu, 2004)


Fig. 2.2 The development of MEMS INS (from Schmidt, 2010)

2.1.1 Basic principles of MEMS inertial sensors

MEMS inertial sensors are a very versatile group of sensors with applications in many areas. They measure either linear acceleration or angular motion along one or several axes. In this section we give a very brief introduction to the basic principles of MEMS gyros and accelerometers, leaving aside MEMS fabrication technologies such as photolithography, etching and doping (Judy, 2001), which are outside the scope of this thesis.

Gyroscope: Almost all MEMS angular rate sensors use vibrating elements to sense rotation. The velocity of the vibrating element, combined with a rotation, results in a Coriolis acceleration in a direction perpendicular to the direction of vibration. As a result, the vibrating element starts to vibrate along a curved path; this behaviour, caused by the Coriolis acceleration and seen only in a rotating reference frame, is called the Coriolis effect (Fig. 2.3, left). The vector formula for the magnitude and direction of the Coriolis acceleration is given by

$$\mathbf{a}_{Coriolis} = -2\,\boldsymbol{\Omega} \times \mathbf{v} \qquad (2.1)$$

where Ω is the angular velocity vector, directed along the axis of rotation of the rotating reference frame, and v is the velocity vector of the object, directed radially from the centre of rotation.
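As a quick numerical illustration of Eq. (2.1), the following sketch evaluates the Coriolis acceleration for made-up values of Ω and v (not taken from any sensor):

```python
import numpy as np

# Hypothetical example values: a platform rotating about the z-axis at
# 0.5 rad/s, with an object moving radially outward at 2 m/s along x.
omega = np.array([0.0, 0.0, 0.5])   # angular velocity vector [rad/s]
v = np.array([2.0, 0.0, 0.0])       # velocity in the rotating frame [m/s]

# Coriolis acceleration, Eq. (2.1): a_C = -2 * (omega x v)
a_coriolis = -2.0 * np.cross(omega, v)
print(a_coriolis)   # perpendicular to v, magnitude 2*|omega|*|v|
```

The result is perpendicular to both the rotation axis and the velocity, which is exactly the effect the vibrating element of a MEMS gyro exploits.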


Fig. 2.3 Left: Illustration of the Coriolis effect: a ball moving from the centre to the edge of a rotating disk moves along a curved trajectory on the disk. Right: A tuning-fork angular rate sensor based on the Coriolis effect (from Elwenspoek and Wiegerink, 2001, Section 7.2)

Based on the Coriolis effect, different gyroscope structures are adopted for sensing the frame rotation. One well-known structure used in angular rate sensors is the tuning-fork structure depicted in Fig. 2.3 (right). A rotation of the stem of the tuning fork results in a Coriolis acceleration of the tines in a direction perpendicular to the drive direction. The Coriolis force can be detected from the bending of the tines or from the torsional vibration of the tuning-fork stem. Other gyro structures are also available, e.g. vibrating ring gyroscopes (Putty and Najafi, 1994) and tunneling gyroscopes (Kubena et al., 1999).

Accelerometer: Basically an accelerometer consists of a proof mass connected by some kind of suspension to a frame, together with a mechanical damper, as shown in Fig. 2.4.

Fig. 2.4 Schematic structure of an accelerometer (from Beeby et al., 2004, p.175)

The transfer function of the system in terms of the natural frequency $\omega_n$ and the quality factor $Q$ (Beeby et al., 2004, pp. 175-176) is given by the Laplace transform

$$\frac{x(s)}{a(s)} = \frac{1}{s^2 + \dfrac{\omega_n}{Q}\,s + \omega_n^2} \qquad (2.2a)$$

with

$$Q = \frac{\omega_n m}{b} = \frac{\sqrt{km}}{b}, \qquad \omega_n = \sqrt{\frac{k}{m}} \qquad (2.2b)$$

where x is the displacement of the proof mass from its rest position with respect to a reference frame, a is the acceleration to be measured, b is the damping coefficient, m is the mass of the proof mass, k is the mechanical spring constant of the suspension system, and s is the Laplace variable. Based on this principle, many MEMS accelerometer prototypes have been reported during the last two decades, including piezoresistive, capacitive, piezoelectric, tunnelling and resonant accelerometers. A detailed introduction can be found in Beeby et al. (2004, pp. 181-188).

2.1.2 MEMS error sources

The systematic errors of a MEMS sensor should be estimated before use, which means that the bias and scale-factor errors are estimated during initialization. Therefore, when applying MEMS sensors one mainly focuses on the stochastic errors. General stochastic error sources in inertial sensors include (IEEE STD 647, 2006): quantization noise, random walk, bias instability, rate random walk and rate ramp. In the next section we apply different methods to analyse the stochastic sensor errors, i.e. autoregressive (AR) modelling, the Gauss-Markov (GM) process, the Power Spectral Density (PSD) and the Allan Variance (AV). Then tests on a MEMS-based IMU are carried out with these methods. The resulting values can be used further in the Kalman filter for better navigation accuracy, and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.

2.2 Methodology of MEMS error modelling

2.2.1 Autoregressive model (AR model)

An m-variate p-th order autoregressive (AR(p)) model for a stationary time series of state vectors $\boldsymbol{\upsilon}_v \in \mathbb{R}^m$, observed at equally spaced instants v, is defined by

$$\boldsymbol{\upsilon}_v = \mathbf{w} + \sum_{l=1}^{p} \mathbf{A}_l \boldsymbol{\upsilon}_{v-l} + \boldsymbol{\varepsilon}_v \qquad (2.3)$$

or

$$\boldsymbol{\upsilon}_v = \mathbf{B}\mathbf{u}_v + \boldsymbol{\varepsilon}_v \qquad (2.4)$$

where $\boldsymbol{\varepsilon}_v$ are uncorrelated random vectors with zero mean and covariance matrix C, $\mathbf{A}_l$ are the coefficient matrices of the AR model, w is a vector of intercept terms allowing for a nonzero mean of the time series, $\mathbf{B} = (\mathbf{w}\;\mathbf{A}_1 \dots \mathbf{A}_p)$ and $\mathbf{u}_v = (1\;\boldsymbol{\upsilon}_{v-1}^T \dots \boldsymbol{\upsilon}_{v-p}^T)^T$. From Eq. (2.3) we see that the current sample can be estimated from the previous p samples. The first-order single-variate AR model in discrete time can simply be given as

$$x_k = \alpha_1 x_{k-1} + w_k \qquad (2.5)$$

where x is the random variable, subscripts k and k−1 are discrete time indices, $\alpha_1$ is the AR model coefficient and $w_k$ is zero-mean Gaussian white noise with variance $\sigma_w^2$.

To use the AR model we should first determine the order and then the values of the coefficients. There are several methods to determine the order of the AR model. Here Schwarz's Bayesian Criterion (SBC, Schwarz 1978) is used, which states that the AR order p should minimize the criterion

$$SBC(p) = \frac{l_p}{m} - \left(1 - \frac{n_p}{N}\right) \log N \;\rightarrow\; \min \qquad (2.6)$$

where

$$l_p = \log\left(\det \boldsymbol{\Delta}_p\right) \qquad (2.7)$$

$\boldsymbol{\Delta}_p$ is the residual cross-product matrix (see Appendix C), N is the number of samples, and $n_p = mp + 1$ is the dimension of the predictor $\mathbf{u}_v$ (Neumaier and Schneider, 2001).
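The order-selection procedure can be sketched in a few lines. The following snippet (simulated data with illustrative AR(2) coefficients, not the thesis measurements) fits AR(p) models by least squares as in Eq. (2.4) and evaluates the univariate (m = 1) SBC of Eq. (2.6):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a univariate AR(2) process as a stand-in for sensor noise
# (coefficients 0.6 and -0.3 are illustrative, not ISIS-IMU values).
N = 4000
x = np.zeros(N)
for k in range(2, N):
    x[k] = 0.6 * x[k - 1] - 0.3 * x[k - 2] + rng.normal(scale=0.1)

def fit_ar_ls(x, p):
    """Least-squares AR(p) fit, Eq. (2.4): regress x_v on (1, x_{v-1..v-p})."""
    n = len(x)
    U = np.column_stack([np.ones(n - p)] +
                        [x[p - l:n - l] for l in range(1, p + 1)])
    y = x[p:]
    B, *_ = np.linalg.lstsq(U, y, rcond=None)
    return B, y - U @ B

def sbc(x, p):
    """Schwarz's Bayesian Criterion, Eq. (2.6), univariate case (m = 1)."""
    _, resid = fit_ar_ls(x, p)
    neff = len(resid)
    l_p = np.log(resid @ resid)   # log det of the 1x1 residual cross-product
    n_p = p + 1                   # n_p = m*p + 1
    return l_p - (1 - n_p / neff) * np.log(neff)

orders = range(1, 6)
scores = [sbc(x, p) for p in orders]
best = list(orders)[int(np.argmin(scores))]
print("SBC-selected order:", best)
```

For this strongly second-order process the SBC drops sharply from p = 1 to p = 2 and then flattens, mirroring how the order was chosen for the real sensor data below.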


The AR model parameters can then be estimated by different methods, e.g. least-squares fitting, the Yule-Walker equations (Eshel, 2010) or Burg's method (Bos et al., 2002). Here a stepwise least-squares algorithm (Schneider and Neumaier, 2001) is used to determine the AR coefficients after the order has been fixed. By means of the moment matrices

$$\mathbf{U} = \sum_{v=1}^{N} \mathbf{u}_v \mathbf{u}_v^T, \quad \mathbf{V} = \sum_{v=1}^{N} \boldsymbol{\upsilon}_v \boldsymbol{\upsilon}_v^T, \quad \mathbf{W} = \sum_{v=1}^{N} \boldsymbol{\upsilon}_v \mathbf{u}_v^T \qquad (2.8)$$

the least-squares estimate of the parameter matrix B and the residual covariance matrix of this estimate can be written as (Neumaier and Schneider, 2001)

$$\hat{\mathbf{B}} = \mathbf{W}\mathbf{U}^{-1} \qquad (2.9)$$

$$\hat{\mathbf{C}} = \frac{1}{N - n_p} \left( \mathbf{V} - \mathbf{W}\mathbf{U}^{-1}\mathbf{W}^T \right) \qquad (2.10)$$

where $n_p = mp + 1$ is the dimension of the predictor $\mathbf{u}_v$. The variance of the model estimate can be given as

$$\hat{\sigma}_w^2 = \frac{1}{n} \sum_{i=1}^{n} \left( x_i^d - \hat{x}_i \right)^2 \qquad (2.11)$$

where $x_i^d$ is the known value of the process (desired output), and $\hat{x}_i$ is the corresponding estimated output.

2.2.2 Gauss-Markov process (GM process)

Due to its simplicity a first-order GM (GM1) process is the most frequently used model in Kalman filtering of an integrated system. The GM process is defined by the exponential Auto Correlation Function (ACF):

$$R_x(\tau) = \sigma^2 e^{-\beta |\tau|} \qquad (2.12)$$

where $\sigma^2$ is the noise variance, $1/\beta$ is the correlation time and $\tau$ is the time interval.

The ACF of Eq. (2.12) is shown in Fig. 2.5: there is a peak at zero, with two symmetric descending slopes on either side whose gradient becomes steeper as $\beta$ increases, and the gradient is discontinuous at zero. The first-order GM process in discrete time and the variance of its driving noise are given by

$$x_k = e^{-\beta \Delta t}\, x_{k-1} + w_{k-1} \qquad (2.13)$$

and

$$\sigma_{w_k}^2 = \sigma_{x_k}^2 \left( 1 - e^{-2\beta \Delta t} \right) \qquad (2.14)$$

where $\Delta t$ is the discrete time sampling interval.

Fig. 2.5 ACF of GM process

The GM1 process has been widely used in INS due to its bounded uncertainty, which is suitable for modelling slowly varying sensor errors, i.e. biases and scale factors (El-Diasty and Pagiatakis, 2008). With a fixed sampling rate, the GM1 process is equal to a first-order AR model. Comparing Eqs. (2.5) and (2.13) we obtain

$$\alpha_1 = e^{-\beta \Delta t} \qquad (2.15)$$

and therefore the relation between the two models can be shown as

$$\beta = -\frac{\ln \alpha_1}{\Delta t} \qquad (2.16)$$
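A minimal simulation sketch of the discrete GM1 recursion of Eqs. (2.13)-(2.15), with illustrative values for β, σ and Δt (not measured values), showing that the empirical ACF follows the exponential model of Eq. (2.12):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative GM1 parameters: correlation time 1/beta = 50 s,
# process std sigma = 0.2, sampling interval dt = 1 s.
beta, sigma, dt, N = 1.0 / 50.0, 0.2, 1.0, 200_000

phi = np.exp(-beta * dt)                          # AR coefficient, Eq. (2.15)
q = sigma**2 * (1.0 - np.exp(-2.0 * beta * dt))   # driving-noise variance, Eq. (2.14)

x = np.zeros(N)
for k in range(1, N):                              # Eq. (2.13)
    x[k] = phi * x[k - 1] + rng.normal(scale=np.sqrt(q))

# Empirical ACF at a few lags vs. the exponential model of Eq. (2.12)
acf_emp = {}
for lag in (0, 25, 50, 100):
    acf_emp[lag] = np.mean(x[: N - lag] * x[lag:])
    model = sigma**2 * np.exp(-beta * lag * dt)
    print(f"lag {lag:4d}: empirical {acf_emp[lag]:.4f}  model {model:.4f}")
```

The empirical values decay from about σ² = 0.04 with the expected exponential shape, illustrating the AR(1)/GM1 equivalence of Eq. (2.15).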

However, the principal difference between the AR model and the GM process should be stated: the AR model does not consider the sampling interval, which may make it a sub-optimal estimate. Moreover, a GM-only model needs a long data set, for example 200 times the expected correlation time for 10% uncertainty of the parameters (Nassar, 2005).

2.2.3 Power Spectral Density

The Power Spectral Density (PSD) is a commonly used and powerful tool for analysing a signal or time series. In statistical signal processing, the PSD describes the distribution of signal power over the frequency domain. For a signal f(t) with finite average power, the mathematical definition of the PSD is

$$S(\omega) = \lim_{T \to \infty} \frac{1}{2T} \left| \int_{-T}^{T} f(t)\, e^{-i\omega t}\, dt \right|^2 = \lim_{T \to \infty} \frac{F(\omega)\, F^{*}(\omega)}{2T} \qquad (2.17)$$

where $\omega$ is the angular frequency, and $F(\omega)$ and $F^{*}(\omega)$ are the Fourier transform of f(t) and its complex conjugate, respectively. A key point is that the two-sided PSD $S(\omega)$ and the autocorrelation function $K(\tau)$ are a Fourier transform pair if the signal can be treated as a wide-sense stationary random process (IEEE Std. 952, 1997):

$$S(\omega) = \int_{-\infty}^{\infty} K(\tau)\, e^{-j\omega\tau}\, d\tau \qquad (2.18)$$

and

$$K(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S(\omega)\, e^{j\omega\tau}\, d\omega \qquad (2.19)$$

This relation also provides an approach to compute the PSD. The characteristic PSD slopes of typical sensor errors are shown in Fig. 2.6, where the actual units and frequency range are hypothetical. With real data, gradual transitions would exist between the different PSD slopes (IEEE Std 1293-1998), rather than the sharp transitions in Fig. 2.6, and the slopes might differ from the −2, −1, 0, and +2 values.
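As a rough sketch of PSD estimation, the following computes a one-sided periodogram of white noise plus a random-walk component and smooths it in log-spaced bands; the parameter values and the band layout are illustrative choices, not the IEEE procedure verbatim:

```python
import numpy as np

rng = np.random.default_rng(2)

# White noise plus a random-walk component (cumulative sum of white
# noise): their PSDs should show slopes 0 and -2, respectively.
fs, N = 100.0, 2**18
white = rng.normal(scale=0.5, size=N)
rwalk = np.cumsum(rng.normal(scale=0.01, size=N))
signal = white + rwalk

# One-sided periodogram estimate of the PSD
F = np.fft.rfft(signal)
freq = np.fft.rfftfreq(N, d=1.0 / fs)
psd = (np.abs(F) ** 2) * 2.0 / (fs * N)   # two-sided -> one-sided scaling

# Frequency averaging: smooth the raw periodogram in log-spaced bands
# to suppress its large variance before reading off the slopes.
edges = np.logspace(np.log10(freq[1]), np.log10(freq[-1]), 40)
centers, smoothed = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (freq >= lo) & (freq < hi)
    if sel.any():
        centers.append(freq[sel].mean())
        smoothed.append(psd[sel].mean())
print(len(centers), "averaged PSD points")
```

At low frequencies the −2 slope of the random walk dominates, while at high frequencies the smoothed estimate settles at the white-noise level 2σ²/fs, the same behaviour exploited when reading Fig. 2.6.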


Fig. 2.6 Hypothetical Gyro in Single-sided PSD Form (from IEEE Std952-1997)

2.2.4 Allan Variance

The Allan Variance (AV) is a method of representing the root mean square (RMS) random-drift error as a function of averaging time (Allan, 1966). As a time-domain analysis technique, it is an accepted IEEE standard for gyro specifications (IEEE STD 647, 2006). If there are N data points with sampling interval $\Delta t$, groups of n consecutive points with n < (N−1)/2 can be created; each group is called a cluster, spanning the interval $\tau = n\Delta t$. Assume that the instantaneous sensor sample is the angular velocity $\omega(t)$; its integral is the angle

$$\theta(t) = \int^{t} \omega(t')\, dt' \qquad (2.20)$$

The average angular velocity in the k-th cluster, i.e. between times $t_k$ and $t_k + \tau$, is

$$\bar{\omega}_k(\tau) = \frac{1}{\tau} \int_{t_k}^{t_k + \tau} \omega(t)\, dt \qquad (2.21)$$

and the AV is defined as


$$\sigma^2(\tau) = \frac{1}{2} \left\langle \left( \bar{\omega}_{k+n} - \bar{\omega}_k \right)^2 \right\rangle = \frac{1}{2\tau^2} \left\langle \left( \theta_{k+2n} - 2\theta_{k+n} + \theta_k \right)^2 \right\rangle \qquad (2.22a)$$

where the symbol ⟨ ⟩ denotes the infinite time average. In practice, the AV can be estimated with a finite number of samples by the so-called overlapped AV:

$$\sigma^2(\tau) = \frac{1}{2\tau^2 (N - 2n)} \sum_{k=1}^{N-2n} \left( \theta_{k+2n} - 2\theta_{k+n} + \theta_k \right)^2 \qquad (2.22b)$$

The principle of the overlapped AV is shown in Fig. 2.7; this estimator has been accepted as the preferred AV estimator in many standards.

Fig. 2.7 Schematic algorithm of the overlapped AV (from Guerrier, 2008)

There is a unique relation between the AV and the PSD of a stationary process (IEEE Std. 952-1997):

$$\sigma^2(\tau) = 4 \int_0^{\infty} S(f)\, \frac{\sin^4(\pi f \tau)}{(\pi f \tau)^2}\, df \qquad (2.23)$$
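The overlapped estimator of Eq. (2.22b) is short to implement. Below is a sketch for a simulated angle-random-walk gyro (pure white rate noise with an illustrative level, not an ISIS-IMU value); the resulting Allan deviation should fall off as $\tau^{-1/2}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated gyro: white rate noise only, i.e. pure angle random walk.
fs = 100.0                 # sample rate [Hz]
dt = 1.0 / fs
nsamp = 400_000
rate = rng.normal(scale=0.3, size=nsamp)   # angular rate [deg/s]
theta = np.cumsum(rate) * dt               # integrated angle, Eq. (2.20)

def overlapped_avar(theta, dt, n):
    """Overlapped Allan variance at cluster length tau = n*dt, Eq. (2.22b)."""
    d = theta[2 * n:] - 2.0 * theta[n:-n] + theta[:-2 * n]
    tau = n * dt
    return np.sum(d * d) / (2.0 * tau**2 * len(d))

adev = {n: np.sqrt(overlapped_avar(theta, dt, n)) for n in (1, 10, 100, 1000)}
for n, ad in adev.items():
    print(f"tau = {n * dt:7.2f} s  AD = {ad:.5f} deg/s")
```

For white rate noise of standard deviation σ, the theory gives AD(τ) = σ/√n, so each decade in τ reduces the Allan deviation by √10, i.e. the −1/2 slope read from plots such as Fig. 2.8.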

Table 2.1 shows the relationships between the PSD and the AV according to Eq. (2.23); the detailed derivations of these noise components are given in Tehrani (1983). Table 2.2 shows the relation between curve slopes and coefficient values in the log-log plot of the AV; the detailed derivation can be found in Hou (2004). A typical AV plot is shown in Fig. 2.8.


Table 2.1 Summary of PSD and AV (from IEEE Std. 647-2006)

| Error type | PSD $S(f)$ | AV $\sigma^2(\tau)$ | Comments |
|---|---|---|---|
| Quantisation noise | $(2\pi f)^2 T_s Q^2$ | $3Q^2/\tau^2$ | $Q$: quantisation noise coefficient |
| Random walk | $N^2$ | $N^2/\tau$ | $N$: random walk coefficient |
| Bias instability | $\dfrac{B^2}{2\pi}\,\dfrac{1}{f}$, $f \le f_0$ | $\dfrac{2B^2}{\pi}\left[\ln 2 - \dfrac{\sin^3 x}{2x^2}\left(\sin x + 4x \cos x\right) + C_i(2x) - C_i(4x)\right]$, $x = \pi f_0 \tau$ | $B$: bias instability coefficient; $C_i$: cosine integral function |
| Rate random walk | $\left(\dfrac{K}{2\pi}\right)^2 \dfrac{1}{f^2}$ | $\dfrac{K^2 \tau}{3}$ | $K$: rate random walk coefficient |
| Rate ramp | $\dfrac{R^2}{(2\pi f)^3}$ | $\dfrac{R^2 \tau^2}{2}$ | $R$: rate ramp coefficient |
| Correlated noise | $\dfrac{(q_c T_c)^2}{1 + (2\pi f T_c)^2}$ | $\dfrac{(q_c T_c)^2}{\tau}$ for $\tau \gg T_c$; $\dfrac{q_c^2 \tau}{3}$ for $\tau \ll T_c$ | $q_c$: noise amplitude; $T_c$: correlation time |
| Sinusoidal noise | $\dfrac{\Omega_0^2}{2}\left[\delta(f - f_0) + \delta(f + f_0)\right]$ | $\Omega_0^2 \left(\dfrac{\sin^2(\pi f_0 \tau)}{\pi f_0 \tau}\right)^2$ | $\Omega_0$: amplitude; $f_0$: frequency; $\delta(x)$: Dirac function |


The AV of the total stochastic process can be assumed to be the sum of all error terms, since they are independent and dominate in different time regions. The total AV of the system can therefore be expressed as:

$$\sigma^2_{tot}(\tau) = \sigma^2_{quant} + \sigma^2_{RW} + \sigma^2_{Bias} + \sigma^2_{RRW} + \dots \qquad (2.24)$$

where the subscripts tot, quant, RW, Bias and RRW denote total noise, quantisation noise, random walk, bias instability, and rate random walk, respectively, as shown in Table 2.2.

Table 2.2 Summary of characteristic curve slopes and coefficient values (from IEEE Std. 647-2006)

| Error type | $\log \sigma(\tau)$ vs. $\log \tau$ | Curve slope | Coefficient value |
|---|---|---|---|
| Quantisation noise | $\log \sigma = -\log \tau + \log(\sqrt{3}\,Q)$ | $-1$ | $Q = \sigma(\sqrt{3})$ |
| Random walk | $\log \sigma = -\tfrac{1}{2}\log \tau + \log N$ | $-1/2$ | $N = \sigma(1)$ |
| Bias instability | $\log \sigma \approx \log\!\left(B\sqrt{2\ln 2/\pi}\right) = \log(0.664\,B)$ for $\tau \gg 1/f_0$ | $0$ | $B = \sigma_0/0.664$ (flat region) |
| Rate random walk | $\log \sigma = \tfrac{1}{2}\log \tau + \log(K/\sqrt{3})$ | $+1/2$ | $K = \sigma(3)$ |
| Rate ramp | $\log \sigma = \log \tau + \log(R/\sqrt{2})$ | $+1$ | $R = \sigma(\sqrt{2})$ |
| Correlated noise | $\log \sigma = -\tfrac{1}{2}\log \tau + \log(q_c T_c)$ for $\tau \gg T_c$; $\log \sigma = \tfrac{1}{2}\log \tau + \log(q_c/\sqrt{3})$ for $\tau \ll T_c$ | $-1/2$; $+1/2$ | $q_c T_c = \sigma(1)$; $q_c = \sigma(3)$ |
| Sinusoidal noise | oscillatory, with $-1$ envelope | $-1$ (envelope) | $\Omega_0$ from the peak amplitude |


Fig. 2.8 Allan variance analysis noise term results (from IEEE Std. 952, 1997)

From the standard AV plot (i.e. the Allan Deviation, AD), it is straightforward to identify the various random processes present in the raw data and to obtain the noise coefficients via Table 2.2. For example, it is well known that random walk characterizes the typical noise in low-cost inertial sensors. The angular random walk process can be identified by a straight line of slope −1/2, with its coefficient read at T = 1 h, as shown in Fig. 2.8. Since the parameters of the different noise components are obtained from the AD plot with the relations in Table 2.2, AD plots will be presented instead of AV plots in the test part. The rate-noise PSD of the angular random walk is represented by (IEEE Std. 647, 1998):

$$S_x(f) = ARW^2 \qquad (2.25)$$

where ARW is the angular random walk coefficient from Fig. 2.8. Substituting Eq. (2.25) into Eq. (2.23), we get

$$\sigma^2(T) = \frac{ARW^2}{T} \qquad (2.26)$$

In practice, the AV is based on a finite number of independent clusters that can be formed from any finite length of data. The AV of any noise term is estimated using the total number of clusters of a given length that can be created, and the confidence of the estimation improves as the number of independent clusters increases. A straightforward calculation shows that the percentage error is given by (Papoulis, 1991)

$$\sigma(\delta) = \frac{1}{\sqrt{2\left(N/n - 1\right)}} \times 100\% \qquad (2.27)$$

where N is the total number of data points, and n is the number of points in a cluster.
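As a worked example of Eq. (2.27), using a hypothetical record length chosen only for illustration (2 h at 100 Hz):

```python
import numpy as np

# Percentage error of the AV estimate, Eq. (2.27), for a hypothetical
# record of N = 720,000 samples and increasing cluster sizes n.
N = 720_000
for n in (100, 1_000, 10_000, 100_000):
    err = 1.0 / np.sqrt(2.0 * (N / n - 1.0)) * 100.0
    print(f"n = {n:7d}:  +/- {err:5.2f} % relative error")
```

The error grows rapidly for long clusters, which is why the long-τ end of an AD plot (and hence bias instability, rate random walk and rate ramp coefficients) is the least reliable part of the curve.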

2.3 Testing results

Our testing platform is based on the ISIS-IMU from Inertial Science Inc. (Inertial Science, 1999), a six-degree-of-freedom inertial measurement unit designed for commercial use. It consists of three RRS75 solid-state rate sensors and three solid-state accelerometers, as shown in Fig. 2.9. The following test results are all based on 2 hours of static data from the ISIS-IMU.

Since there are 3 gyros and 3 accelerometers in the IMU, there are 6p parameters and 6 noise variances in the whole AR model if the model is fixed to p-th order for all gyros and accelerometers. The ARMFIT package (Schneider and Neumaier, 2001) is applied to estimate the AR model and its parameters. In tests with real data, we found that the order of the AR model that minimizes the SBC varies with the length of the data sample as well as with the sensor, while the residual variance matrix changes only little.

Fig.2.9 ISIS-IMU


One way to check whether the estimated order of the AR model is valid is to verify the whiteness of the model residuals. From Fig. 2.10 we see that the residuals of AR(2) for the raw data of the Y-axis accelerometer are nearly white noise (within the 95% confidence interval between the two horizontal lines). Therefore, a second-order AR model (AR(2)) can fulfil the accuracy requirement when applied to the Y-accelerometer. In the same way, we can verify that AR(2) is also suitable for the other gyros and accelerometers.
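The whiteness check can be sketched as follows, using simulated AR(2) data with illustrative coefficients; the 95% band ±1.96/√N is the usual large-sample approximation for the ACF of white noise:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated AR(2) series (illustrative coefficients, not sensor data)
N = 20_000
x = np.zeros(N)
for k in range(2, N):
    x[k] = 0.6 * x[k - 1] - 0.3 * x[k - 2] + rng.normal(scale=0.1)

# Least-squares AR(2) fit: regress x_k on (1, x_{k-1}, x_{k-2})
U = np.column_stack([np.ones(N - 2), x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(U, x[2:], rcond=None)
resid = x[2:] - U @ coef

# Residual ACF at lags 1..25 against the 95% confidence band
r = resid - resid.mean()
acf = np.array([np.sum(r[:len(r) - k] * r[k:]) for k in range(1, 26)]) / np.sum(r * r)
bound = 1.96 / np.sqrt(len(r))
inside = np.mean(np.abs(acf) <= bound)
print(f"{inside:.0%} of the first 25 residual ACF lags inside the 95% band")
```

When the fitted order is adequate, roughly 95% of the residual autocorrelations should fall inside the band, which is the criterion read off Fig. 2.10.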

Fig.2.10 ACF of AR(2) residuals for the Y-Accelerometer

From the collected static data of the IMU we can compute the PSD. A certain amount of noise exists in the plotted curve, causing uncertainty in the PSD value at each frequency. The PSD of the X-axis gyro is shown in Fig. 2.11. The bunching and flicker at the high frequencies of the PSD plot prevent further analysis, so the frequency-averaging technique (IEEE Std 1293-1998) is used to average the points at high frequencies; the result is shown in Fig. 2.12.


Fig.2.11 X-Gyro PSD

Fig.2.12 X-Gyro PSD after frequency averaging


The slopes of the PSD curve include −2, 0, and +2, which indicate that the X-axis gyro data contain rate random walk, random walk, and quantization noise, respectively (see Fig. 2.6). Obtaining the noise-term parameters from the PSD plot is complex. According to IEEE Std 952-1997, the random walk parameter is given by

$$N\,(\mathrm{deg}/\sqrt{\mathrm{h}}) = \frac{1}{60}\sqrt{S\,\left[(\mathrm{deg/h})^2/\mathrm{Hz}\right]} \qquad (2.28)$$

Applying this equation, we estimate the random walk parameter to be about 0.024 deg/√h. According to the specification, the random walk parameter should be less than 0.5 deg/√h; the reason the experimental result is much smaller may be that this experiment was carried out under static conditions. In dynamic situations the value may go up. As discussed before, the noise parameters are obtained from the AD plots of the sensors. The AD plots for the gyros and accelerometers are shown in Fig. 2.13 and Fig. 2.15, respectively. The standard error of each AD value can be evaluated with Eq. (2.27); the AD and its standard errors are shown in Fig. 2.14. With these figures, we can identify the different error coefficients by means of Table 2.2, as listed in Table 2.3 and Table 2.4 for the gyros and accelerometers, respectively. The reason for the large quantization noise of accelerometer Z seen in Table 2.4 may be that the specific force sensed along the vertical direction is overwhelmingly influenced by the gravity acceleration in the static tests.


Fig.2.13 AD plot for 3 gyros

Fig.2.14 AD plot for Y-Gyro with percentage errors (I bar)


Table 2.3 Error coefficients for gyros

| | Gyro X | Gyro Y | Gyro Z |
|---|---|---|---|
| Quantisation noise (deg) | 2.5×10⁻³ | 7.2×10⁻³ | 9.1×10⁻⁴ |
| Random walk (deg/√h) | 4.8×10⁻² | N/A | 2.6×10⁻² |
| Bias instability (deg/h) | 1.4 | N/A | 1.2 |
| Rate random walk (deg/h³ᐟ²) | 5.2 | N/A | 8.1 |
| Rate ramp (deg/h²) | N/A | 72 | N/A |

Table 2.4 Error coefficients for accelerometers

| | Accelerometer X | Accelerometer Y | Accelerometer Z |
|---|---|---|---|
| Quantisation noise (m/s) | 4.5×10⁻³ | 2.1×10⁻³ | 4.8×10⁻¹ |
| Bias instability (m/s²) | 1.4 | 0.9 | N/A |
| Rate ramp (m/s³) | N/A | 3.2 | N/A |

Fig.2.15 AD plot for 3 accelerometers


2.4 Discussion and conclusion on different error models

Using different models, we analysed the sensor errors and obtained their parameters. The AR model can be used to estimate the next consecutive errors if we do not care much about the individual error components and treat them collectively as stochastic errors. Although the optimal AR order changes with the sensor, a second-order AR model can be accepted as a common model for all sensors of the IMU. Since the first-order GM process can be treated as a first-order AR model when the sampling interval is fixed, we do not analyse this most commonly used model separately.

The PSD is also a common tool for analysing both periodic and non-periodic signals. Frequency averaging is needed to obtain the exact slopes of the curve, but the parameters are difficult to compute directly, and the frequency-domain representation may be somewhat difficult for the non-specialist to interpret.

AV analysis has been widely used for MEMS stochastic modelling due to its simplicity and efficiency. It is a great advantage that the noise terms dominate in different regions of the AV plot, so that the individual noise coefficients can be read off region by region (see Fig. 2.8), and there is also a direct relation between the AV and the PSD (see Eq. 2.23). However, a long static data record may be required, even dozens of hours, as the long-time slope is sometimes not fully determined within several hours of data.

The test results show that the different methods give similar values for the parameters of the stochastic error models. Although different methods may give somewhat different noise coefficients, the same noise sources can be verified for a specific sensor, and it is likely that with longer data sets and many iterations a set of more accurate coefficients could be estimated. After estimating the coefficients of the main stochastic errors, a better error model can be applied in the GPS/INS integrated system, which helps the Doppler frequency estimate and bounds the error drift during GPS outages.

2.5 Equivalent coloured noise modelling

The first five noise components in Table 2.2 will be modelled in our system equations. The white noise can be treated directly as the driving noise of a Kalman filter, without further treatment. The quantization noise arises from the sampling mechanism of the inertial sensors; according to Savage (2002) and Han and Wang (2011), it can be converted into equivalent white noise by means of the INS error dynamics, so a special analytical conversion is required for this component.


For the other three coloured noise components, an equivalent representation of the multiple coloured noises can be derived (Han, 2010; Han and Wang, 2011).

As already mentioned, there are two main kinds of errors in inertial sensors: systematic errors and stochastic errors. The systematic errors generally contain misalignment errors, scale factor errors, turn-on bias, temperature-induced errors, etc. Different types of systematic errors are usually eliminated or reduced during the initialization stage before the actual motion, and will therefore not be discussed in this chapter. Stochastic error sources are usually specified by the items in Table 2.2. Strapdown inertial sensor errors can be written in the following forms (Savage 2000, pp. 12-110, 12-111):

$$\delta\boldsymbol{\omega}^b = \mathbf{K}_{scal/mis}\,\boldsymbol{\omega}^b + \boldsymbol{\omega}_{bias} + \boldsymbol{\omega}_{white/quant} + \boldsymbol{\omega}_{col} \qquad (2.29)$$

$$\delta\mathbf{a}^b_{sf} = \mathbf{L}_{scal/mis}\,\mathbf{a}^b_{sf} + \mathbf{a}_{bias} + \mathbf{a}_{white/quant} + \mathbf{a}_{col} \qquad (2.30)$$

where $\delta\boldsymbol{\omega}^b$ and $\delta\mathbf{a}^b_{sf}$ are the errors in the inertial sensor output, $\boldsymbol{\omega}^b$ and $\mathbf{a}^b_{sf}$ are the inertial sensor outputs (angular velocity and specific force, which will be introduced in Section 3.2.2), and the terms with subscripts scal/mis, bias, white/quant and col are the error components due to scale factor/misalignment, bias, white plus quantization noise, and coloured noise, respectively. Please note that we have added the coloured noise terms, which are not included in the original reference. We assume that the systematic errors are compensated before the inertial sensors are adopted in practical applications, i.e. we only deal with the last three error terms in the dynamic stage.

The quantization error is caused by the sampling mechanism and quantization operation of the inertial sensors. The quantization error components of strapdown inertial sensors appear as random errors on the direct sensor output, rather than on the integrated outputs. For a MEMS gyro the angular rate is sampled and output with quantization error, and in the same way quantization noise is produced in the accelerometer outputs. According to Savage (2000, p. 12-113), the delta-angle quantization noise is white noise, and the same conclusion holds for the accelerometers. Therefore the quantization error of MEMS sensors can be treated as white noise; this error will be analysed further, and an equivalent white noise for the IMU model will be derived in Chapter 6. In the following, we deal with the three remaining coloured noise components (random walk, bias instability and rate ramp). There are several different approaches to model coloured noise, namely statistical modelling, difference-equation modelling and differential-equation modelling.


The statistical modelling usually adopts a Gauss-Markov (GM) process or a random walk process to describe the coloured noise. However, neither is fully suitable for inertial sensors: the GM process is a stationary process, which can only approximate the coloured noise in inertial sensors, and it is not practical to determine the autocorrelation function (ACF) accurately from experimental data, due to the long observation time needed.

Difference-equation modelling refers to autoregressive moving-average (ARMA) models or the simplified autoregressive (AR) models. The orders and parameters of ARMA models are very sensitive to the model choice and depend on prior experimental data. The ARMA model is a correlation method, which is not well suited to dealing with odd power-law processes, higher-order processes or wide dynamic ranges (IEEE Std. 952, 1997).

Differential-equation modelling is widely applied in integrated navigation systems. The basic principle is that the inertial sensor stochastic errors are treated as a combination of white noise and coloured noise. However, this model usually considers just one simple coloured noise component, which is not realistic for inertial sensors.

From Appendix B we get the differential equations for the three coloured noise models (i.e. Eqs. (B.5), (B.11) and (B.20)):

$$\dot{d}_{rw} = K u_1(t) \qquad (2.31)$$

$$\dot{d}_{bi} + \beta d_{bi} = \sqrt{2\beta}\, B\, u_2(t) \qquad (2.32)$$

$$\ddot{d}_{rr} = R u_3(t) \qquad (2.33)$$

where d is the coloured noise, with subscripts rw, bi and rr denoting random walk, bias instability and rate ramp, respectively; $u_i(t)$, i = 1, 2, 3, are independent white noises; and K, B, R and $\beta$ are the parameters derived in Appendix B.
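A discrete-time sketch of Eqs. (2.31)-(2.33), integrating white noise to produce the three coloured noise components; all parameter values are illustrative, and the bias-instability equation is realized here as a first-order Gauss-Markov recursion:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative parameters only (not IMU results)
dt, N = 0.01, 100_000
K, B, R, beta = 0.05, 0.1, 0.002, 1.0 / 30.0

# Unit-PSD white noise samples: std 1/sqrt(dt) per step
u1, u2, u3 = (rng.normal(size=N) / np.sqrt(dt) for _ in range(3))

d_rw = np.cumsum(K * u1 * dt)                    # d_rw' = K u1        (2.31)
d_bi = np.zeros(N)                               # d_bi' = -beta d_bi + sqrt(2 beta) B u2  (2.32)
for k in range(1, N):
    d_bi[k] = d_bi[k - 1] + (-beta * d_bi[k - 1]
                             + np.sqrt(2.0 * beta) * B * u2[k]) * dt
d_rr = np.cumsum(np.cumsum(R * u3 * dt)) * dt    # d_rr'' = R u3       (2.33)

z = d_rw + d_bi + d_rr                           # combined coloured noise, cf. Eq. (2.41)
print("std of components:", d_rw.std(), d_bi.std(), d_rr.std())
```

The Gauss-Markov component settles near its stationary standard deviation B, while the integrated components drift without bound, which is exactly why a combined higher-order model is derived below.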

To combine these three noise models into one, we first present a theorem (Han, 2010). Assume that x(t) and y(t) are two coloured noises, each of which can be described by the following differential equations


$$a_0 x(t) + a_1 \dot{x}(t) + a_2 \ddot{x}(t) + \dots + a_m x^{(m)}(t) = b_0 u(t) + b_1 \dot{u}(t) + b_2 \ddot{u}(t) + \dots + b_n u^{(n)}(t) \qquad (2.34)$$

and

$$c_0 y(t) + c_1 \dot{y}(t) + c_2 \ddot{y}(t) + \dots + c_p y^{(p)}(t) = d_0 v(t) + d_1 \dot{v}(t) + d_2 \ddot{v}(t) + \dots + d_q v^{(q)}(t) \qquad (2.35)$$

where u(t) and v(t) are two independent white noises.

Define a combined noise model (for simplicity we omit the time argument t of u(t) and v(t))

$$s(t) = \left[ f_0 u + f_1 \dot{u} + f_2 \ddot{u} + \dots + f_{n+p}\, u^{(n+p)} \right] + \left[ g_0 v + g_1 \dot{v} + g_2 \ddot{v} + \dots + g_{q+m}\, v^{(q+m)} \right] \qquad (2.36)$$

where, with D = d/dt the differential operator, the coefficient polynomials are given by the operator products

$$f_0 + f_1 D + f_2 D^2 + \dots + f_{n+p} D^{n+p} = \left( b_0 + b_1 D + \dots + b_n D^n \right)\left( c_0 + c_1 D + \dots + c_p D^p \right) \qquad (2.37)$$

and

$$g_0 + g_1 D + g_2 D^2 + \dots + g_{q+m} D^{q+m} = \left( d_0 + d_1 D + \dots + d_q D^q \right)\left( a_0 + a_1 D + \dots + a_m D^m \right) \qquad (2.38)$$

Then the autocorrelation function of s(t) can be shown to have the form

$$E[s(t)s(t+\tau)] = r_0\,\delta(\tau) + r_1\,\delta^{(2)}(\tau) + \dots + r_l\,\delta^{(2l)}(\tau) \qquad (2.39)$$

where $\delta(\cdot)$ is the Dirac function, $l = \max(n+p,\, q+m)$, and the coefficients $r_i$, i = 0, 1, ..., l, are computed in Appendix E. If a series $e_i$ exists that fulfils the relations

$$r_0 = e_l^2, \qquad r_i = (-1)^i \left( e_{l-i}^2 - 2\, e_{l-i-1}\, e_{l-i+1} \right),\; i = 1, 2, \dots, l-1, \qquad r_l = (-1)^l e_0^2 \qquad (2.40)$$

then the new stochastic process

$$\tilde{s}(t) = e_0 w^{(l)}(t) + e_1 w^{(l-1)}(t) + \dots + e_l w(t)$$

where w(t) is white noise, is a wide-sense stationary (WSS) equivalent of the stochastic process s(t), i.e. $E[s(t)s(t+\tau)] = E[\tilde{s}(t)\tilde{s}(t+\tau)]$. The detailed proof can


be found in Appendix E (Han, 2010). This theorem can obviously be extended to situations with more than two coloured noise components. To apply it, we define the sum of the three coloured noise components as

$$z(t) = d_{rw}(t) + d_{bi}(t) + d_{rr}(t) \qquad (2.41)$$

Rewriting Eqs. (2.31)-(2.33) with the differential operator D as

$$d_{rw} = \frac{K u_1(t)}{D} \qquad (2.42)$$

$$d_{bi} = \frac{\sqrt{2\beta}\, B\, u_2(t)}{D + \beta} \qquad (2.43)$$

$$d_{rr} = \frac{R u_3(t)}{D^2} \qquad (2.44)$$

and substituting Eqs. (2.42)-(2.44) into Eq. (2.41), one obtains

$$D^2 (D+\beta)^2 z(t) = D(D+\beta)^2 K u_1(t) + D^2 (D+\beta) \sqrt{2\beta}\, B\, u_2(t) + (D+\beta)^2 R u_3(t) \qquad (2.45)$$

The left side of Eq. (2.45) can be rewritten as

$$\left(D^4 + 2\beta D^3 + \beta^2 D^2\right) z(t) \qquad (2.46)$$

and the right side of Eq. (2.45) we denote as a new stochastic process s(t) made up of the three components

$$s(t) = \left(D^3 + 2\beta D^2 + \beta^2 D\right) K u_1(t) + 2\beta B D^3 u_2(t) + \left(D^2 + 2\beta D + \beta^2\right) R u_3(t) \qquad (2.47)$$

Then, according to the theorem above, s(t) is equivalent to another stochastic process

$$\tilde{s}(t) = e_0 w^{(3)}(t) + e_1 w^{(2)}(t) + e_2 w^{(1)}(t) + e_3 w(t) \qquad (2.48)$$

where the coefficients are given by the following relations as indicated in Eq. (2.40):

$$e_0^2 = K^2 + 4\beta^2 B^2$$


$$e_1^2 = 2 e_0 e_2 + 2\beta^2 K^2 + R^2$$

$$e_2^2 = 2 e_1 e_3 + \beta^4 K^2 + 2\beta^2 R^2 \qquad (2.49)$$

$$e_3^2 = \beta^4 R^2$$

It can be easily verified that

$$E[s(t)] = E[\tilde{s}(t)] = 0 \qquad (2.50)$$

and

$$E[s(t)s(t+\tau)] = E[\tilde{s}(t)\tilde{s}(t+\tau)] = \left(K^2 + 4\beta^2 B^2\right)\delta^{(6)}(\tau) + \left(2\beta^2 K^2 + R^2\right)\delta^{(4)}(\tau) + \left(\beta^4 K^2 + 2\beta^2 R^2\right)\delta^{(2)}(\tau) + \beta^4 R^2\,\delta(\tau) \qquad (2.51)$$

From Eqs. (2.46) and (2.48) we obtain the following equivalent noise model to represent the three coloured noise components:

$$z^{(4)}(t) + a_1 z^{(3)}(t) + a_2 z^{(2)}(t) + a_3 z^{(1)}(t) + a_4 z(t) = e_0 w^{(3)}(t) + e_1 w^{(2)}(t) + e_2 w^{(1)}(t) + e_3 w(t) \qquad (2.52)$$

where $a_1 = 2\beta$; $a_2 = \beta^2$; $a_3 = 0$; $a_4 = 0$.

Up to now we can draw the conclusion that an equivalent single noise model, although of higher order, can be derived to represent the three coloured noise components in the WSS sense. The physical meaning of this operation is also straightforward: we can construct a single, more complicated (higher-order) coloured noise model to represent the combination of several simple coloured noise components. Since the driving noise of the equivalent model in Eq. (2.52) is white, and the Kalman filter only requires the noise PSD, which forms a Fourier transform pair with the autocorrelation function, the equivalent model can replace the original three noise models in a Kalman filter. The Kalman filter augmented with coloured noise will be implemented in Chapter 6.
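The internal consistency of the relations in Eq. (2.40) can be checked numerically for the case l = 3 used here: the PSD of the equivalent process is |P(jω)|² with P(s) = e₀s³ + e₁s² + e₂s + e₃, and its polynomial coefficients must reproduce the rᵢ of Eq. (2.40) up to the (−1)ⁱ sign convention. A minimal sketch (the function names are illustrative, not from the thesis):

```python
import numpy as np

def psd_coeffs(e):
    """PSD of s(t) = sum_i e[i] w^{(l-i)}(t) for unit white noise w:
    S(omega) = |P(j*omega)|^2 with P(s) = e0 s^l + ... + el.
    Returns the coefficients of omega^{2l}, omega^{2l-2}, ..., omega^0."""
    l = len(e) - 1
    pj = np.array(e, dtype=complex) * (1j) ** np.arange(l, -1, -1)
    prod = np.polymul(pj, np.conj(pj)).real  # odd powers cancel exactly
    return prod[::2]

def r_signed(e):
    """(-1)^i * r_i, with r_i taken literally from the relations of
    Eq. (2.40); exact for l = 3, the case used in the text."""
    l = len(e) - 1
    r = [e[0] ** 2]
    for i in range(1, l):
        r.append((-1) ** i * e[i] ** 2 + 2 * (-1) ** (i + 1) * e[i - 1] * e[i + 1])
    r.append((-1) ** l * e[l] ** 2)
    return [(-1) ** i * ri for i, ri in enumerate(r)]

e = [1.0, 2.0, 3.0, 4.0]   # arbitrary test coefficients, l = 3
print(psd_coeffs(e))        # [ 1. -2. -7. 16.]
print(r_signed(e))          # [1.0, -2.0, -7.0, 16.0]
```

The agreement of the two outputs illustrates that matching the coefficients rᵢ is equivalent to matching the PSD, which is all the Kalman filter needs.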


2.6 Summary

In this chapter, the following content and conclusions are included:

- MEMS technology is introduced, with emphasis on its advantages in size, weight and price, but also its low accuracy.

- The basic principles of MEMS inertial sensors are presented, which form the foundation for the INS navigation principles in Chapter 3.

- Different stochastic error analysis methods are discussed, including AR, GM process, PSD and AV. AV resolves the largest number of noise components and is the analysis method preferred by many standards.

- Tests and discussion on IMU stochastic modelling based on the methods above show that, although different methods may give different values of the noise coefficients, the same noise sources can be verified for a specific sensor. After estimating the coefficients of the main stochastic errors, a better error model can be applied in the GPS/INS integrated system.

- An equivalent coloured noise model of three coloured noise components (random walk, bias instability and ramp noise) is derived, which is the foundation for the integrated system augmentation and MEMS sensor performance evaluation in Chapter 6.


Chapter 3

GPS/IMU integrated navigation

Because GPS and INS are complementary in many respects, integrated GPS/INS systems have been a hot research topic and have been widely applied during the last decade. With a GPS receiver we can obtain the position with bounded errors, while an INS provides both position and attitude information at a high update rate. Since GPS positioning has a relatively low update rate, for high-dynamic applications the measurements from the INS can be used to determine the navigation solution between GPS updates. When a new GPS update becomes available, the INS sensor errors can be corrected in real time, so that the errors of the INS solution are also bounded.

3.1 Three levels of integration structures

Initially, there were two broad integration structures for GPS and INS, namely loose and tight integration. In recent years a new structure has been proposed, namely ultra-tight integration or deep integration (Gebre-Egziabher, 2007). The main difference between these three integration structures is the level at which the integration takes place, as shown in Fig. 3.1.

Fig. 3.1 Loose, tight and ultra-tight GPS/INS navigation systems (from Babu and

Wang, 2004)


There is also another pair of terms, centralized and decentralized integration, used in many earlier studies; these correspond to tight and loose integration. Since the term ultra-tight integration appeared, the terms loose and tight integration have been adopted in most recent works. In a tightly coupled system, the measurements from the INS sensors are combined with the GPS pseudorange (PR) and delta pseudorange (DPR) measurements, where the DPR is the variation of the PR between two consecutive epochs. The classic tight integration structure is shown in Fig. 3.2. A tightly integrated system can yield a more accurate navigation solution, because the basic GPS observables are not as correlated as the position and velocity solutions, and it permits more sensitive fault detection and allows the GPS data to be used even with fewer than 4 satellites available. GPS signals can help the INS initial alignment, while INS data can assist the GPS ambiguity resolution. However, this integration responds more slowly to INS errors than the loosely coupled system.

Fig. 3.2 Classic definition of GPS/INS tight integration (from Gebre-Egziabher, 2007) (ri is the pseudorange between the receiver and satellite i)

There are great differences between the loose and tight architectures. In loosely coupled systems, the position, velocity and time from the GPS receiver are combined


with position, velocity and attitude from the INS through a mathematical description of the error characteristics of the two systems in an Extended Kalman Filter (EKF). The key characteristic is that the INS and GPS remain independent navigation systems. This is a kind of open-loop collaboration, with or without feedback to correct the INS sensor errors, and the performance of a loosely coupled system depends greatly on the GPS solution. In the tightly coupled system, on the other hand, the angular rate and specific force measurements from the gyros and accelerometers are combined with the PR and DPR measurements; in this sense, both INS and GPS are reduced to basic sensors. A tightly integrated system can yield a more accurate navigation solution, because the basic GPS observables are not as correlated as the position/velocity solutions, and a more sensitive fault detection and isolation scheme can be used to verify the quality of the PRs. Moreover, it is possible to use the GPS data even with fewer than 4 satellites available. Since GPS positioning itself requires four or more satellite signals, one great benefit of the tightly coupled system is that even fewer than four tracked satellites can still help to estimate the INS errors and improve the precision of position and attitude (Horemuz and Sjöberg, 2001). Loose integration, on the other hand, has the advantage of redundancy, since both INS and GPS provide their navigation solutions independently. A more complex and potentially more beneficial level of GPS/INS integration occurs at the GPS tracking-loop level and is named ultra-tight integration. In an ultra-tightly coupled system, the inertial sensors are used to aid the GPS phase/frequency and code tracking loops directly. The measurements used are the in-phase and quadrature-phase signals (for further details, we refer to Tsui, 2004, Sect. 6.10).

The main feature of ultra-tight integration is that the GPS signal acquisition is combined into the estimation algorithm, as shown in Fig. 3.1. Therefore the performance and robustness of GPS signal tracking in harsh environments can be improved. There is also a feedback loop to the Doppler frequency estimate, which helps to regain GPS signal tracking faster after a GPS blockage. This integration offers reduced phase-tracking bandwidth, shorter acquisition time, and better resistance to radio frequency interference and multipath noise. It therefore has the potential to be applied in high-performance navigation systems where robustness against cycle slips is of paramount importance (Bhatti et al., 2007a and 2007b). Going from loose integration to tight integration and on to ultra-tight integration, the robustness of the integrated system is strengthened. However, the increased robustness comes at the sacrifice of system simplicity, redundancy, and the independence of the INS and GNSS navigators (Gebre-Egziabher, 2007).

Page 55: Key Technologies in Low-cost Integrated Vehicle Navigation ...656086/FULLTEXT01.pdf · Dr. Patric Jansson, Ms. Uliana Danila for their useful advices and everyday company; Thanks

40

3.2 Principle of loose integration

In this thesis a loose GPS/INS integration structure is implemented for different kinds of simulations and experiments. A background introduction to the GPS observations and the basic INS navigation principles is given below, and the detailed implementation of the loosely integrated system is described in Section 3.3.

3.2.1 GPS observations and positioning accuracy

As for the GPS observations, only the basic phase and code pseudoranges are presented here. We assume that the baselines are short, so that the atmospheric and satellite ephemeris errors are negligible. The phase and code observation equations are given as follows (Hofmann-Wellenhof et al., 2001, Sect. 6.1):

$$L_1^i = \rho_0^i + a_X^i\,\Delta X + a_Y^i\,\Delta Y + a_Z^i\,\Delta Z + \lambda_1 N_1^i + \varepsilon_{L1}^i$$

$$L_2^i = \rho_0^i + a_X^i\,\Delta X + a_Y^i\,\Delta Y + a_Z^i\,\Delta Z + \lambda_2 N_2^i + \varepsilon_{L2}^i \qquad (3.1)$$

$$C_1^i = \rho_0^i + a_X^i\,\Delta X + a_Y^i\,\Delta Y + a_Z^i\,\Delta Z + \varepsilon_{C1}^i$$

$$C_2^i = \rho_0^i + a_X^i\,\Delta X + a_Y^i\,\Delta Y + a_Z^i\,\Delta Z + \varepsilon_{C2}^i$$

where the superscript i refers to the satellite, the subscripts 1 and 2 denote the two signal frequencies, L and C are the phase and code observables, respectively, $\rho_0^i$ is the range between the receiver and satellite i computed from approximate receiver coordinates, $N_1^i$ and $N_2^i$ are the ambiguities, $\lambda_1$ and $\lambda_2$ are the wavelengths, and $\varepsilon_{L1}^i$, $\varepsilon_{L2}^i$, $\varepsilon_{C1}^i$, $\varepsilon_{C2}^i$ are the random observation errors of the phase and code observables on frequencies L1 and L2, respectively. Please note that all terms in Eq. (3.1) are double-differences when used in tight integration. These double-difference phase and code pseudoranges will be derived in Section 3.3.2. Assuming uncorrelated data, the covariance matrix of the double-difference observations is given by (Leick 1999, p. 105 and Sjöberg 2007, Sect. 6.2)

Page 56: Key Technologies in Low-cost Integrated Vehicle Navigation ...656086/FULLTEXT01.pdf · Dr. Patric Jansson, Ms. Uliana Danila for their useful advices and everyday company; Thanks

41

$$\mathbf{R}_L = 4\begin{bmatrix}\sigma_{L1}^2 & 0 & 0 & 0\\ 0 & \sigma_{L2}^2 & 0 & 0\\ 0 & 0 & k^2\sigma_{L1}^2 & 0\\ 0 & 0 & 0 & k^2\sigma_{L2}^2\end{bmatrix} \qquad (3.2)$$

where $\sigma_{L1}$, $\sigma_{L2}$, $\sigma_{C1}$, $\sigma_{C2}$ are the standard errors of the uncorrelated and undifferenced phase and code observables on the two frequencies, and k is the ratio between the code and phase standard errors, i.e.

$$k = \frac{\sigma_{C1}}{\sigma_{L1}} = \frac{\sigma_{C2}}{\sigma_{L2}} \qquad (3.3)$$

We also assume that the phase ambiguities in Eq. (3.1) are fixed, so that these terms can be moved to the left side of the equations; otherwise two ambiguities must be added to the state vector each time a satellite is locked on. Regarding GPS positioning accuracy, we present two examples of positioning curves for two typical GPS applications: Precise Point Positioning (PPP) and Real Time Kinematic (RTK). PPP uses a single receiver to determine the absolute position of one point; an example of a PPP positioning error curve is shown in Fig. 3.3.

Fig. 3.3 Static PPP positioning error (from Chen et al., 2006). (DN, DE and DH are coordinate differences in the north, east and height directions, respectively.)


RTK is another common technique used in land surveying, based on the phase observations. The RTK configuration contains a static reference station and one or several moving rover receivers. It starts with an initialization stage, during which the phase ambiguities are resolved. The reference station then provides real-time corrections to the rovers, giving up to centimetre-level accuracy. The standard RTK configuration with one reference station works satisfactorily for rover stations within 10 km of the reference station (Sjöberg, 2007, p. 68). An example of an RTK positioning error curve is shown in Fig. 3.4.

Fig. 3.4 Precision of the north coordinates in kinematic mode (from Tiberius, 1998, p. 174). (Please note that the "+" and "x" symbols coincide with each other at the first two epochs, and with the square symbols at the following epochs.)

3.2.2 INS and navigation principles

An INS usually contains an embedded microcomputer unit and a platform or module containing a set of accelerometers and gyroscopes (IMU). The initial position and attitude of the INS must be provided from another source (e.g. GPS, compass, etc.); thereafter the INS computes the updated position and velocity by integrating the measurements from the IMU. An INS requires no external information to determine its


position, velocity and orientation, once it has been initialized. Therefore it is autonomous, immune to jamming or deception (in contrast to GPS) and quite suitable for military applications.

Before introducing the INS navigation principles, the basic concepts of coordinate frames and rotation matrices are introduced in Appendices A and C, respectively. The following coordinate frames are used in this thesis: the inertial frame (i-frame), the conventional terrestrial frame (e-frame), the navigation frame (n-frame), the body frame (b-frame) and the camera frame (c-frame). Two commonly used coordinate frames (the e-frame and the n-frame) are shown in Fig. 3.5.

Fig. 3.5 The e-frame and the n-frame

An IMU consists of two main parts: three orthogonal accelerometers and three orthogonal gyroscopes, which sense the vehicle accelerations and angular velocities, respectively.

Please note that the mechanical gyroscope and accelerometer are introduced below only to describe the basic principle of the IMU. A MEMS sensor contains no such mechanical components; it is instead based on micro- and nanoelectromechanical structures and nanotechnology.

A gyroscope is a device for measuring or maintaining orientation, based on the principles of conservation of angular momentum. An accelerometer is a device that measures the acceleration along a specific axis. The basic principles of gyros and accelerometers were introduced in Section 2.1.1.

Next we introduce the important concept of specific force. Originally, the accelerometer was in fact a gravity meter (shown in Fig. 2.4). Indeed, all kinds of


gravity meters are accelerometers. Based on Newton's second law, the equation of motion is

$$\mathbf{f} = m\mathbf{a} \qquad (3.4)$$

where f is the force, m is the mass and $\mathbf{a} = \ddot{\mathbf{x}}$ is the acceleration. Therefore, accelerometers are built on the principle of measuring the force exerted on a test body of known mass along a given axis.

The compression of the spring is given by Hooke's law, $f_s = kx$, where k is the spring coefficient and x is the spring displacement. Therefore the equation of motion can be written as

$$m\ddot{x} = f_s = kx \qquad (3.5)$$

A complete force analysis gives

$$m\ddot{\mathbf{x}}_C + b\dot{\mathbf{x}}_C + k\mathbf{x}_C = m(\mathbf{g} - \ddot{\mathbf{r}}) \qquad (3.6)$$

where C denotes the mass centre and b is the damping coefficient. Since $\mathbf{x}_C = \mathbf{x} - \mathbf{o}$, where o is the initial balance point, Eq. (3.6) can be rewritten as

$$\ddot{\mathbf{x}}_C + \frac{b}{m}\dot{\mathbf{x}}_C + \frac{k}{m}\mathbf{x}_C = \mathbf{g} - \ddot{\mathbf{r}}_o \qquad (3.7)$$

Therefore we can see that the output of an accelerometer in the vicinity of the Earth is $\mathbf{g} - \ddot{\mathbf{r}}_o$, which is called the specific force. If the accelerometer is stationary on the Earth's surface, i.e. $\ddot{\mathbf{r}}_o = \mathbf{0}$, the output is the gravity $\mathbf{g}$. It should be noted that the IMU accelerometers sense specific forces in the b-frame only. This concept is of vital importance in inertial navigation.

Navigation principle: The fundamental equation for motion in the gravitational field of the Earth is

$$\ddot{\mathbf{r}}^i = \mathbf{f}^i + \mathbf{g}^i \qquad (3.8)$$

where $\mathbf{r}^i$ is the position vector from the origin of the inertial frame to the vehicle; $\mathbf{f}^i$ is the specific force vector;


$\mathbf{g}^i$ is the gravitational acceleration vector.

The raw observations from the strapdown INS (SINS) are two orthogonal triads of specific forces and angular velocities in the b-frame. Both must be properly scaled and corrected for systematic errors. We store the scale factors of the accelerometers in the vector $\mathbf{S}_a = (S_{ax}\ S_{ay}\ S_{az})^T$, the scale factors of the gyroscopes in $\mathbf{S}_g = (S_{gx}\ S_{gy}\ S_{gz})^T$, the gyro drifts in the vector d and the accelerometer biases in the vector b. Then the corrected accelerometer observations are computed as

$$\hat{\mathbf{f}}^b = \begin{bmatrix} f_x^b S_{ax} \\ f_y^b S_{ay} \\ f_z^b S_{az} \end{bmatrix} - \mathbf{b} \qquad (3.9)$$

and the corrected gyro observations as

$$\hat{\boldsymbol{\omega}}_{ib}^b = \begin{bmatrix} \omega_{ibx}^b S_{gx} \\ \omega_{iby}^b S_{gy} \\ \omega_{ibz}^b S_{gz} \end{bmatrix} - \mathbf{d} \qquad (3.10)$$

where $\mathbf{f}^b$ and $\boldsymbol{\omega}_{ib}^b$ are the raw outputs of the accelerometers and gyroscopes,

respectively. In the following, we present a simple set of inertial navigation equations. First, if the angular velocity can be assumed constant during the time interval $\Delta t$, the rotation vector becomes

$$\boldsymbol{\theta} = \int_{t_{k-1}}^{t_k} \boldsymbol{\omega}_{ib}^b\,dt = \begin{bmatrix} \omega_{ibx}^b \Delta t & \omega_{iby}^b \Delta t & \omega_{ibz}^b \Delta t \end{bmatrix}^T \qquad (3.11)$$

where $\Delta t = t_k - t_{k-1}$. The rotation matrix in the navigation frame is then (Horemuz, 2006, p. 39)

$$\mathbf{R}_{b(k)}^n = \mathbf{R}_{b(k-1)}^n \left( \mathbf{I} + \frac{\sin\theta_b}{\theta_b}\,\mathbf{S} + \frac{1-\cos\theta_b}{\theta_b^2}\,\mathbf{S}^2 \right) \qquad (3.12)$$


where $\theta_b = \lVert\boldsymbol{\theta}\rVert$,

$$\mathbf{S} = \begin{bmatrix} 0 & -\theta_z & \theta_y \\ \theta_z & 0 & -\theta_x \\ -\theta_y & \theta_x & 0 \end{bmatrix}$$

and the components $\theta_x$, $\theta_y$, $\theta_z$ are computed from the average of the angular rates at the interval end points, i.e. $\boldsymbol{\theta} \approx \dfrac{\Delta t}{2}\left(\boldsymbol{\omega}(t_{k-1}) + \boldsymbol{\omega}(t_k)\right)$.

The velocity update is given by

$$\mathbf{V}_k^n = \mathbf{V}_{k-1}^n + \Delta\mathbf{V}^n \qquad (3.13)$$

where $\Delta\mathbf{V}^n = (\mathbf{f}^n + \mathbf{g}^n)\Delta t$, $\Delta t = t_k - t_{k-1}$ and $\mathbf{f}^n = \mathbf{R}_b^n \mathbf{f}^b$.

Finally, the position update is given by

$$\mathbf{P}_k^n = \mathbf{P}_{k-1}^n + \mathbf{V}_{k-1}^n\,\Delta t + \frac{1}{2}\left(\mathbf{g}^n + \mathbf{f}^n\right)\Delta t^2 \qquad (3.14)$$

where the specific force and gravity are evaluated at the interval midpoint $t = t_{k-1} + \Delta t/2$.

Please note that these equations are valid only in situations where the sculling error vanishes, where the angular rate and specific force are constant during the integration interval, and where there is no velocity rotation, i.e. the velocity direction does not change. The sculling error is the integrated contribution of the high-frequency content in the velocity update; for more details we refer to Jekeli (2001, pp. 124-157). Inertial navigation is based on the integration of the sensed specific forces and angular velocities with respect to time, and therefore deals with the solution of a series of differential equations. Since a mathematical platform is adopted in the SINS, i.e. the attitude matrix is derived by real-time computation, the attitude update algorithm is the crucial part of this type of INS navigation, because it is the main factor influencing the navigation accuracy. There are several traditional attitude update algorithms, namely the Euler angle algorithm, the direction cosine algorithm and the quaternion algorithm. Each method has its advantages and disadvantages, but all three formulations are mathematically equivalent. For more detailed information about these algorithms, we refer to Jekeli (2001, pp. 9-19) and Horemuz (2006, pp. 12-24).
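The update equations (3.11)-(3.14) can be combined into a single mechanization step. The following is a minimal sketch under the stated assumptions (constant rate and specific force over the interval, no sculling or coning corrections); the function and variable names are illustrative, not from the thesis:

```python
import numpy as np

def strapdown_update(R_nb, V_n, P_n, f_b, w_b, g_n, dt):
    """One strapdown mechanization step following Eqs. (3.11)-(3.14)."""
    theta = w_b * dt                           # rotation vector, Eq. (3.11)
    th = np.linalg.norm(theta)
    S = np.array([[0.0, -theta[2], theta[1]],
                  [theta[2], 0.0, -theta[0]],
                  [-theta[1], theta[0], 0.0]])
    if th > 1e-12:                             # Rodrigues form, Eq. (3.12)
        dR = np.eye(3) + np.sin(th) / th * S + (1 - np.cos(th)) / th**2 * (S @ S)
    else:
        dR = np.eye(3) + S
    R_new = R_nb @ dR
    f_n = R_nb @ f_b                           # specific force in the n-frame
    V_new = V_n + (f_n + g_n) * dt             # Eq. (3.13)
    P_new = P_n + V_n * dt + 0.5 * (f_n + g_n) * dt**2   # Eq. (3.14)
    return R_new, V_new, P_new

# Stationary check: with f_b = -g_n, velocity and position stay unchanged.
R, V, P = strapdown_update(np.eye(3), np.zeros(3), np.zeros(3),
                           np.array([0.0, 0.0, -9.81]), np.zeros(3),
                           np.array([0.0, 0.0, 9.81]), 0.01)
```

The stationary check mirrors the specific force discussion of Section 3.2.2: a resting accelerometer senses the negative of gravity, so the net acceleration term vanishes.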


3.2.3 IMU sensors calibration

Before we use an IMU for navigation purposes, all the sensor biases must be estimated. Using data from a static IMU in different positions enables us to estimate some systematic errors of the sensors, namely the accelerometer biases and gyro drifts. The basic observations should be performed in the following way:

1- Place the IMU on a table, as precisely levelled as possible. Orient the xb axis towards north, the yb axis towards east and the zb axis downwards (position 1).

2- Record static data for about 15 seconds. Compute the mean values of the specific forces ($\bar{\mathbf{f}}_1$) and of the angular velocities ($\bar{\boldsymbol{\omega}}_1$).

3- Stop the recording and rotate the IMU by 180° around zb, so that the xb axis now points south. Then record again about 15 seconds of data. The mean values are denoted by $\bar{\mathbf{f}}_2$ and $\bar{\boldsymbol{\omega}}_2$ (position 2).

In position 1, the accelerometers x and y should sense zero specific force if the table is perfectly levelled and the accelerometers are bias-free. If these conditions are not fulfilled (which is usually the case), then we can write the following equations:

$$\bar{f}_{x1} = g_x + b_x$$
$$\bar{f}_{y1} = g_y + b_y \qquad (3.15)$$

Similarly for position 2:

$$\bar{f}_{x2} = -g_x + b_x$$
$$\bar{f}_{y2} = -g_y + b_y \qquad (3.16)$$

The bias can be computed by

$$\mathbf{b} = \frac{1}{2}\left(\bar{\mathbf{f}}_1 + \bar{\mathbf{f}}_2\right) \qquad (3.17)$$

and the corrected specific force measurements can be computed by

$$\hat{\mathbf{f}} = \mathbf{f} - \mathbf{b} \qquad (3.18)$$

In position 1, the gyroscopes should sense just the north, east and down components of the Earth's rotation:

$$\boldsymbol{\omega}_{ie}^n = \begin{bmatrix} \omega_e\cos\varphi & 0 & -\omega_e\sin\varphi \end{bmatrix}^T \qquad (3.19)$$


So the gyro drift components can be computed as

$$d_x = \bar{\omega}_x - \omega_e\cos\varphi$$
$$d_y = \bar{\omega}_y \qquad (3.20)$$
$$d_z = \bar{\omega}_z + \omega_e\sin\varphi$$

and the measured angular rates are then corrected as

$$\hat{\boldsymbol{\omega}} = \boldsymbol{\omega} - \mathbf{d} \qquad (3.21)$$
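The two-position procedure of Eqs. (3.15)-(3.21) amounts to a few vector operations. A minimal sketch (the names are illustrative; note that only the horizontal accelerometer biases are separated from gravity by this scheme, since the z-channel senses gravity in both positions):

```python
import numpy as np

OMEGA_E = 7.292115e-5      # Earth rotation rate [rad/s]

def two_position_calibration(f1, f2, w1, lat):
    """Static two-position calibration sketch, Eqs. (3.15)-(3.21):
    accelerometer bias from the mean specific forces of the two
    positions, gyro drift from the position-1 mean rates minus
    the Earth rotation rate at latitude lat [rad]."""
    b = 0.5 * (np.asarray(f1) + np.asarray(f2))          # Eq. (3.17)
    w_ie_n = np.array([OMEGA_E * np.cos(lat),
                       0.0,
                       -OMEGA_E * np.sin(lat)])           # Eq. (3.19)
    d = np.asarray(w1) - w_ie_n                           # Eq. (3.20)
    return b, d
```

With these estimates, the corrections of Eqs. (3.18) and (3.21) are simple subtractions applied to every subsequent raw sample.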

3.2.4 INS Initialization and Alignment

Before an IMU is ready for use, the b-frame axes must be aligned with the navigation frame axes; in other words, the direction cosine matrix between the b-frame and the n-frame, i.e. the matrix $\mathbf{R}_n^b$, must be established.

The initial position and velocity, required as integration constants in the primary navigation equations, are simply inserted into the system as input data. Levelling a gimballed platform is accomplished by monitoring the outputs of two mutually perpendicular accelerometers whose input axes are parallel to the platform. If the platform is stationary, the accelerometers sense only components of the gravity acceleration vector. The platform is levelled when its vertical axis is aligned with the direction of gravity; the horizontal accelerometers on the platform then sense zero specific force. Thus an approximate levelling is achieved by rotating the platform pitch and roll gimbals with the servo motors until the output of each horizontal accelerometer vanishes.

For strapdown IMUs a coarse alignment can be done in an analytical and numerical way. Assuming a stationary IMU, the accelerometers sense only the components of the gravity vector, and the gyros sense only the angular velocity of the Earth's rotation.

The coarse alignment procedure is as follows (for a detailed derivation, we refer to Horemuz 2006, pp. 56-59). First we compute the mean values of the raw data

$$\bar{f}_x = \mathrm{mean}(f_x);\quad \bar{f}_y = \mathrm{mean}(f_y);\quad \bar{f}_z = \mathrm{mean}(f_z)$$
$$\bar{\omega}_x = \mathrm{mean}(\omega_x);\quad \bar{\omega}_y = \mathrm{mean}(\omega_y);\quad \bar{\omega}_z = \mathrm{mean}(\omega_z) \qquad (3.22)$$

With these mean values we can then compute the direction cosine matrix between the b-frame and the n-frame, $\mathbf{R}_n^b = (\mathbf{R}_b^n)^T$, as $\mathbf{R}_n^b$ is given by


$$\mathbf{R}_n^b = \begin{bmatrix}
\dfrac{\bar{\omega}_x}{\omega_e\cos\varphi} - \dfrac{\bar{f}_x\tan\varphi}{g} & \dfrac{\bar{f}_z\bar{\omega}_y - \bar{f}_y\bar{\omega}_z}{g\,\omega_e\cos\varphi} & -\dfrac{\bar{f}_x}{g} \\[2ex]
\dfrac{\bar{\omega}_y}{\omega_e\cos\varphi} - \dfrac{\bar{f}_y\tan\varphi}{g} & \dfrac{\bar{f}_x\bar{\omega}_z - \bar{f}_z\bar{\omega}_x}{g\,\omega_e\cos\varphi} & -\dfrac{\bar{f}_y}{g} \\[2ex]
\dfrac{\bar{\omega}_z}{\omega_e\cos\varphi} - \dfrac{\bar{f}_z\tan\varphi}{g} & \dfrac{\bar{f}_y\bar{\omega}_x - \bar{f}_x\bar{\omega}_y}{g\,\omega_e\cos\varphi} & -\dfrac{\bar{f}_z}{g}
\end{bmatrix} \qquad (3.23)$$

Following Eqs. (D.9) and (D.10) in Appendix D, we can obtain the misalignment angles from Eq. (3.23) by

$$\mathrm{roll} = \arctan\frac{\bar{f}_y}{\bar{f}_z}$$

$$\mathrm{pitch} = \arctan\frac{\bar{f}_x}{\sqrt{\bar{f}_y^2 + \bar{f}_z^2}} \qquad (3.24)$$

$$\mathrm{yaw} = \arctan\frac{\bar{f}_z\bar{\omega}_y - \bar{f}_y\bar{\omega}_z}{g\,\bar{\omega}_x - \bar{f}_x\,\omega_e\sin\varphi}$$

which are the three orientation angles.

After a coarse alignment, small angular errors remain between the actual and ideal alignments. This is due to systematic sensor errors that cannot be calibrated in the lab, particularly the biases, which take different values each time the system is switched on. In addition, different kinds of stochastic errors occur in each session, as shown in Chapter 2. The coarse alignment is refined under the assumption that external information provides the means to estimate the systematic instrument errors. The calibration proceeds on the basis of external observations (both position and velocity) in a Kalman filter.

3.2.5 INS error dynamics in the e-frame

Different navigation equations can be used to describe the INS dynamic process, i.e. the position and orientation changes with time. To solve such system equations with kinematic measurements, it is advantageous to transform the higher-order differential equations into first-order ones. The variables of such a first-order differential equation system are called state variables and describe the system dynamics in time. With a state-variable description of the dynamic process, a Kalman filter can be set up to estimate the state variables from kinematic measurements. To begin with, we assume that the initial position and orientation errors are known. The outputs of a strapdown IMU are three orthogonal specific forces from the accelerometers and three angular velocities from the gyros. Please note that all of


these IMU outputs are in the b-frame. In the initialization stage, the direction cosine matrix is obtained by Eq. (3.23), and since the initial position is known, all initial values can be transformed to the e-frame. In the following, we directly present the INS error dynamics equations in the e-frame; for a more detailed derivation of the error dynamics equations in different frames, we refer to Jekeli (2001, pp. 139-153) and Horemuz (2006, pp. 43-56). The explicit error dynamics equations in matrix form are given by:

$$\dot{\mathbf{x}} = \begin{bmatrix} \delta\dot{\mathbf{r}}^e \\ \delta\dot{\mathbf{v}}^e \\ \dot{\boldsymbol{\varepsilon}}^e \\ \dot{\mathbf{d}} \\ \dot{\mathbf{b}} \end{bmatrix} = \begin{bmatrix} \delta\mathbf{v}^e \\ \mathbf{N}^e\delta\mathbf{r}^e - 2\boldsymbol{\Omega}_{ie}^e\,\delta\mathbf{v}^e - (\mathbf{a}^e\times)\,\boldsymbol{\varepsilon}^e + \mathbf{R}_b^e\mathbf{b} + \mathbf{R}_b^e\mathbf{w}_A \\ -\boldsymbol{\Omega}_{ie}^e\,\boldsymbol{\varepsilon}^e - \mathbf{R}_b^e\mathbf{d} - \mathbf{R}_b^e\mathbf{w}_G \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_1 & \mathbf{F}_{12} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix} + \begin{bmatrix} \mathbf{G}_1 \\ \mathbf{G}_2 \end{bmatrix}\mathbf{u} = \mathbf{F}\mathbf{x} + \mathbf{G}\mathbf{u} \qquad (3.25)$$

where the state vector is $\mathbf{x} = [\mathbf{x}_1^T\ \mathbf{x}_2^T]^T$ with $\mathbf{x}_1 = [(\delta\mathbf{r}^e)^T\ (\delta\mathbf{v}^e)^T\ (\boldsymbol{\varepsilon}^e)^T]^T$ and $\mathbf{x}_2 = [\mathbf{d}^T\ \mathbf{b}^T]^T$;

$$\mathbf{F} = \begin{bmatrix} \mathbf{F}_1 & \mathbf{F}_{12} \\ \mathbf{0}_{6\times 9} & \mathbf{0}_{6\times 6} \end{bmatrix} \quad\text{with}\quad \mathbf{F}_1 = \begin{bmatrix} \mathbf{0} & \mathbf{I} & \mathbf{0} \\ \mathbf{N}^e & -2\boldsymbol{\Omega}_{ie}^e & -(\mathbf{a}^e\times) \\ \mathbf{0} & \mathbf{0} & -\boldsymbol{\Omega}_{ie}^e \end{bmatrix}, \qquad \mathbf{F}_{12} = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{R}_b^e \\ -\mathbf{R}_b^e & \mathbf{0} \end{bmatrix}$$

and the dimension of all zero matrices in $\mathbf{F}_1$ and $\mathbf{F}_{12}$ is $3\times 3$;

$$\mathbf{N}^e = \frac{kM}{r^3}\begin{bmatrix} \dfrac{3x^2}{r^2}-1 & \dfrac{3xy}{r^2} & \dfrac{3xz}{r^2} \\[1ex] \dfrac{3xy}{r^2} & \dfrac{3y^2}{r^2}-1 & \dfrac{3yz}{r^2} \\[1ex] \dfrac{3xz}{r^2} & \dfrac{3yz}{r^2} & \dfrac{3z^2}{r^2}-1 \end{bmatrix} + \omega_e^2\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

with x, y, z being the position coordinates in the e-frame, kM the geocentric gravitational constant, $r^2 = x^2 + y^2 + z^2$, and $\omega_e = 7.292115\times 10^{-5}$ rad/s the Earth's rotation rate;

$$\boldsymbol{\Omega}_{ie}^e = \begin{bmatrix} 0 & -\omega_e & 0 \\ \omega_e & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}; \qquad (\mathbf{a}^e\times) = \begin{bmatrix} 0 & -a_Z & a_Y \\ a_Z & 0 & -a_X \\ -a_Y & a_X & 0 \end{bmatrix}$$

with $a_X$, $a_Y$, $a_Z$ being the acceleration components expressed in the e-frame;

$$\mathbf{G} = \begin{bmatrix} \mathbf{G}_1 \\ \mathbf{G}_2 \end{bmatrix} \quad\text{with}\quad \mathbf{G}_1 = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{R}_b^e & \mathbf{0} \\ \mathbf{0} & -\mathbf{R}_b^e \end{bmatrix}, \qquad \mathbf{G}_2 = \mathbf{0}_{6\times 6};$$


and $\mathbf{u} = [\mathbf{w}_A^T\ \mathbf{w}_G^T]^T$, where $\mathbf{w}_A$ and $\mathbf{w}_G$ are the white noise components of the accelerometers and gyros, respectively.

3.3 Extended Kalman filter and implementation

The Kalman filter is named after Rudolf E. Kalman, who in 1960 published the classic paper proposing a recursive solution to the discrete-data linear filtering problem (Kalman, 1960). The Kalman filter is essentially a set of mathematical equations implementing a predictor-corrector estimator that minimizes the estimated error covariance. Since then, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation (Welch and Bishop, 2001). The Kalman filter handles the state estimation of a discrete-time process governed by a linear stochastic difference equation. In real applications, however, most process equations are non-linear; in the linearized case the Kalman filter is referred to as the Extended Kalman Filter (EKF). The state functions need not be linear functions of the state, but they must be differentiable.

For a dynamic system, the state transition and observation models can be given as

$$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}) + \mathbf{w}_{k-1} \qquad (3.26)$$

$$\mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{v}_k \qquad (3.27)$$

where x is the state vector, z is the measurement, the subscript k is the discrete time index, f and h are the functions computing the predicted state and the predicted measurements, respectively, and $\mathbf{w}_k$ and $\mathbf{v}_k$ are the process and observation noises, both assumed to be zero-mean Gaussian with covariance matrices $\mathbf{Q}_k$ and $\mathbf{R}_k$, respectively, i.e. $\mathbf{w}_k \sim N(\mathbf{0}, \mathbf{Q}_k)$ and $\mathbf{v}_k \sim N(\mathbf{0}, \mathbf{R}_k)$.

Similar to the Kalman filter, the EKF algorithm has two main steps, the prediction and the update:

Predicted state estimate: $\hat{\mathbf{x}}_{k|k-1} = f(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_{k-1})$ (3.28)

Predicted covariance estimate: $\mathbf{P}_{k|k-1} = \mathbf{F}_{k-1}\mathbf{P}_{k-1|k-1}\mathbf{F}_{k-1}^T + \mathbf{Q}_{k-1}$ (3.29)

Kalman gain: $\mathbf{K}_k = \mathbf{P}_{k|k-1}\mathbf{H}_k^T\left(\mathbf{H}_k\mathbf{P}_{k|k-1}\mathbf{H}_k^T + \mathbf{R}_k\right)^{-1}$ (3.30)

Updated state estimate: $\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k\,\mathbf{y}_k$ (3.31)

Updated covariance estimate: $\mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\,\mathbf{P}_{k|k-1}$ (3.32)

where the innovation is $\mathbf{y}_k = \mathbf{z}_k - h(\hat{\mathbf{x}}_{k|k-1})$, and the state transition and observation matrices are defined by the following two Jacobians: $\mathbf{F}_{k-1} = \left.\dfrac{\partial f}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k-1|k-1},\,\mathbf{u}_{k-1}}$ and $\mathbf{H}_k = \left.\dfrac{\partial h}{\partial \mathbf{x}}\right|_{\hat{\mathbf{x}}_{k|k-1}}$.

In the EKF algorithm, the first two equations constitute the prediction and the last two the update, both using the Kalman gain matrix K given in Eq. (3.30). For conciseness, the recursive loop of the EKF is shown in Fig. 3.6. To implement the Kalman filter for the integrated navigation system, we first build up the system model, i.e. determine the state vector and derive the system dynamic equations as well as the observation equations. Based on the system model, the EKF is implemented in Sections 3.3.1 and 3.3.3.

Fig. 3.6 The circular progress of EKF (from Brown and Hwang, 1997, p.219)

(The figure shows the recursive EKF loop: starting from the prior estimate $\hat{\mathbf{x}}_0$ and its error covariance, the filter computes the Kalman gain, updates the state with the measurements $\mathbf{z}_0, \mathbf{z}_1, \dots$, computes the covariance of the updated estimate, and predicts the state and covariance for the next cycle, producing the estimates $\hat{\mathbf{x}}_0, \hat{\mathbf{x}}_1, \dots$)
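The predict/update cycle of Eqs. (3.28)-(3.32) maps directly to code. A minimal sketch with a toy one-dimensional constant-velocity model (all names and the toy model are illustrative, not the thesis implementation):

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle implementing Eqs. (3.28)-(3.32)."""
    x_pred = f(x, u)                                         # (3.28)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q                                 # (3.29)
    H = H_jac(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # (3.30)
    y = z - h(x_pred)                                        # innovation
    x_new = x_pred + K @ y                                   # (3.31)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred                # (3.32)
    return x_new, P_new

# Toy example: 1-D constant-velocity model with position measurements.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
Hm = np.array([[1.0, 0.0]])
f = lambda x, u: A @ x
h = lambda x: Hm @ x
F_jac = lambda x, u: A
H_jac = lambda x: Hm
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0]:        # target moving at 1 m/s
    x, P = ekf_step(x, P, None, np.array([z]), f, h, F_jac, H_jac, Q, R)
```

Because this toy model is linear, the EKF reduces to the ordinary Kalman filter; the nonlinear case only changes f, h and their Jacobians.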


3.3.1 State vector and system dynamic equations

The state vector contains the unknown parameters to be estimated in the Kalman filter. In this thesis we choose 15 parameters for the state vector of the GPS/INS integrated navigation system, which can be divided into two groups as shown in Eq. (3.33):

$\mathbf{x} = \begin{bmatrix} \mathbf{x}_1^{T} & \mathbf{x}_2^{T} \end{bmatrix}^{T}$ (3.33)

where the first group contains the three-dimensional errors in position, velocity and orientation of the IMU:

$\mathbf{x}_1 = \begin{bmatrix} (\delta\mathbf{r}^{e})^{T} & (\delta\mathbf{v}^{e})^{T} & (\boldsymbol{\varepsilon}^{e})^{T} \end{bmatrix}^{T} = \begin{bmatrix} \delta x_i & \delta y_i & \delta z_i & \delta v_X & \delta v_Y & \delta v_Z & \varepsilon_X & \varepsilon_Y & \varepsilon_Z \end{bmatrix}^{T}$ (3.34)

and the second group contains the sensor errors:

$\mathbf{x}_2 = \begin{bmatrix} \mathbf{d}^{T} & \mathbf{b}^{T} \end{bmatrix}^{T}$ (3.35)

The position and velocity of the vehicle can be computed by:

$x_i = x_{i0} + \delta x_i, \quad v_x = v_{x0} + \delta v_x$
$y_i = y_{i0} + \delta y_i, \quad v_y = v_{y0} + \delta v_y$
$z_i = z_{i0} + \delta z_i, \quad v_z = v_{z0} + \delta v_z$ (3.36)

where $x_{i0}, y_{i0}, z_{i0}$ are the position coordinates and $v_{x0}, v_{y0}, v_{z0}$ are the velocities as delivered by the IMU. The orientation can be computed from the transformation matrix $\mathbf{R}_b^{e}$:

$\hat{\mathbf{R}}_b^{e} = (\mathbf{I} - \mathbf{E}^{e})\,\mathbf{R}_b^{e}$ (3.37)

where

$\mathbf{E}^{e} = \begin{bmatrix} 0 & -\varepsilon_z & \varepsilon_y \\ \varepsilon_z & 0 & -\varepsilon_x \\ -\varepsilon_y & \varepsilon_x & 0 \end{bmatrix}$ (3.38)
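As a small numerical illustration of the misalignment model of Eqs. (3.37) and (3.38), the sketch below (generic, not part of the thesis software) builds the skew-symmetric matrix from the misalignment angles and applies the small-angle perturbation to a rotation matrix:

```python
import numpy as np

def misalignment_matrix(eps):
    """Skew-symmetric matrix E of Eq. (3.38) from misalignment angles (rad)."""
    ex, ey, ez = eps
    return np.array([[0.0, -ez,  ey],
                     [ ez, 0.0, -ex],
                     [-ey,  ex, 0.0]])

def perturbed_rotation(R_be, eps):
    """Small-angle perturbation of Eq. (3.37): R_hat = (I - E) R."""
    return (np.eye(3) - misalignment_matrix(eps)) @ R_be
```

For small angles the perturbed matrix stays orthogonal to first order, since E is skew-symmetric.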

and the orientation relative to the navigation frame becomes


$\mathbf{R}_n^{b} = \mathbf{R}_e^{b}\,\mathbf{R}_n^{e}$ (3.39)

The IMU sensor errors in the second group contain two parts:

I. The accelerometer error $\delta a(t)$ consists of a constant part ($b$) and a random part ($w_A$):

$\delta a(t) = b + w_A(t)$ (3.40)

where
- $b$ is the accelerometer bias, which is modelled as a random constant. This bias takes a random value within a certain limit, usually specified as the bias repeatability, and must be determined after each start of the IMU.
- $w_A(t)$ is the random walk noise of the accelerometer.

II. Similarly, the gyro error can be described as

$\delta\omega(t) = d + w_G(t)$ (3.41)

where
- $d$ is the gyro drift. The same remark holds as for $b$.
- $w_G(t)$ is the random walk noise of the gyro.

The sensor errors ($\delta\boldsymbol{\omega}$ and $\delta\mathbf{a}$) are defined as

$\delta\boldsymbol{\omega} = \hat{\boldsymbol{\omega}} - \boldsymbol{\omega}, \qquad \delta\mathbf{a} = \hat{\mathbf{a}} - \mathbf{a}$ (3.42)
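The error model of Eqs. (3.40) and (3.41) can be illustrated by sampling it. The helper below is a hypothetical sketch, not the thesis code; it assumes the white-noise component of a rate sample is drawn with standard deviation $\sqrt{\mathrm{PSD}/\Delta t}$, so that its integration produces the (angle) random walk:

```python
import numpy as np

def simulate_gyro_error(n, dt, drift, psd, rng=None):
    """Sample the gyro error model of Eq. (3.41): constant drift d plus noise.

    n: number of samples, dt: sample interval [s],
    drift: constant drift d, psd: white-noise PSD of w_G.
    """
    rng = np.random.default_rng(rng)
    w = rng.normal(0.0, np.sqrt(psd / dt), size=n)  # w_G(t), discretized
    return drift + w                                # Eq. (3.41)
```

Averaging a long static record of such samples recovers the drift d, which is the idea behind the static calibration used later in this chapter.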

The INS errors are modelled by a linear system described generally by the following dynamic state equation

$\dot{\mathbf{x}}(t) = \mathbf{F}(t)\,\mathbf{x}(t) + \mathbf{G}(t)\,\mathbf{u}(t)$ (3.43)

and the measurement model

$\mathbf{z}(t) = \mathbf{H}\,\mathbf{x}(t) + \mathbf{v}(t)$ (3.44)

where x is the state vector, F is the system dynamic matrix given in Eq. (3.25), G is input matrix, t is time and u is a vector forcing function, whose elements are white noise. The discrete solution of Eq. (3.43) is given by


$\mathbf{x}_k = \boldsymbol{\Phi}_{k,k-1}\,\mathbf{x}_{k-1} + \mathbf{w}_k$ (3.45)

where $\boldsymbol{\Phi}_{k,k-1}$ is the transition matrix from epoch $k-1$ to $k$ and $\mathbf{w}_k$ is the noise in the state vector. The Kalman filter can be applied iteratively based on this discrete solution. In the following, we present the solutions of each key variable and step in the Kalman filter (for more information, we refer to Farrell 1999, pp. 84-86 and Horemuz 2006, pp. 68-70). The relationship between the transition matrix $\boldsymbol{\Phi}$ and the dynamic matrix $\mathbf{F}$ can be expressed by

$\boldsymbol{\Phi}_{k+1,k} = e^{\mathbf{F}\Delta t} = \mathbf{I} + \mathbf{F}_k\,\Delta t + \frac{1}{2}\mathbf{F}_k^{2}\,\Delta t^{2} + \frac{1}{3!}\mathbf{F}_k^{3}\,\Delta t^{3} + \frac{1}{4!}\mathbf{F}_k^{4}\,\Delta t^{4} + \cdots$ (3.46)

which is the transition matrix between epochs $k$ and $k+1$, where $\Delta t = t_{k+1} - t_k$ and

$\mathbf{w}_{k+1} = \int_{t_k}^{t_{k+1}} \boldsymbol{\Phi}(t_{k+1}, s)\,\mathbf{G}(s)\,\mathbf{u}(s)\,ds$ (3.47)

is the response at epoch $k+1$ due to the presence of the white noise input during the interval $\Delta t$. The covariance matrix of $\mathbf{w}_k$ is given by

$\mathbf{Q}_k = E\left[\mathbf{w}_k\,\mathbf{w}_k^{T}\right] = E\left[\int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{t_{k+1}} \boldsymbol{\Phi}(t_{k+1}, s)\,\mathbf{G}(s)\,\mathbf{u}(s)\,\mathbf{u}^{T}(t)\,\mathbf{G}^{T}(t)\,\boldsymbol{\Phi}^{T}(t_{k+1}, t)\,dt\,ds\right]$
$\qquad\; = \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{t_{k+1}} \boldsymbol{\Phi}(t_{k+1}, s)\,\mathbf{G}(s)\,E\left[\mathbf{u}(s)\,\mathbf{u}^{T}(t)\right]\mathbf{G}^{T}(t)\,\boldsymbol{\Phi}^{T}(t_{k+1}, t)\,dt\,ds$ (3.48)

where

$E\left[\mathbf{u}(s)\,\mathbf{u}^{T}(t)\right] = \delta(s - t)\,\mathbf{Q} = \delta(s - t)\,\mathrm{diag}\!\left(\mathbf{q}_d,\, \mathbf{q}_b\right)$ (3.49)

where $\mathbf{q}_d$ and $\mathbf{q}_b$ are $3\times 3$ diagonal matrices, whose elements are the noise PSDs of the gyros and accelerometers in the IMU, respectively. Using Eq. (3.46), the solution of the integral in Eq. (3.48) can be approximated by the


following expansion:

$\mathbf{Q}_k = E\left[\mathbf{w}_k\,\mathbf{w}_k^{T}\right] \approx \mathbf{Q}_G\,\Delta t + \left(\mathbf{F}\mathbf{Q}_G + \mathbf{Q}_G\mathbf{F}^{T}\right)\frac{\Delta t^{2}}{2} + \left(\mathbf{F}^{2}\mathbf{Q}_G + 2\,\mathbf{F}\mathbf{Q}_G\mathbf{F}^{T} + \mathbf{Q}_G(\mathbf{F}^{T})^{2}\right)\frac{\Delta t^{3}}{6} + \left(\mathbf{F}^{3}\mathbf{Q}_G + 3\,\mathbf{F}^{2}\mathbf{Q}_G\mathbf{F}^{T} + 3\,\mathbf{F}\mathbf{Q}_G(\mathbf{F}^{T})^{2} + \mathbf{Q}_G(\mathbf{F}^{T})^{3}\right)\frac{\Delta t^{4}}{24}$ (3.50)

where

$\mathbf{Q}_G = \mathbf{G}\,\mathbf{Q}\,\mathbf{G}^{T}$ (3.51)
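Eqs. (3.46) and (3.50) are evaluated numerically in practice. The sketch below is an illustration, not the thesis code: it truncates the power series for the transition matrix and keeps only the first two terms of the process-noise expansion, which is usually sufficient for a short sampling interval:

```python
import numpy as np

def discretize(F, G, q_psd, dt, order=4):
    """Truncated-series discretization of Eqs. (3.46) and (3.50).

    F: dynamic matrix, G: input matrix,
    q_psd: PSD matrix Q = diag(q_d, q_b) of the white-noise input u.
    """
    n = F.shape[0]
    # Transition matrix: Phi = I + F dt + F^2 dt^2/2! + ...   (Eq. 3.46)
    Phi, term = np.eye(n), np.eye(n)
    for k in range(1, order + 1):
        term = term @ (F * dt) / k
        Phi = Phi + term
    # Process noise: first two terms of Eq. (3.50)
    QG = G @ q_psd @ G.T                                      # Eq. (3.51)
    Qk = QG * dt + (F @ QG + QG @ F.T) * dt**2 / 2
    return Phi, Qk
```

For a nilpotent F (such as the position/velocity sub-block), the series terminates exactly after a few terms.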

In a similar way, the measurement model Eq. (3.44) can be converted into the discrete form

$\mathbf{z}_{k+1} = \mathbf{H}_{k+1}\,\mathbf{x}_{k+1} + \mathbf{v}_{k+1}$ (3.52)

where $\mathbf{H}_{k+1}$ is the design matrix of the system at time $t_{k+1}$, and $\mathbf{v}_{k+1}$ is the measurement noise at time $t_{k+1}$, with covariance matrix $\mathbf{R}_{k+1}$. Here we use the double-differenced phase and code pseudoranges in the simulations, and we assume that the baseline is short so that the atmospheric errors are negligible.

3.3.2 Differenced GPS observation equations

In the case of differential or relative positioning, at least two GPS receivers measure pseudoranges to a set of common satellites simultaneously. Let two receivers at points A and B observe a satellite s, where point A is a known reference point. The code and phase observation equations can be written as:

$\Phi_A^{s}(\tilde t_A) = \rho_A^{s}(t_A, t^{s}) + c\,\delta t_A(\tilde t_A) - c\,\delta t^{s}(\hat t^{s}) + \lambda N_A^{s}$
$\Phi_B^{s}(\tilde t_B) = \rho_B^{s}(t_B, t^{s}) + c\,\delta t_B(\tilde t_B) - c\,\delta t^{s}(\hat t^{s}) + \lambda N_B^{s}$ (3.53)
$P_A^{s}(\tilde t_A) = \rho_A^{s}(t_A, t^{s}) + c\,\delta t_A(\tilde t_A) - c\,\delta t^{s}(\hat t^{s})$
$P_B^{s}(\tilde t_B) = \rho_B^{s}(t_B, t^{s}) + c\,\delta t_B(\tilde t_B) - c\,\delta t^{s}(\hat t^{s})$

where $\Phi_A^{s}(\tilde t_A)$ and $P_A^{s}(\tilde t_A)$ are the phase observation (expressed in units of length) and the pseudorange observed at time $\tilde t_A$


by receiver A to satellite s; $\tilde t_A$ is the nominal time of the signal reception measured by the clock of receiver A; $\tilde t^{s}$ is the nominal time of signal transmission measured by the clock of satellite s. We can omit the time $t$ for simplicity, since all observables are referred to the same time instance, i.e.:

$P_A^{s} = \rho_A^{s}(t_A, t^{s}) + c\,\delta t_A - c\,\delta t^{s} + \varepsilon_{tA}$
$\Phi_A^{s} = \rho_A^{s}(t_A, t^{s}) + c\,\delta t_A - c\,\delta t^{s} + \lambda N_A^{s} + \varepsilon_{tA}$ (3.54)

To reduce the atmospheric effects that are common to both stations, it is useful to form differenced equations. The single difference is obtained by differencing the equations of the two receivers to one satellite:

$\Phi_{AB}^{s} = \Phi_B^{s} - \Phi_A^{s} = \rho_{AB}^{s} + c\,\delta t_{AB} + \lambda N_{AB}^{s}$
$P_{AB}^{s} = P_B^{s} - P_A^{s} = \rho_{AB}^{s} + c\,\delta t_{AB}$ (3.55)

where

$\rho_{AB}^{s} = \rho_B^{s} - \rho_A^{s} = \rho_{B,0}^{s} + a_{XB}^{s}\,\delta X_B + a_{YB}^{s}\,\delta Y_B + a_{ZB}^{s}\,\delta Z_B - \rho_A^{s}$
$\delta t_{AB} = \delta t_B - \delta t_A$ (3.56)
$N_{AB}^{s} = N_B^{s} - N_A^{s}$

In a similar way, it is possible to form double differences between two single differences. Let two receivers A and B observe simultaneously two satellites s and q. For each of the satellites, a pair of single difference equations similar to Eq. (3.55) can be formed as

$\Phi_{AB}^{s} = \rho_{AB}^{s} + c\,\delta t_{AB} + \lambda N_{AB}^{s}$
$\Phi_{AB}^{q} = \rho_{AB}^{q} + c\,\delta t_{AB} + \lambda N_{AB}^{q}$ (3.57)
$P_{AB}^{s} = \rho_{AB}^{s} + c\,\delta t_{AB}$
$P_{AB}^{q} = \rho_{AB}^{q} + c\,\delta t_{AB}$


Taking differences between the single differences of two satellites, we get the following double difference equations:

$\Phi_{AB}^{sq} = \Phi_{AB}^{q} - \Phi_{AB}^{s} = \rho_{AB}^{sq} + \lambda N_{AB}^{sq}$
$P_{AB}^{sq} = P_{AB}^{q} - P_{AB}^{s} = \rho_{AB}^{sq}$ (3.58)

where

$\rho_{AB}^{sq} = \rho_{AB}^{q} - \rho_{AB}^{s}$
$N_{AB}^{sq} = N_{AB}^{q} - N_{AB}^{s}$ (3.59)

From Eq. (3.58) we can see that the receiver clock errors $\delta t_{AB}$ are eliminated, which is the most important feature of the double differences. Taking into account the linearized topocentric distance, the observation equations (3.58) can be written in linear form as:

$\Phi_{AB}^{sq} = \rho_{AB,0}^{sq} + a_{XB}^{sq}\,\delta X + a_{YB}^{sq}\,\delta Y + a_{ZB}^{sq}\,\delta Z + \lambda N_{AB}^{sq}$

and

$P_{AB}^{sq} = \rho_{AB,0}^{sq} + a_{XB}^{sq}\,\delta X + a_{YB}^{sq}\,\delta Y + a_{ZB}^{sq}\,\delta Z$ (3.60)

where the pseudorange is linearized to:

$\rho_A^{s}(t_A, t^{s}) = \rho_{A,0}^{s}(t_A, t^{s}) + \frac{\partial \rho_A^{s}}{\partial X_A}\,\delta X_A + \frac{\partial \rho_A^{s}}{\partial Y_A}\,\delta Y_A + \frac{\partial \rho_A^{s}}{\partial Z_A}\,\delta Z_A$ (3.61)

where

$a_X^{s} = \frac{\partial \rho_A^{s}}{\partial X_A} = \frac{X_A^{0} - X^{s}(\tilde t_A)}{\rho_{A,0}^{s}}, \quad a_Y^{s} = \frac{\partial \rho_A^{s}}{\partial Y_A} = \frac{Y_A^{0} - Y^{s}(\tilde t_A)}{\rho_{A,0}^{s}}, \quad a_Z^{s} = \frac{\partial \rho_A^{s}}{\partial Z_A} = \frac{Z_A^{0} - Z^{s}(\tilde t_A)}{\rho_{A,0}^{s}}$
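The bookkeeping behind the single and double differences of Eqs. (3.55) and (3.58) is straightforward; the helper below is a hypothetical sketch (function name and dictionary layout are illustrative, not the thesis code):

```python
def double_difference(P_A, P_B, ref_sat, sats):
    """Form double-differenced pseudoranges P_AB^{sq} as in Eq. (3.58).

    P_A, P_B: dicts {satellite: pseudorange} for receivers A and B;
    ref_sat: reference satellite s; sats: the other satellites q.
    """
    sd_ref = P_B[ref_sat] - P_A[ref_sat]   # single difference, Eq. (3.55)
    dd = {}
    for q in sats:
        sd_q = P_B[q] - P_A[q]             # single difference for satellite q
        dd[q] = sd_q - sd_ref              # double difference, Eq. (3.58)
    return dd
```

With n common satellites this yields n-1 double differences, all relative to the chosen reference satellite.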


3.3.3 System observation equations

In practice, there is always an offset (also called lever arm) between the GPS antenna and the IMU, as shown in Fig. 3.7, which therefore needs to be compensated for. We assume that this offset does not change with time during the navigation period.


Fig. 3.7 The offset between GPS antenna and IMU

Let us denote the vector from the IMU to the GPS antenna expressed in the body coordinate system as $\mathbf{o}_G^{b}$, the position vector of the antenna as $\mathbf{p}_G$ and the position vector of the IMU as $\mathbf{p}_i$. Then the offset $\mathbf{o}_G^{b}$ between the GPS antenna and the IMU can be computed by:

$\mathbf{o}_G^{b} = \mathbf{R}_e^{b}\,\mathbf{o}_G^{e} = \mathbf{R}_e^{b}\left(\mathbf{p}_G^{e} - \mathbf{p}_i^{e}\right)$ (3.62)

If vector $\mathbf{y}$ contains the coordinates of the GPS antenna determined by GPS, then the GPS observation equation is given as follows:

$\mathbf{y} = \hat{\mathbf{p}}_G^{e} + \boldsymbol{\delta} = \hat{\mathbf{p}}_i^{e} + \hat{\mathbf{R}}_b^{e}\,\mathbf{o}_G^{b} + \boldsymbol{\delta}$ (3.63)

where $\boldsymbol{\delta}$ denotes the residual. Taking into account the approximation $\hat{\mathbf{R}}_b^{e} = (\mathbf{I} - \mathbf{E}^{e})\,\mathbf{R}_b^{e}$, the observation vector which is used in the Kalman filter is:


$\mathbf{y} = \mathbf{p}_i^{e} + \delta\mathbf{p}_i^{e} + (\mathbf{I} - \mathbf{E}^{e})\,\mathbf{R}_b^{e}\,\mathbf{o}_G^{b} + \boldsymbol{\delta}$ (3.64)

$\mathbf{L} = \mathbf{y} - \mathbf{p}_i^{e} - \mathbf{R}_b^{e}\,\mathbf{o}_G^{b} = \delta\mathbf{p}_i^{e} + \mathbf{o}_G^{e}\times\boldsymbol{\varepsilon}^{e} + \boldsymbol{\delta}$ (3.65)

where $\boldsymbol{\varepsilon}^{e} = \begin{bmatrix} \varepsilon_X^{e} & \varepsilon_Y^{e} & \varepsilon_Z^{e} \end{bmatrix}^{T}$ contains the misalignment angles in the e-frame. The vector product can be transferred to the skew-symmetric operation (see Appendix C), i.e.

$\mathbf{o}_G^{e}\times \;\rightarrow\; \begin{bmatrix} 0 & -o_z^{e} & o_y^{e} \\ o_z^{e} & 0 & -o_x^{e} \\ -o_y^{e} & o_x^{e} & 0 \end{bmatrix}$

In fact, Eq. (3.65) is in the form $\mathbf{L} = \boldsymbol{\varepsilon} + \mathbf{A}\mathbf{X}$, if one regards the first term on the right side, $\delta\mathbf{p}_i^{e}$, as $\boldsymbol{\varepsilon}$ and the second term as $\mathbf{A}\mathbf{X}$. Note that the misalignment angle $\boldsymbol{\varepsilon}^{e}$ is part of the state vector $\mathbf{X}$. With the observation equation given by Eq. (3.65), we can obtain the design matrix for the loose integration in Eq. (3.44), where the state vector is given by Eq. (3.34):

$\mathbf{H}_{\mathrm{Loose}} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & -o_z^{e} & o_y^{e} \\ 0 & 1 & 0 & 0 & 0 & 0 & o_z^{e} & 0 & -o_x^{e} \\ 0 & 0 & 1 & 0 & 0 & 0 & -o_y^{e} & o_x^{e} & 0 \end{bmatrix}$ (3.66)

In tight integration, the double differences given in Eq. (3.61) are the observations from the GPS receiver, instead of the direct positioning results in loose integration. Therefore the relation between double differences and the positioning errors, i.e. Eq. (3.61), is applied. In order to obtain the design matrix for the tight integration, we rewrite Eq. (3.61) in matrix form

$\boldsymbol{\rho}_A^{s}(t_A) - \boldsymbol{\rho}_{A,0}^{s}(t_A) = \mathbf{A}\mathbf{X}$ (3.67)

where $\mathbf{A} = \begin{bmatrix} a_X^{s} & a_Y^{s} & a_Z^{s} \end{bmatrix}$ and $\mathbf{X} = \begin{bmatrix} \delta X & \delta Y & \delta Z \end{bmatrix}^{T}$.

The left side of Eq. (3.67) is the observation vector for the tight integration, as that in Eq. (3.65), i.e. $\mathbf{L} = \boldsymbol{\rho}_A^{s}(t_A) - \boldsymbol{\rho}_{A,0}^{s}(t_A)$. Multiplying both sides of Eq. (3.65) by the design matrix $\mathbf{A}$, we get


$\mathbf{A}\mathbf{L} = \mathbf{A}\,\delta\mathbf{p}_i^{e} + \mathbf{A}\left(\mathbf{o}_G^{e}\times\boldsymbol{\varepsilon}^{e}\right) = \mathbf{A}\mathbf{X} + \mathbf{A}\left(\mathbf{o}_G^{e}\times\boldsymbol{\varepsilon}^{e}\right)$ (3.68)

From Eq. (3.68), we obtain the design matrix for the tight integration

$\mathbf{H}_{\mathrm{Tight}} = \begin{bmatrix} \mathbf{A} & \mathbf{0} & \mathbf{A}\left(\mathbf{o}_G^{e}\times\right) \end{bmatrix}$ (3.69)

Please note that each of the 3 sub-matrices in Eq. (3.69) has the dimension $3\times 3$.

3.3.4 Implementation structure of loose integration

To summarize all components discussed above in the GPS/IMU loose integration, the overall integrated system is illustrated in Fig. 3.8. Each component block of the system corresponds to one of the sub-sections above. We have only briefly presented the model and related equations for the integrated system in Section 3.3; a more detailed derivation can be found in the corresponding references. This integrated navigation system implementation will be used for the tests in Section 3.4.

Fig. 3.8 Structure of loose GPS/INS integration

(Blocks in the diagram: the IMU with IMU calibration and an IMU error model feeding a navigation processor, a GPS receiver (RTK) and a Kalman filter; the outputs are position, velocity and attitude.)


3.4. Tests and analysis

In the previous sections, we presented a detailed description of the integration system model and the integration structure. Based on these models, we have carried out a series of simulations and tests.

3.4.1 Test setup and detailed steps

In the real positioning experiments, we use the same MEMS-based IMU as introduced in Chapter 2, with an update frequency of 100 Hz. On the other hand, we use the double-difference observables from a GPS receiver with an update frequency of 1 Hz, which means that during the interval between two consecutive GPS updates there are 100 INS updates. This provides a convenient approach to interpolate the navigation solutions in high dynamic situations. The integration algorithm flowchart is shown in Figs. 3.9 and 3.10, including the Zero Velocity Update (ZVUP) initialization, the INS navigation updates and the integrated navigation updates. During the ZVUP initialization stage, we know that the IMU is static in the e-frame, i.e. the velocity and rotation rate should be zero. Therefore the 3D velocities in the horizontal plane and along the vertical direction should be just measurement noise with small standard errors, i.e.

$v_X = 0, \quad v_Y = 0, \quad v_Z = 0$

The static angular velocity estimate due to the Earth rotation and the misalignment can be given as

$\hat{\boldsymbol{\omega}}_{eb}^{b} = \hat{\boldsymbol{\omega}}_{ib}^{b} - \hat{\mathbf{R}}_e^{b}\,\boldsymbol{\omega}_{ie}^{e}; \qquad \hat{\mathbf{R}}_e^{b} = \mathbf{R}_e^{b}\left(\mathbf{I} + \mathbf{E}^{e}\right)$ (3.70)

$\hat{\boldsymbol{\omega}}_{eb}^{b} = \boldsymbol{\omega}_{ib}^{b} + \mathbf{d} - \mathbf{R}_e^{b}\left(\mathbf{I} + \mathbf{E}^{e}\right)\boldsymbol{\omega}_{ie}^{e}$ (3.71)

With this fact, we can estimate the initial position as well as the attitude and constrain the influences of the IMU sensor noises. The whole GPS/INS integration can be divided into two main parts: the initial part and the dynamic process part, which are shown in Figs. 3.9 and 3.10, respectively. During the initial stage the vehicle stays still for a period of time, to gather enough information to determine the initial orientation and the sensor noise biases before the moving session. We record all the static data from both the IMU and the GPS receiver until


the vehicle starts to move. By means of these static data, the ZVUP is carried out and the initial position and orientation are computed as the starting state for the following moving session (see Sections 3.2.3 and 3.2.4).
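The static averaging and coarse levelling of the initialization stage can be sketched as follows. This is a hypothetical helper, not the thesis implementation; the sign conventions assume a roughly level IMU with the z-axis pointing up, and the mean gyro output absorbs the Earth rotation together with the gyro drift d of Eq. (3.41):

```python
import numpy as np

def zvup_init(acc_static, gyro_static):
    """Coarse initial alignment from static data.

    acc_static, gyro_static: (N, 3) arrays of specific force and angular
    rate logged while the vehicle is still.
    """
    f_mean = acc_static.mean(axis=0)           # mean specific force
    w_mean = gyro_static.mean(axis=0)          # Earth rate + gyro drift
    # Coarse levelling: gravity leaks into the x/y axes when the IMU is tilted
    roll = np.arctan2(f_mean[1], f_mean[2])
    pitch = np.arctan2(-f_mean[0], np.hypot(f_mean[1], f_mean[2]))
    return f_mean, w_mean, roll, pitch
```

The yaw angle is not observable from gravity alone, which is why the azimuth must be refined later when the vehicle accelerates or turns.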

Fig. 3.9 Flowchart of GPS/INS integration initialization algorithm

The events in the flowcharts are defined as follows (here we only care about the three events related to the LVN): EVENT 0 - start of static data (initialization); EVENT 5 - start of dynamic progress data (motion progress of different missions); EVENT 4 - end of session. During the dynamic progress, each time we read the IMU observations we perform an INS navigation update in the e-frame. After every 100 INS navigation updates, we also obtain one GPS difference observation, and then we perform the integrated navigation update with the EKF until the end of the session. If no GPS observation is available, this means a GPS signal gap; in such cases, only the INS navigation update is performed, as shown in Fig. 3.10.
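The 100:1 scheduling of INS and GPS updates described above can be sketched as follows. This is a hypothetical illustration, not the thesis code; `ins_update` and `ekf_update` stand for the INS mechanization and the EKF measurement update and are assumed to be provided elsewhere:

```python
def navigation_loop(imu_stream, gps_stream, ins_update, ekf_update):
    """Run one INS update per 100 Hz IMU sample; fuse at each 1 Hz GPS epoch."""
    for k, imu_sample in enumerate(imu_stream):
        ins_update(imu_sample)          # INS mechanization at 100 Hz
        if (k + 1) % 100 == 0:          # one GPS epoch per 100 IMU samples
            gps_obs = next(gps_stream, None)
            if gps_obs is not None:     # None models a GPS signal gap:
                ekf_update(gps_obs)     # skip the fusion, the INS free-runs
    # a real implementation would also handle EVENT 4 (end of session) here
```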



Fig. 3.10 Flowchart of GPS/INS integration dynamic process

3.4.2 Error of tight integration on simulated observations

Using GPS double-difference phase and code pseudorange observations and IMU observations at the raw data level, we simulated and analysed the standard errors of both position and orientation (see Eqs. (3.29) and (3.32)). To show the tight integration under realistic conditions, we use the simulated flight trajectory data from Horemuz and Sjöberg (2001) as shown in Fig. 3.11. The GPS/INS system is mounted on an aeroplane, which first accelerates and climbs up to 500 m height, and then it makes a horizontal loop. After three 90° turns, it starts to land and finally stops close to the starting point. This whole flight takes 680 seconds. During the flight we simulate a GPS signal outage of 40 seconds to all satellites. Hence, only the INS updates the position and attitude during this interval. First we obtain the true positions along the whole flight trajectory, and then random errors are added to the sensor outputs.


Fig. 3.11 Test flight path (from Horemuz and Sjöberg, 2001)

Here we only present the navigation performance in the n-frame in two typical situations: the position and orientation errors of the INS with 4 and with 2 observable satellites, i.e. INS + 4 satellites and INS + 2 satellites. For other situations and a detailed discussion, we refer to Horemuz and Sjöberg (2001).

Fig. 3.12 Position errors of INS + 4 satellites with 40s GPS outage


Fig. 3.13 Orientation errors of INS + 4 satellites with 40s GPS outage

From Figs. 3.12 and 3.13, we can see that the different manoeuvres during the whole flight only slightly affect the position errors, but they are important for yaw determination, since the azimuth becomes observable only in the presence of accelerations other than gravity, i.e. when the vehicle accelerates or turns (Horemuz and Sjöberg, 2001). Around 300 seconds there is the period of GPS signal outage, and from Fig. 3.12 we can draw the conclusion that the position coordinate errors grow quite rapidly during that period. Once the satellites are tracked again, the position errors drop down to the normal values as before. However, this gap does not significantly influence the orientation errors, which remain at a low level throughout the whole flight (see Fig. 3.13).


Fig. 3.14 Position errors of INS + 2 satellites with 40s GPS outage

Fig. 3.15 Orientation errors of INS + 2 satellites with 40s GPS outage


As we can see from Figs. 3.14 and 3.15, around the GPS signal gap at about 300 seconds the position errors grow significantly faster when the number of observed satellites decreases to 2, but the orientation errors can still be kept at a relatively low level. Comparing Fig. 3.15 with Fig. 3.13, the orientation errors only increase by 10-30 arc seconds when going from 4 to 2 satellites. This is one representation of the benefits of tight integration: even with fewer than 4 satellites, the GPS information can still contribute to the integration system, especially in keeping the orientation precision.

3.4.3 Positioning of loose integration on real observations

Next we carried out the following data processing based on real experiments in the scenario of urban area navigation. The raw data were obtained from a 3D surveying application for about 10.5 minutes in total, with the GPS/INS setup on a vehicle. Among the data there are several small GPS signal gaps from time to time, and one large GPS signal gap of about 2 minutes. The GPS/INS setup in the experiments is shown in Fig. 3.16. The IMU used in the experiment is the IMU-FSAS from NovAtel Inc., which is a tactical grade (lowest accuracy grade) IMU.

Fig. 3.16 Setup of GPS antenna and IMU (Courtesy Visimind AB)


First of all, we carried out the EKF loose integration in the e-frame with the IMU observations and the GPS differenced positioning coordinates. The navigation performance is shown in Figs. 3.17-3.19 (all final results in the following figures are transformed back to the n-frame to better illustrate their physical meanings). Based on the same raw data and the loose integration, we analyse the physical constraints (i.e. velocity constraint and height constraint) added to the EKF, as well as the influence of simulated gross errors on the GPS coordinates, in order to test the actual navigation performance of the implementation described in Section 3.3.

Fig. 3.17 Position errors with EKF (The main GPS signal gap occurs roughly between 6.0 and 8.1 in the time legend.)


Fig. 3.18 Velocity errors with EKF

Since the IMU update frequency is 100 Hz, the time interval between two consecutive data points is 10 ms in Figs. 3.17-3.19, and the whole moving session takes 632.46 seconds. From the beginning, the velocity standard error starts from zero due to the ZVUP, as shown in Fig. 3.18. There are several small perturbations of the velocity and position standard errors because of the small GPS signal gaps caused by occasional obstacles from the environment. As we can see from Figs. 3.17 and 3.18, the position and velocity errors grow significantly when there is a long GPS signal gap. After the gap they drop down to the normal values, as in the same situation with the tight integration. From Fig. 3.19, we can see that the orientation errors decrease from their initial values as time goes by during the session, and only grow by about 5-10 arc seconds during the long GPS gap. Even a large GPS gap does not affect the orientation error much in loose integration.


Fig. 3.19 Orientation errors with EKF

From the three figures above, we can see that the error curves are quite smooth in the beginning, when the GPS signal outage periods are also quite small, except for the main one. Then, during the large GPS signal outage, the position and velocity errors go up significantly, while the orientation errors go up relatively little.

3.4.4 Analysis on Kalman filter performance

An important phenomenon with the Kalman filter is the problem of estimation divergence. As is known, the EKF is just a first-order approximation of the actual nonlinear system model. Moreover, we only assume that there is a random constant bias and drift in each of the IMU sensors. From Chapter 2 we can see that this is far from the truth, and this assumption is somewhat too simple. Another point is that the IMU sensor noise changes with time due to its stochastic character and changes of environment and temperature. For these reasons the EKF navigation may diverge if there are frequent and long periods of GPS signal outages. In such cases we can only rely on the simple dynamic model and the inaccurate noise information, and the EKF may not adjust the parameters fast and accurately enough. As a result, the errors may accumulate. If the error grows beyond some limit, it may cause numerical problems and lead to divergence. For the real-time data set introduced in Section 3.4.3, we only obtained the recorded IMU output data after the experiment, without accurate sensor noise parameters. Therefore


the EKF loose integration is carried out in the e-frame with empirical parameters. The position errors for the first minute and the first 150 seconds are shown in Figs. 3.20 and 3.21, respectively. The three components of the position error in Figs. 3.20 and 3.21 refer to the three-dimensional position errors within the state vector, i.e. $\delta\mathbf{r}^{e} = \begin{bmatrix} \delta x_i & \delta y_i & \delta z_i \end{bmatrix}^{T}$ in Eq. (3.34). Since we do not know the true positions of the trajectory and the GPS solutions are based on differenced phase observables, we can roughly regard the GPS solutions as the true positions, and the three-dimensional position errors from the EKF output can be considered as the estimated position errors of the loose integration. Please note that all position and orientation errors in Figs. 3.17-3.21 are transformed into the n-frame for clarity and direct viewing of their physical meanings.

Fig. 3.20 Estimated positioning errors within 1 minute


Fig. 3.21 Estimated positioning errors within 150 seconds

From Fig. 3.20 we can see that within one minute the EKF successfully manages to produce reasonable results. There is a gap of 4 seconds, followed by 2 seconds with only code observations, from the 41st epoch. Then there is another 4-second gap from the 49th epoch. During the gaps, the position errors grow quickly, even reaching up to 4 metres within 4 seconds. We can also notice that the position errors during the second gap accumulate on top of those from the former one, since the two gaps are quite close to each other in time (there is no Kalman filter update during the gaps). From Fig. 3.21, we can see that due to the many GPS gaps the position errors grow quickly. The position errors along the east and down directions still remain reasonable, but the north direction error degrades faster than the other two. During the following period of time, the EKF continues to degrade due to some numerical problems. The results are not given here, but the Kalman filter divergence analysis and possible improvements on this key problem can be a topic of further research. As a result, during the following tests we omitted the long period of GPS signal outage and only tested on the real data with short GPS signal outages, i.e. the first minute of the data. In practice, this does not influence the real navigation significantly, because once there is no GPS signal we can only trust the navigation solution of the MEMS-based IMU for dozens of seconds, and once the GPS signal is reacquired we can trust the positioning result. As for the gaps during the navigation


route, some backward smoothing algorithm can be used (e.g. RTS smoother; Godha and Cannon, 2005 and 2007).

3.5 Summary

In this chapter, the following content and conclusions are included:

- Three levels of GPS/IMU integration structures are introduced with their advantages and disadvantages.
- GPS code and phase observations with accuracy analysis are introduced.
- INS navigation equations and error dynamics in the e-frame are presented.
- The EKF is introduced and implemented for the integrated navigation system.
- Loose and tight integration modelling and the detailed implementation process with the EKF are described.
- Variance propagation analysis of tight integration is carried out on flight simulation data with 2 and 4 observable GPS satellites. We can see that even with less than 4 satellites, the GPS information can still contribute to the integration system, especially on keeping the orientation precision.
- EKF performance analysis of loose integration is based on real application data. We do not have accurate sensor noise parameters, and therefore during GPS signal gaps the position errors grow quickly. Once there is no GPS signal, we can only trust the navigation solution of the MEMS sensors within dozens of seconds.


Chapter 4

Vision aided map matching

4.1 Map matching and existing problems

Nowadays most GPS receivers work with a digital map in different applications, and hence map matching (MM) is a practical positioning method in addition to the GPS and IMU sensors. A map-matching algorithm can be used as a key component to improve the performance of a navigation system. MM is aimed at determining the vehicle location on a road, given a map and a series of positioning results. During the last two decades, many research papers on MM have appeared in the literature, ranging from simple search techniques to complicated mathematical methods.

MM algorithms integrate positioning data with spatial road network data (roadway centrelines) to identify the correct link on which a vehicle is travelling and to determine the location of a vehicle on that link. In short, MM algorithms are used to determine the location of a vehicle on a road segment. Approaches for MM algorithms can be categorised into four groups: geometric, topological, probabilistic, and other advanced techniques, e.g. EKF and Fuzzy Logic. For a more detailed literature review on MM we refer to Quddus et al. (2003) and (2007).

In MM, the geometric and topological algorithms have the basic problem of choosing the correct link when facing a junction, where the GPS heading information is inaccurate due to the low speed. It is quite common that the vehicle has to decelerate and stop in front of the traffic lights at a street junction. During this waiting time, MM algorithms may give wrong link information due to the IMU drift errors and the inaccurate GPS heading.

It would be convenient if one could solve this problem by adding the image information by means of a camera to provide reliable vehicle motion information in real time. Nowadays different kinds of cameras are widely used in various application fields, and their cost is usually rather low. In this chapter we aim at detecting the lines in the images in order to determine whether the vehicle is within the correct lane. The line segments detected in consecutive images can also be used for determination of vehicle’s rotation. With the reliable rotation information from the real time images, we could solve the link-choosing problem in MM mentioned above.

Another problem is that our application adopts low-cost vehicle navigation with cheap IMU sensors combined with a GNSS receiver. The azimuth determined by this combination has a large drift error when the car is not accelerating, which can greatly degrade the navigation accuracy. Therefore, lane detection from real-time images can also be used to estimate the rotation of the platform between images, which is an important support for GNSS/IMU systems. A detailed implementation and discussion of camera motion estimation is given in the next chapter.

4.2 Lane detection

Here we solve the link-choosing problem in MM by means of real-time images from a camera on the vehicle. The road edges and lanes can be detected and tracked in real time from the images, so that we can verify whether the vehicle has turned into a new link or not. Such a lane detection system is also a crucial component of a Driver Assistance System (DAS), which can provide Lane Departure Warning (LDW) (Horemuz, 2008).

Lane detection mainly deals with image processing techniques, and basically there are two classes of approaches for lane detection, i.e. feature-based and model-based techniques. The feature-based techniques detect the lanes from the images by detecting features like road markers or road edges. These road features are detected by traditional image segmentation. Obviously the feature-based approach relies on the availability of image features, and therefore it may suffer from occlusions or image noise in different situations.

In contrast, the model-based techniques are robust against noise and occlusions, with lanes modelled as straight lines or parabolic curves. However, most lane models focus only on certain road shapes, and thus lack the flexibility to model different road shapes. For a more detailed literature review on lane detection we refer to Wang et al. (2004) and McCall & Trivedi (2006). For our purpose of choosing the correct link for MM, the feature-based techniques fulfil the requirements and are easier to implement. Feature-based lane detection comprises three main steps: pre-processing, edge detection and line detection, as described below.

4.2.1 Pre-processing

An image can be defined as a two-dimensional intensity function f(x, y), where x and y are spatial (plane) coordinates. The amplitude of f at the coordinates (x, y) is called the intensity or grey level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, the image is called a digital image. In the following, we focus on the processing and line detection of digital images.


Region of Interest

For vehicle navigation, the road and lanes start from the bottom left or right corner of the images, while the top part of the images usually contains buildings, trees and other objects, which are of no interest for lane detection and may even cause mistakes. Therefore we choose the bottom part of the images as the Region of Interest (ROI) for lane detection, to speed up the image processing algorithm and to reduce the risk of mistakes in lane detection.
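The ROI selection amounts to a simple crop of the lower image rows. A minimal Python sketch (the thesis tests used Matlab; the function name and the 50 % fraction here are illustrative assumptions):

```python
# Sketch of ROI selection for lane detection: keep only the bottom part of
# the image, where the road and lanes appear. The image is represented as a
# list of pixel rows (grey levels); names and fraction are assumptions.

def bottom_roi(image, fraction=0.5):
    """Return the bottom `fraction` of the image rows as the ROI."""
    rows = len(image)
    start = rows - int(rows * fraction)   # index of the first row kept
    return image[start:]

# Toy 6x4 "image": top rows play the role of sky/buildings, bottom rows the road.
img = [[10, 10, 10, 10],
       [20, 20, 20, 20],
       [30, 30, 30, 30],
       [40, 40, 40, 40],
       [50, 50, 50, 50],
       [60, 60, 60, 60]]
roi = bottom_roi(img, fraction=0.5)
print(len(roi))     # 3  (only the bottom half is processed further)
print(roi[0][0])    # 40
```

Besides the speed-up, the crop also prevents lines on buildings or trees in the upper image from ever entering the Hough voting stage.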

Smoothing operation

With a cheap camera image sensor, the real-time images usually contain much noise of random intensity (e.g. salt-and-pepper noise), which would cause trouble when detecting edges and lines. Therefore the images should be smoothed before further processing. There are two main kinds of filtering operations: spatial and frequency domain filtering. Filtering in the frequency domain first performs the Fourier transform of the image, then applies the filtering operation, and finally performs the inverse Fourier transform to restore the image to the spatial domain. Filtering in the spatial domain operates directly on the image neighbourhood data.

There are different filters for image smoothing in the spatial domain, including mean, median and Gaussian filters. The principle of spatial filtering is shown in Fig. 4.1; it is in fact a 2D convolution operation. Convolution is one of the most important and fundamental concepts in signal processing and analysis. For more detailed information on filtering and convolution, we refer to the digital signal processing literature, e.g. Gonzalez and Woods (2002). The spatial filtering process consists of moving the filter mask from point to point in an image. At each point the response of the filter is calculated using a predefined relationship. For linear spatial filtering, the response is given by a sum of products of the filter mask coefficients and the corresponding image pixels in the filter mask area. For the 3 × 3 filter mask shown in Fig. 4.1, the response of the linear filtering at point (x, y) in the image is given by

g(x, y) = w(-1, -1) f(x-1, y-1) + w(-1, 0) f(x-1, y) + ...
          + w(0, 0) f(x, y) + ...
          + w(1, 0) f(x+1, y) + w(1, 1) f(x+1, y+1)    (4.1)

Please note that the mask coefficient w(0, 0) coincides with the image point f(x, y), which means that the filter mask is centred at (x, y) when the sum-of-products operation is carried out. The filter mask is usually chosen with odd sizes, of which the smallest size is 3 × 3. In general, the linear filtering response of an M × N image with an m × n filter mask is given by

g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x+s, y+t)    (4.2)

where a = (m-1)/2 and b = (n-1)/2. It is clear that when m = n = 3, Eq. (4.2) reduces to the example given in Fig. 4.1.
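Eq. (4.2) can be realised directly as a nested loop over the mask. The following Python fragment is an illustrative stand-in for the Matlab filtering routines used in the tests (all names are assumptions); border pixels are simply skipped rather than padded:

```python
# Direct implementation of Eq. (4.2): g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t).
# f is a list of pixel rows; w is an m x n mask with m, n odd.

def spatial_filter(f, w):
    m, n = len(w), len(w[0])
    a, b = (m - 1) // 2, (n - 1) // 2       # a = (m-1)/2, b = (n-1)/2
    rows, cols = len(f), len(f[0])
    g = [[0.0] * cols for _ in range(rows)]
    for x in range(a, rows - a):            # interior pixels only (no padding)
        for y in range(b, cols - b):
            acc = 0.0
            for s in range(-a, a + 1):
                for t in range(-b, b + 1):
                    acc += w[s + a][t + b] * f[x + s][y + t]
            g[x][y] = acc
    return g

# 3x3 mean filter on a toy image: an isolated spike is averaged over 9 pixels.
mean_mask = [[1 / 9] * 3 for _ in range(3)]
f = [[0, 0, 0],
     [0, 9, 0],
     [0, 0, 0]]
g = spatial_filter(f, mean_mask)
print(round(g[1][1], 6))   # 1.0
```

Replacing `mean_mask` with a Gaussian or median rule gives the other smoothing filters mentioned above.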

Among the spatial domain operations, the Gaussian filter is widely used in image processing to reduce noise and image detail. Mathematically, applying a Gaussian filter to an image is equivalent to convolving the image with a Gaussian function, defined as

f(x) = (1 / (σ √(2π))) e^(-(x-μ)² / (2σ²))    (4.3)

where μ is the mean or expectation, and σ is the standard deviation.

A Gaussian function has the characteristic shape of a symmetric "bell curve". Two important properties of Gaussian functions are:

– The Fourier transform of a Gaussian function is another Gaussian function.

– Applying a Gaussian filter reduces the image's high-frequency components; in other words, a Gaussian filter is a low-pass filter.

Different mask sizes can be adopted; the well-known 3 × 3 or 5 × 5 Gaussian filters can be used for spatial filtering to remove speckles from the real-time images. In our tests, the 3 × 3 Gaussian filter is adopted, and the mask kernel, i.e. the mask coefficients w(s, t) in Fig. 4.1, is given by

            | 1  2  1 |
w = (1/16)  | 2  4  2 |
            | 1  2  1 |

which can also be written as the outer product of the 1D kernel [1/4  2/4  1/4] with itself.
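The 3 × 3 Gaussian mask is separable, i.e. it equals the outer product of the 1D binomial kernel [1/4, 2/4, 1/4] with itself, a property often exploited to speed up filtering (two 1D passes instead of one 2D pass). A quick Python check of this, and of the unit gain of the mask:

```python
# Separability check for the 3x3 Gaussian mask: outer product of the
# 1D kernel [1, 2, 1]/4 with itself reproduces (1/16)[1 2 1; 2 4 2; 1 2 1].

v = [1 / 4, 2 / 4, 1 / 4]                      # 1D binomial kernel
kernel2d = [[vi * vj for vj in v] for vi in v]  # outer product

expected = [[1 / 16, 2 / 16, 1 / 16],
            [2 / 16, 4 / 16, 2 / 16],
            [1 / 16, 2 / 16, 1 / 16]]

print(kernel2d == expected)                    # True
# Coefficients sum to 1, so smoothing preserves the mean image intensity:
print(sum(sum(row) for row in kernel2d))       # 1.0
```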


Fig. 4.1 The principles of spatial filtering (from Gonzalez and Woods 2002, p. 90)

4.2.2 Edge detection

In the field of image processing, the extraction of geometric features from images is one of the basic operations. The most common approach to edge detection is to detect meaningful discontinuities in the intensity values. The discontinuities in an image are generally detected by first- or second-order derivatives. Second-order derivatives are seldom adopted in image edge detection, because they are too sensitive to noise, and their magnitudes cause double edges, which are unsuitable for edge detection. However, these derivatives can be a useful tool when combined with other edge detection methods. The approaches based on first-order derivatives are discussed below.

The gradient of an image f(x, y) is defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ    (4.2)

Therefore the magnitude of the gradient is

∇f = √(Gx² + Gy²) = √((∂f/∂x)² + (∂f/∂y)²)    (4.3)

The computation of the gradient of an image (intensity function) is based on the partial derivatives ∂f/∂x and ∂f/∂y at each pixel location. However, this implementation is not always desirable because of the high computational burden of the square and square-root operations in Eq. (4.3). Therefore, in order to simplify the computation, Eq. (4.3) is usually approximated by

∇f ≈ Gx² + Gy²

or just by using the absolute values

∇f ≈ |Gx| + |Gy|

The main purpose is to find the locations of the discontinuities (i.e. the image coordinates where the intensity changes), rather than to compute the gradient exactly. Therefore, in principle any operator can be adopted here, as long as it approximates the derivative and indicates where the intensity discontinuities occur. For real-time image processing, we must consider the implementation speed and should choose the simplest operator for our purpose. The approximations above are classic operators (Gonzalez and Woods 2002, Sect. 10.1.3), which behave like the derivatives of the image in Eq. (4.2). A fundamental property of the gradient vector is that it points in the direction of the maximum rate of change of the image f at coordinates (x, y), and the direction angle at which this maximum rate of change occurs is

α(x, y) = tan⁻¹(Gy / Gx)    (4.4)


Thus the fundamental problem of edge detection is how to estimate the derivatives in Eq. (4.3) in a digital image. Several common implementations have been proposed to represent the derivatives in a digital image, namely the Sobel, Prewitt, Roberts and Canny operators, etc. (Gonzalez and Woods 2002, pp. 379-393).

Fig. 4.2 Common edge detector masks and the first-order derivatives (from Gonzalez and Woods 2002, p. 387)

Some of these edge detector masks and the corresponding first-order derivatives are shown in Fig. 4.2. The operating procedure is a kind of 2D convolution, shown in Fig. 4.3, similar to the spatial filtering mechanism in Fig. 4.1. The output digital image y[m, n] is obtained by the convolution of the input image x[m, n] with the detector mask h[m, n], as shown in Eq. (4.5), where m and n are the row and column indices.


y[m, n] = x[m, n] * h[m, n] = Σ_j Σ_i x[i, j] h[m-i, n-j]    (4.5)

Fig. 4.3 2D Convolution (from Song, 2005)

The edge detection algorithms operate on the premise that each pixel in a grayscale digital image has a first derivative with respect to the intensity change at that point. If a change above the threshold occurs at a pixel, a white pixel is placed as an edge point in the binary output image; otherwise a black pixel is placed there instead.

Although the Canny detector is considered the most powerful edge detector (Gonzalez and Woods, 2002), it usually produces too many edges in the background scenes, which slows down the processing and more easily leads to mistakes in the subsequent line detection step. Since the lanes and road edges are evident lines in the images, the Sobel edge detector is adopted in this chapter.

The Sobel edge detector is in fact a discrete differentiation operator that computes an approximation of the gradient of the image intensity function. For each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The operator is based on convolving the image with a small filter of integer values in the horizontal and vertical directions, and therefore has a relatively low computational burden. On the other hand, it yields relatively crude edge information; in particular, high-frequency variations in the image are lost (blurred edges).
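To make the mechanics concrete, here is a pure-Python sketch of the Sobel operator applied to a toy step edge (the thesis tests used Matlab's built-in functions; the masks are the standard Sobel masks, while the threshold value is an arbitrary illustration):

```python
import math

# Standard 3x3 Sobel masks (integer coefficients -> cheap convolution).
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # gradient across columns
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # gradient across rows

def sobel_at(f, x, y):
    """Approximate gradient (Gx, Gy) at interior pixel (x, y) of image f."""
    gx = gy = 0
    for s in (-1, 0, 1):
        for t in (-1, 0, 1):
            gx += SOBEL_X[s + 1][t + 1] * f[x + s][y + t]
            gy += SOBEL_Y[s + 1][t + 1] * f[x + s][y + t]
    return gx, gy

# A vertical step edge: dark left half, bright right half.
f = [[0, 0, 100, 100],
     [0, 0, 100, 100],
     [0, 0, 100, 100]]
gx, gy = sobel_at(f, 1, 1)
print(gx, gy)                    # 400 0 : intensity changes only across columns
print(abs(gx) + abs(gy) > 200)   # True  -> mark this pixel as an edge point
print(math.degrees(math.atan2(gy, gx)))   # 0.0 : gradient points across the edge
```

The last two lines correspond to the |Gx| + |Gy| approximation of Eq. (4.3) and the direction angle of Eq. (4.4).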


4.2.3 Line detection

Hough transform

The Hough transform is named after Paul Hough, whose method was patented in 1962 (Hough, 1962). It is a powerful global line detection method, which transforms between the Cartesian image space and a parameter space in which a straight line (or other boundary) can be defined. A straight line in the image space can be given by

y = mx + b    (4.6)

The characteristics of the straight line are represented by the point (b, m) in the parameter space. In the Hough transform, the polar coordinates (r, θ) of the line are adopted to represent the line, as shown in Fig. 4.4.

Fig. 4.4 A straight line with polar coordinates

Then the equation of the straight line can be rewritten as

y = -(cos θ / sin θ) x + r / sin θ    (4.7)


where r is the distance between the line and the origin, and θ is the angle of the vector from the origin to the closest point on the line. It is obvious that the line in Eq. (4.7) is equivalent to

r = x cos θ + y sin θ    (4.8)

Therefore, for a single point in the image plane with coordinates (x0, y0), the lines which go through this point follow the equation

r(θ) = x0 cos θ + y0 sin θ    (4.9)

where r = r(θ) is a function of θ. In fact, Eq. (4.9) is a sinusoidal curve in the (r, θ)-plane, which is unique to the point (x0, y0). If the curves corresponding to two or more points are superimposed, a crossing location corresponds to a line passing through all those points in the image space. Therefore we search for (r, θ) accumulator elements with large values, and each peak found corresponds to a line in the original image.
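The voting scheme can be sketched in a few lines of Python (an illustrative toy, not the adaptive Hough transform used in the experiments; the accumulator resolution is an assumption):

```python
import math

# Minimal Hough-transform sketch: each edge point votes for every (r, theta)
# cell satisfying Eq. (4.9); the peak of the accumulator reveals the line
# common to the points, even when the points are not contiguous.

def hough_peak(points, theta_steps=180):
    acc = {}
    for (x, y) in points:
        for k in range(theta_steps):
            theta = math.pi * k / theta_steps
            r = round(x * math.cos(theta) + y * math.sin(theta))  # Eq. (4.8)
            acc[(r, k)] = acc.get((r, k), 0) + 1
    # Return the accumulator cell with the most votes.
    return max(acc.items(), key=lambda kv: kv[1])

# Non-contiguous points on the line y = x (r = 0 in polar form):
pts = [(1, 1), (3, 3), (7, 7), (10, 10)]
(r, k), votes = hough_peak(pts)
print(votes)   # 4 : all four points vote for one cell
print(r)       # 0 : the recovered line passes through the origin
```

Note that the gaps between the points do not matter, which is exactly the advantage discussed below for dashed lane markings.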

Advantages and disadvantages

The distinct advantage of the Hough transform is that the pixels on one line need not be contiguous, i.e. separated line points can still be detected as one line. Therefore the Hough transform is well suited for lane detection when the line clusters are discontinuous due to noise, or when the lines are partially occluded in the image. One main disadvantage of the Hough transform is that it may give misleading results when background points happen to be aligned by chance. Another disadvantage is that the detected lines are given by (r, θ) polar coordinates, and each (r, θ) value in fact corresponds to an infinite line (see Fig. 4.4) rather than a finite segment with defined end points. Please note that the Hough transform can also be used to detect other shapes in an image, for example circles; in this case the only difference is that the method uses three parameters: the radius and the x and y coordinates of the centre. However, due to the increased computational load for more complicated curves, it is used only for simple shapes in practice.

4.2.4 Practical constraints and difficulties

For the lane detection, we know many empirical facts about the lanes or road edges (e.g. shape, colour and length), and hence this practical knowledge can be applied to reject redundant or fake lines, especially in complicated situations. Some common practical constraints follow from the fact that the lanes and road edges are approximately parallel to the optical axis. For example, we can constrain the line detection to a specific angle range, say 30 – 150 deg. The length of the line segments should be larger than a threshold, for example 20 pixels. Detected lines which are parallel or cross near each other can be regarded as one straight line.
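These constraints reduce to simple geometric tests per candidate segment. A hedged Python sketch using the example thresholds from the text (function name and exact test are illustrative assumptions):

```python
import math

# Reject lane candidates that violate the practical constraints above:
# direction outside 30-150 deg, or segment shorter than 20 pixels.

def keep_segment(x1, y1, x2, y2, angle_range=(30, 150), min_len=20):
    length = math.hypot(x2 - x1, y2 - y1)
    # Direction of the segment, folded into [0, 180) deg.
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
    return angle_range[0] <= angle <= angle_range[1] and length >= min_len

print(keep_segment(0, 0, 10, 30))   # True : steep and ~31.6 px -> plausible lane
print(keep_segment(0, 0, 40, 2))    # False: nearly horizontal -> rejected
print(keep_segment(0, 0, 3, 9))     # False: only ~9.5 px -> too short
```

The merging of nearly parallel, nearby lines into one line would be a further pass over the surviving candidates.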

However, as with other image processing applications, lane detection is subject to various environmental effects, and several common problems can be summarized as:

– Difficulties caused by various illumination conditions and poor quality of lane markings

– Violation of some common assumptions, such as constant road width, parallelism of lane boundaries, or the use of other simplified geometric road models (e.g. parabolic boundaries)

– Difficulties caused by surrounding objects, such as trees on the roadside, occlusions caused by other vehicles on the same road

– Missing information about the actual lanes or special roads (e.g. no lane marks, or unpaved roads).

These difficulties or problems for lane detection will be discussed with our test results, and accordingly possible solutions are also proposed.

4.3 Experiments and analyses

In order to test the performance of our proposed lane detection algorithm under different real time navigation situations, we have carried out several practical tests in different places with different scenarios, including highways, city streets and pathways without lanes. The test results and detailed analyses are given in the following parts.

In our tests, the standard Matlab library functions for image spatial filtering and the Hough transform are used to perform the line detection.

4.3.1 Highway tests

We use the video data from Horemuz (2008) to test the performance of our lane detection algorithm. The video was taken on road number 68, shown in Fig. 4.5, on a stretch of approximately 10 km between the two towns Horndal and Hästbo (about 150 km west of Stockholm). The test was driven at speeds up to 90 km/h, and this part of the road has been surveyed by the mobile mapping system Visimind (Horemuz, 2008), which is based on an integrated system of GPS, INS and digital camera sensors.

Fig. 4.5 Test driving on Road 68 (from Horemuz, 2008)

As introduced in Section 4.2, the lane detection algorithm in the experiments contains the following steps: choose the region of interest (i.e. subtract the scene) -> smoothing -> Sobel edge detection -> adaptive Hough transform -> pick the peaks -> sort the candidate lines by their positions -> delete the redundant lane-marking candidates -> fit the lines.

First we take a frame of the video as the target image, shown in Fig. 4.6; the results of each step are given in Figs. 4.7 – 4.11. In order to show the performance in complicated environments, a specific frame with both a passing car in the other lane and a turn is chosen; the test results are shown in Figs. 4.12 – 4.15.


Fig. 4.6 A general highway grayscale image

Fig. 4.7 ROI of Fig. 4.6

Fig. 4.8 Edge detection


Fig. 4.9 Hough transform image (peaks shown in the white squares)

Fig. 4.10 Detected lines

Fig. 4.11 Delete redundant lines


Fig. 4.12 The frame with other vehicle and turning

Fig. 4.13 ROI of Fig. 4.12

Fig. 4.14 Edge detection within ROI


Fig. 4.15 Detected lines

From the test results, we can see that the lane detection algorithm is able to handle different situations on normal highways. Of course, highway lanes and roads are relatively simple, clear and straight, so there are few difficulties for real-time image processing.

4.3.2 Pathways around KTH

We have also done some tests around KTH, basically to test the situations of pathways, a Y-junction and city streets.

Fig. 4.16 Drottning Kristinas väg at the L-building of KTH


Fig. 4.17 Edge detection within ROI

Fig. 4.18 Detected lines

As we can see from Figs. 4.16 – 4.18, the road edges serve as the detected lines for a pathway without lanes, and the proposed algorithm detects the correct road edges (Fig. 4.18) even with other background objects, such as cars, bikes, trees and buildings, on both sides of the road.

Then some tests on a Y-junction near KTH are carried out. First we take pictures from the view in front of the junction, then at the middle of the junction and finally after the turning, as shown in Figs. 4.19, 4.23 and 4.26, respectively.


Fig. 4.19 Y junction on Körsbärsvägen besides KTH

Fig. 4.20 Edge detection within ROI

Fig. 4.21 Detected lines


Fig. 4.22 Delete redundant lines

Fig. 4.23 At the center of Y junction

Fig. 4.24 Edge detection within ROI


Fig. 4.25 Only one road edge detected (one curve approximated with 2 lines)

As we can see, only the line clusters of one road edge are detected in Fig. 4.25. The reason is that the line clusters of the right road edge in the right corner resemble a quarter of a circle, and parts of it are covered by shadow in Fig. 4.24. As a result, they are not accepted as a straight line, due to the large bending angle.

However, it should be noted that the result in Fig. 4.25 lasts only for the short time while the car is passing the middle section of the Y-junction. After the turn, the algorithm again successfully detects the road edges, as shown in Figs. 4.26 and 4.27 below.

Fig. 4.26 After the turning


Fig. 4.27 Road edge detection (within ROI)

4.4 Existing problems and discussion

As shown in Section 4.3, highway lane detection is simpler than for pathways and city streets. The road edges can be detected when the pathways have no lanes, and even the Y-junction can be handled, except for a short time during the turn.

In the following, we discuss some special situations that cause trouble for real-time image processing. First we present some typical failure scenarios; the reasons and possible solutions are also discussed.

Fig. 4.28 Large shadows covering the road edge

Large shadow areas cause big problems for many kinds of image processing. If a shadow covers the lanes or road edges, as shown in Fig. 4.28, it causes failures in the line detection. Although there is much research on shadow detection and removal, shadows remain a basic problem for image processing in many fields.

Normally, if we know the driving position and the time, the position of the sun relative to the vehicle can be estimated. This would help to detect the shadow areas from their direction and length in real time. Research on shadow removal is beyond the scope of this thesis and will not be discussed further in this chapter.

Fig. 4.29 Scene with lines parallel with the lanes (Valhallavägen nearby KTH)

If there are many background lines parallel to the road and the lanes, it is difficult for the algorithm to distinguish all these lines, especially when they lie close to each other. As shown in Fig. 4.29, buildings with lines parallel to the lanes stand on the left side of the street, and at the same time there are several lanes on the street. All these conditions cause difficulties for the image processing. An adaptive choice of ROI and other physical constraints can help in such situations.

Please note that the detected lines are also used for camera motion estimation, so even the building lines can be useful. However, for complicated scenes like Fig. 4.29, additional practical constraints should be applied to distinguish the building lines from the lanes, e.g. the lanes lie in the horizontal plane (i.e. the bottom area of the image), while the parallel building lines lie in vertical planes on either side of the road.


Fig. 4.30 Complex scenes beside the city street (Valhallavägen nearby KTH)

In city streets, there are always complicated scenes along the driving trajectory, which cause many practical difficulties for real-time image processing. As shown in Fig. 4.30, the many cars and trees beside the street produce high-density edge points, which are difficult for the algorithm to handle adaptively or intelligently, and this high number of edge points can cause many fake lines in the lane detection.

To solve this problem, object recognition based on morphology can be used to remove the influence of basic background objects, such as other cars on the road. Morphology is a broad set of image processing operations based on shapes. A morphological operation applies a structuring element to an input image and creates an output image of the same size.

The most basic morphological operations are dilation and erosion. In a morphological operation, each pixel of the output image is based on a comparison of the corresponding pixel in the input image with its neighbourhood. By choosing the size and shape of the neighbourhood, morphological operations can be constructed that are sensitive to specific shapes in the input image. For more information on morphological operations, we refer to Gonzalez et al. (2002) and (2003).
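The two basic operations can be sketched for binary images as follows (pure Python with a 3 × 3 square structuring element; an illustration of the principle, not the Matlab toolbox implementation):

```python
# Minimal binary dilation and erosion with a 3x3 square structuring element.

def _neighbourhood(img, x, y):
    """Yield the in-bounds 3x3 neighbourhood values around (x, y)."""
    rows, cols = len(img), len(img[0])
    for s in (-1, 0, 1):
        for t in (-1, 0, 1):
            if 0 <= x + s < rows and 0 <= y + t < cols:
                yield img[x + s][y + t]

def dilate(img):
    """Pixel is set if ANY neighbour is set: objects grow."""
    return [[int(any(_neighbourhood(img, x, y)))
             for y in range(len(img[0]))] for x in range(len(img))]

def erode(img):
    """Pixel is set if ALL neighbours are set: objects shrink, specks vanish."""
    return [[int(all(_neighbourhood(img, x, y)))
             for y in range(len(img[0]))] for x in range(len(img))]

# An isolated noise pixel is removed by erosion; dilation instead grows it.
noise = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(sum(map(sum, erode(noise))))    # 0 : the speck is gone
print(sum(map(sum, dilate(noise))))   # 9 : dilation fills the whole 3x3 image
```

An erosion followed by a dilation ("opening") removes small specks while approximately restoring the surviving objects, which is the typical way to suppress the high-density edge clutter described above.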


4.5 Lane detection applications for map matching

In Section 4.3, the proposed lane detection algorithm was shown to be effective for highways and pathways in the city, and even for some complicated junctions. As discussed in Sections 4.2.4 and 4.3, lane detection, like other image processing applications, is subject to the environment, and some difficulties are common in real time.

Illumination and shadows are basic problems for all image processing. For lane detection, varying road textures and the complex backgrounds along city streets are big challenges for real-time navigation.

The complex backgrounds in city streets could be eliminated by Inverse Perspective Mapping (Aly, 2008). The problems caused by other cars on the road can be removed by shape recognition or morphological operations. However, it should be noted that there is no standard way to remove all problems in image processing; different problems are usually related to different scenarios, and specific methods should be adopted accordingly.

In this chapter, a lane detection algorithm was proposed and tested under different situations. The test results show that the lines in most of the real-time images are successfully detected. The reasons for failure under some special situations were analysed and possible solutions were also proposed.

However, due to our limited research time on this topic, we do not aim to deliver a fully functional lane detection algorithm for real applications. Image processing is a broad and fast-developing subject, and many research results on lane detection and image processing can be adopted to overcome the related problems in the future.

The lines detected in the roads are the basis for the 3D vehicle motion estimation in the next chapter, and they can also guarantee the correctness of the link choice in MM. In this way, the performance of MM can be improved and guaranteed in real-time navigation and other applications.

4.6 Summary

In this chapter, the following content and conclusions are included:

- Map matching and its link-choosing problem are introduced.
- Basic principles of image processing, with focus on the lane detection application.
- A line detection method based on the Hough transform.
- Lane detection procedures and practical considerations.
- Tests on highways, city streets and pathways in different scenes. The test results show that lanes or road edges are successfully detected in most cases.
- The failure situations of lane detection are analysed in detail and possible solutions are proposed accordingly.
- A lane detection application for map matching is proposed.


Chapter 5

Camera motion estimation from images

5.1 Introduction

Motion estimation from image sequences is a basic topic and sub-domain of computer vision, the discipline of acquiring, processing, analysing and understanding images in order to produce numerical or symbolic information. Computer vision has many application fields, including positioning and navigation, e.g. for the motion of an autonomous vehicle or mobile robot.

In this chapter, we apply some models and theories from computer vision for navigation purposes, i.e. the estimation of rotation and translation from image sequences in real time. The term "motion" in the following refers to the rotation and translation between the camera positions at two consecutive epochs along a trajectory. In navigation applications, possible outliers in the data set (e.g. point mismatches) are a common problem. To address this, balanced adjustment was proposed and analysed in detail by Jurisch and Kampmann (1998) and Kampmann (2009). Here, results from the classic robust estimator RANSAC will be compared with the balanced adjustment. In addition, the ultimate purpose of this chapter, i.e. the integration model that supports the IMU orientation with the camera motion, will be presented with some applications.
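For concreteness, a minimal RANSAC line-fitting sketch in Python (illustrative only: the iteration count, tolerance and two-point line model are assumptions, and this chapter evaluates RANSAC against balanced adjustment rather than prescribing this implementation):

```python
import random

# Minimal RANSAC: repeatedly fit a line to a random minimal sample (2 points)
# and keep the model that agrees with the most observations. Gross outliers
# (e.g. mismatched image points) never dominate the consensus set.

def ransac_line(points, n_iter=200, tol=0.5, seed=1):
    rng = random.Random(seed)          # fixed seed -> reproducible sketch
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                   # vertical sample: skip for simplicity
            continue
        m = (y2 - y1) / (x2 - x1)      # minimal model: slope and intercept
        b = y1 - m * x1
        inliers = [(x, y) for (x, y) in points if abs(y - (m * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Nine points on y = 2x plus one gross outlier (a point mismatch):
pts = [(x, 2 * x) for x in range(9)] + [(4, 30)]
inliers = ransac_line(pts)
print(len(inliers))          # 9 : the consensus set excludes the outlier
print((4, 30) in inliers)    # False
```

A final least-squares fit over the consensus set would then give the refined parameters; balanced adjustment instead handles the outliers within the adjustment itself.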

Before we actually present the principles of camera motion estimation, the estimation theory used for the parameter estimate is introduced.

5.2 Optimal and robust estimation

An estimator is a mathematical procedure that takes the measured data as input and produces an estimate of the parameters with a corresponding accuracy. It is preferable to derive an estimator that exhibits optimality, which usually means achieving minimum average error over some class of estimators, for example a minimum variance unbiased estimator. It is well known that different norms can be defined in n-dimensional space, e.g. the L1-, L2- and L∞-norms. Different norms can be used to define the target functions of different estimators, such as the well-known least-squares estimator based on the L2-norm.

There are different kinds of estimators based on different estimation criteria. Among these, two important concepts (optimal and robust estimation) should be introduced. Optimal estimation means that the parameter estimate minimizes a cost function; for example, the least-squares estimator minimizes the weighted sum of squared observation residuals. Robust estimation means that the parameter estimate is insensitive to small


deviations or even outliers from the idealized assumptions which have been used to optimize the algorithm.

A general linear adjustment problem formulated as a Gauss-Markov model is given by

A x = L − ε    (5.1)

with the variance-covariance matrix

E(ε ε^T) = σ₀² P⁻¹    (5.2)

where A is the n×m design matrix, x is the m-dimensional vector of parameters to be estimated, L is the vector of n observations, ε is the unknown n-dimensional vector of residuals, σ₀² is the a priori variance factor and P is the n×n weight matrix.

The problem in Eq. (5.1) is to determine x, and it requires at least m observations. If n > m, the system is over-determined, and it can be solved by least-squares methods, i.e. by minimizing the function ε^T P ε. The normal equations of system (5.1) are given by

A^T P A x̂ = A^T P L    (5.3)

the least-squares estimate is obtained as

x̂ = (A^T P A)⁻¹ A^T P L    (5.4)

and the estimated residual vector ε̂, corresponding to x̂, is given by

ε̂ = L − A x̂ = L − A (A^T P A)⁻¹ A^T P L    (5.5)
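The least-squares estimate (5.4) and residuals (5.5) translate directly into a few lines of NumPy. A minimal sketch with synthetic data (the design matrix, weights and noise level below are illustrative assumptions, not values from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic over-determined problem: n = 20 observations, m = 3 parameters
n, m = 20, 3
A = rng.normal(size=(n, m))                       # design matrix
x_true = np.array([1.0, -2.0, 0.5])
L = A @ x_true + rng.normal(scale=0.01, size=n)   # noisy observations
P = np.eye(n)                                     # weight matrix (unit weights here)

# Eq. (5.4): x_hat = (A^T P A)^{-1} A^T P L
x_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)

# Eq. (5.5): estimated residuals
eps_hat = L - A @ x_hat
```

Note that the normal equations imply A^T P ε̂ = 0, i.e. the weighted residuals are orthogonal to the columns of the design matrix, which is a useful numerical sanity check.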

5.2.1 Different norms and linear programming

Given an n-dimensional vector x, a general vector norm ‖x‖ is a nonnegative number defined such that (Horn and Johnson 1990, Ch. 5):

1. ‖x‖ > 0 when x ≠ 0, and ‖x‖ = 0 iff x = 0.

2. ‖k x‖ = |k| ‖x‖ for any scalar k.

3. ‖x + y‖ ≤ ‖x‖ + ‖y‖

Let p ≥ 1 be a real number and n be an integer; then the Lp-norm is defined by

‖x‖_p = ( Σᵢ₌₁ⁿ |xᵢ|^p )^(1/p)    (5.6)

For p = 1 we get the L1-norm, for p = 2 we get the L2-norm, and as p approaches ∞, the Lp-norm approaches the infinity norm. A more detailed discussion of different norms can be found in Horn and Johnson (1990, Ch. 5). The L1-estimator minimizes the sum of absolute values of the residuals, while the L2-norm minimizes the sum of squares of the residuals, i.e.

L1-norm (absolute sum): ‖x‖₁ = |x₁| + |x₂| + … + |xₙ|

and

L2-norm (least squares): ‖x‖₂ = (x₁² + x₂² + … + xₙ²)^(1/2)
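For a concrete vector, the three norms mentioned above are easily compared (a small NumPy illustration; the example vector is ours):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

l1 = np.sum(np.abs(x))          # L1-norm: |3| + |-4| + |0| = 7
l2 = np.sqrt(np.sum(x**2))      # L2-norm: sqrt(9 + 16) = 5
linf = np.max(np.abs(x))        # L-infinity norm: max(3, 4, 0) = 4

# np.linalg.norm implements the same Lp-norms
assert l1 == np.linalg.norm(x, 1)
assert l2 == np.linalg.norm(x, 2)
assert linf == np.linalg.norm(x, np.inf)
```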

In a normed vector space of dimension two or greater, for a set of n data points x₁, …, xₙ, the spatial median μ is defined as the vector (also called the "L1 median") that minimizes the sum Σᵢ₌₁ⁿ ‖xᵢ − μ‖ of Euclidean distances from the data points (Chakraborty et al., 1998). An important property of the L1 median is that it is unique if the data points do not lie on a straight line in space (Kemperman 1987; Hettmansperger and McKean 1998). Moreover, the L1 median estimate is not unduly influenced by outliers if the total number of outliers is less than 50%, and therefore the L1-norm estimator has been widely used as a robust parameter estimation method with outlier detection.
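The spatial median generally has no closed form, but it can be approximated, e.g., by the classical Weiszfeld iteration (the algorithm choice and the data below are ours, shown only as an illustration). The example also demonstrates the robustness: a single gross outlier drags the mean far away, while the spatial median stays near the bulk of the data:

```python
import numpy as np

def spatial_median(pts, iters=200):
    """Approximate the L1 (spatial) median by the Weiszfeld iteration."""
    mu = pts.mean(axis=0)                      # start from the L2 mean
    for _ in range(iters):
        d = np.linalg.norm(pts - mu, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)      # guard against division by zero
        w = 1.0 / d
        mu = (w[:, None] * pts).sum(axis=0) / w.sum()
    return mu

# Four well-spread points plus one gross outlier: the spatial median stays
# close to the unit square, while the mean is dragged toward (100, 100).
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [100., 100.]])
med = spatial_median(pts)
mean = pts.mean(axis=0)        # about (20.4, 20.4), far from the cluster
```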

In our application (i.e. camera motion estimation), we adopt the L1-norm estimation to get rid of possible gross errors in the raw data. The L1-estimation problem, also known as the Least Absolute Values (LAV) problem, is given as

‖A x − L‖₁ = minimum(x)    (5.7)

where A ∈ ℝ^(m×n) is the known coefficient matrix and L ∈ ℝ^m is a known coefficient (observation) vector.

The L1-norm estimation problem in Eq. (5.7) can be solved by linear programming (LP). Such problems are optimization problems, where the objective function and constraints are all linear. The general form of LP is given by

c^T x = minimum(x)    (5.8)

subject to

A x ≤ b    (5.9)

G x = h    (5.10)

and

x ≥ 0

where b ∈ ℝ^m, c ∈ ℝⁿ and h ∈ ℝ^p are known vectors, and G ∈ ℝ^(p×n) is a known matrix. The equality constraints (5.10) can equivalently be expressed by the pair of inequality constraints G x ≤ h and −G x ≤ −h.

The left side of Eq. (5.8) is a linear cost function, and the optimization problem consists of finding the parameter vector that minimizes this cost function. In mathematical optimization, the level sets of a linear cost function define a family of parallel hyperplanes (lines in 2D, planes in 3D, etc.). The minimum that we want to find must occur at a corner (vertex) of the feasible set. Theoretically there are 3 possible cases for the LP problem:

• the feasible set is empty;
• the cost function is unbounded on the feasible set;
• the cost function has a minimum on the feasible set.

The first two cases are very uncommon for real problems in economics and engineering, and therefore we always assume that the third case is valid in our applications. Each linear inequality in Eq. (5.9) divides the n-dimensional space into two half spaces: one half space where the inequality is satisfied, and another where it is not. The feasible set is the set of solutions to the whole family of linear inequalities. There are many different approaches to solving the LP problem, such as the classic simplex method, the interior-point method, the ellipsoid method and the cutting-plane method. For detailed information on LP solution algorithms, we refer to Dantzig (1947), Karmarkar (1984) and Cornuejols (2007). There are also many existing software packages for LP, so it is convenient to use LP as a black box for solving various linear optimization problems. Based on the discussion above, Eq. (5.7) can be rewritten as

Σᵢ₌₁ᵐ yᵢ = minimum(x, y)    (5.11)

subject to

−y ≤ A x − b ≤ y    (5.12)

where y = (y₁ y₂ … y_m)^T, with yᵢ ≥ |(A x − b)ᵢ|, is a non-negative intermediate vector.

It can also be rewritten in matrix form as

c̄^T x̄ = minimum(x̄)    (5.13)

subject to


Ā x̄ ≤ b̄    (5.14)

where

x̄ = (x; y),   c̄ = (0; 1),   Ā = ( A  −I; −A  −I ),   b̄ = (b; −b)    (5.15)

Here the semicolons denote vertical stacking, 0 and 1 are vectors of zeros (length n) and ones (length m), respectively, and I is the m×m identity matrix.
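With the stacking (5.15), the LAV problem can indeed be handed to an LP package as a black box. A minimal sketch using SciPy's `linprog` (the solver choice is an assumption for illustration, not the thesis software); fitting a constant to data with one gross error, the L1 solution is the median and is unaffected by the outlier:

```python
import numpy as np
from scipy.optimize import linprog

def lav(A, b):
    """Solve min_x ||Ax - b||_1 via the LP stacking of Eqs. (5.13)-(5.15)."""
    m, n = A.shape
    c_bar = np.concatenate([np.zeros(n), np.ones(m)])        # c~ = (0; 1)
    A_bar = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])    # A~ = (A -I; -A -I)
    b_bar = np.concatenate([b, -b])                          # b~ = (b; -b)
    # x is a free variable; the intermediate vector y is non-negative
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c_bar, A_ub=A_bar, b_ub=b_bar, bounds=bounds, method="highs")
    return res.x[:n]

# Fit a constant to observations containing one gross error:
# the L1 solution is the median of the data, i.e. 2.0.
A = np.ones((5, 1))
b = np.array([1.0, 2.0, 2.0, 3.0, 100.0])
x1 = lav(A, b)
```

For comparison, the least-squares fit of the same data is the mean (21.6), which is pulled far off by the single outlier.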

5.2.2 Robust estimation with balanced adjustment

For the over-determined estimation problem discussed above in Eqs. (5.1)-(5.5), outliers would greatly degrade the estimates, since they are treated in the same way as the other observations and contribute to the final estimate. Accordingly, there are different kinds of robust estimators to address this problem, each suitable for different situations, e.g. data snooping (Baarda, 1967, 1968) for single-outlier situations, RANSAC (Fischler and Bolles, 1981) for data sets contaminated by large amounts of outliers, and various M- (Huber 1981, p. 43) or W-estimators (Goodall 1983, pp. 339-403). RANSAC and the M- and W-estimators belong to the family of robust statistical estimators, which have a theoretical statistical background with idealized assumptions and are quite complicated to understand and analyse. The balanced adjustment, on the other hand, is based on the L1-norm solution, whose principle and algorithm are simple. It works even if there are many gross errors, and it is fast, as we can see in the test section. Therefore the balanced adjustment is adopted as the robust estimator in this chapter.

In our application of camera motion estimation, it is quite likely that some point mismatches will be included in the data set. To deal with such possible point or line mismatches (i.e. gross errors), the observations can be reweighted, or balanced, so that they have equal redundancies, i.e. equal influence on the parameter estimation. It should be mentioned that robust estimation based on balanced adjustment is used in many industrial applications, but has not yet been comprehensively described in the literature.

In the following we briefly present the principles of balanced adjustment without detailed proofs. A detailed discussion of this topic can be found in Jurisch and Kampmann (1998) and Jurisch et al. (2002). To begin with, we choose a special weight matrix (the balancing factors P_G) in Eq. (5.4), and then the over-determined estimation problem can be rewritten as

‖A_b x − L_b‖ = minimum(x)    (5.16)

and the hat matrix

C_b = A_b (A_b^T A_b)⁻¹ A_b^T    (5.17)

has equal diagonal elements, where

A_b = P_G^(1/2) A    (5.18)

and the subscript b represents balanced adjustment.

The balancing factors PG can be formulated as a function of the design matrix A, namely by the properties of the orthogonal projection matrix

(C_b)ᵢᵢ = ( P_G^(1/2) A (A^T P_G A)⁻¹ A^T P_G^(1/2) )ᵢᵢ = m/n    (5.19)

To compute the balancing factors P_G, the target is to drive each diagonal element of C_b to the numerical value m/n by finding the corresponding balancing factor. First, the n×m matrix A can be decomposed with QR-factorization into

A = QR (5.20)

where Q is an orthogonal n×m matrix and R is an m×m upper triangular regular matrix. The iterative solution for the n×n diagonal weight matrix P_G is given by the following algorithm (Jurisch and Kampmann, 1998):

Step 0: Initialize the i-th diagonal elements of P_G:

P_Gi = 1.0,  i = 1, 2, …, n

and choose a scale factor sc, e.g. sc = 100.

Step 1: Calculate the elements of the balanced design matrix A_b in Eq. (5.18) by

A_b,ij = √(P_Gi) A_ij,  j = 1, 2, …, m

and perform the QR decomposition

A_b = QR

Step 2: Calculate

ΔPᵢ = c / (QQ^T)ᵢᵢ  with  (QQ^T)ᵢᵢ = Σⱼ₌₁ᵐ Qᵢⱼ²  and  c = m/n

Step 3: Determine the balancing factors of the corresponding iteration step by

P_Gi(new) = P_Gi(old) · ΔPᵢ

Step 4: Abort the iterations if maxᵢ |ΔPᵢ − 1| < sc·δ, where δ is a small number (its magnitude relates to the achievable computer accuracy); otherwise go to the next step.

Step 5: If some P_Gi = 0.0, a restriction is present: abort the iterations. Otherwise go to the next step.


Step 6: Back to step 1.
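Steps 0-6 are compact in code. The sketch below (variable names are ours) balances a random design matrix and checks that the hat-matrix diagonal of A_b is driven to m/n, as required by Eq. (5.19):

```python
import numpy as np

def balance_weights(A, sc=100.0, delta=1e-12, max_iter=500):
    """Iterate the balancing factors P_G so that the hat matrix of
    A_b = P_G^{1/2} A has all diagonal elements equal to m/n."""
    n, m = A.shape
    P = np.ones(n)                           # Step 0
    c = m / n
    for _ in range(max_iter):
        Ab = np.sqrt(P)[:, None] * A         # Step 1
        Q, _ = np.linalg.qr(Ab)
        h = np.sum(Q**2, axis=1)             # (QQ^T)_ii, the hat-matrix diagonal
        if np.any(h <= 0.0):                 # Step 5: a restriction is present
            raise ValueError("restriction present, balancing not possible")
        dP = c / h                           # Step 2
        P *= dP                              # Step 3
        if np.max(np.abs(dP - 1.0)) < sc * delta:   # Step 4
            break
    return P

rng = np.random.default_rng(1)
A = rng.normal(size=(12, 3))
P_G = balance_weights(A)
Q, _ = np.linalg.qr(np.sqrt(P_G)[:, None] * A)
# all diagonal elements of the hat matrix now equal m/n = 3/12 = 0.25
```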

We must mention that there are both straight and latent restrictions on the application of balanced adjustment (Jurisch et al., 2002). The latent restrictions determine the final degree of freedom (Grafarend and Awange 2012, p. 142). It is beyond the scope of this thesis to go into further details of the latent restrictions; detailed discussions can be found in Grafarend and Awange (2012, p. 142), Kampmann and Renner (2000), Jurisch and Kampmann (2001), and Jurisch et al. (2002).

Generally these restrictions should be checked before applying balanced adjustment. Due to the complexity of these restrictions, a further discussion of this topic is omitted here. During our tests, the straight restrictions are checked in advance, but we have not actually checked the latent restrictions in our applications. The test results show that balanced adjustment works fine for our tests, and therefore our test data do not belong to the special situations involving latent restrictions.

After balancing the observation equations by P_G, we can apply either the L2- (least-squares) or the L1- (median) estimator. As discussed in Section 5.2.1, to reduce the influence of possible gross errors we choose the L1 estimator, and therefore the estimator used in our application is the Balanced Least Absolute Values Estimator (BLAVE).

5.2.3 RANSAC

RANSAC is an abbreviation of "RANdom SAmple Consensus", a robust estimation method first proposed by Fischler and Bolles (1981). Use of the RANSAC method to estimate the epipolar geometry was first reported by Torr and Murray (1993). It is an iterative method to estimate the parameters of a mathematical model from a set of observed data in the presence of outliers. It is a non-deterministic algorithm that leads to a reasonable result only with a certain probability, and this probability increases as more iterations are carried out (Fischler and Bolles, 1981).

A basic assumption is that the data consists of both “inliers” and “outliers”. In addition, the data can be subject to noise. The outliers can come, e.g. from extreme values of the noise or from erroneous measurements or incorrect hypotheses about the interpretation of data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure which can estimate the parameters of a model that optimally explains or fits this data. The algorithm outline is summarized in the following (Fisher, 2002). A much more detailed discussion about RANSAC can be found in Zuliani (2012).

The problem can be stated as:

Given a fitting problem with parameters x , estimate the parameters using observed data.

Assumptions:


a. The parameters can be estimated from N data samples.
b. There are M data points in total.
c. The probability that a randomly selected data point is part of a correct model is p_g.
d. The probability that the algorithm exits without finding a correct fit is p_fail.

Then the algorithm is given by the following steps, and a flow chart is shown in Fig. 5.1:

1. Randomly select N data points.
2. Estimate the parameters x.
3. Calculate the fitting error with the estimated parameters.
4. Find out how many of the M data points fit the model with parameters x within a user-given tolerance. Denote this number by K.
5. If K is large enough, accept the fit and exit with success.
6. Repeat steps 1 to 4.

Fig. 5.1 Flowchart of RANSAC algorithm (from Choi et al. 2009)
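The standard hypothesize-and-test loop above can be sketched for a simple 2D line-fit problem (the data set, noise level and tolerance are illustrative assumptions, not from the thesis tests):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 70 inliers on y = 2x + 1 plus 30 gross outliers (30%)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=100)
y[:30] += rng.uniform(20, 50, 30)

def ransac_line(x, y, n_iter=200, tol=0.5):
    """Standard RANSAC for a 2D line y = a*x + b (N = 2 samples per hypothesis)."""
    best_K, best_model = -1, None
    for _ in range(n_iter):
        i, j = rng.choice(len(x), 2, replace=False)      # step 1: random sample
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])                # step 2: fit hypothesis
        b = y[i] - a * x[i]
        err = np.abs(y - (a * x + b))                    # step 3: fitting error
        K = np.sum(err < tol)                            # step 4: consensus size
        if K > best_K:                                   # step 5: keep best model
            best_K, best_model = K, (a, b)
    return best_model, best_K

(a, b), K = ransac_line(x, y)
```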


As we can see in Fig. 5.1, RANSAC is an iterative algorithm with two main phases: hypothesis generation and hypothesis evaluation. The steps 5.5 and 7.5 in the flowchart are optional local optimizations to improve the accuracy. Other additional steps can also be added to the standard RANSAC algorithm, e.g. partial and adaptive evaluation at steps 3.5 and 4.5 (Choi et al. 2009). Different choices at these steps constitute different variants of RANSAC. For simplicity, only the standard RANSAC algorithm is adopted and compared in the tests.

The required magnitude of K depends on what percentage of the data is assumed to fit the parameters. Assume that w·100% of the data are outliers, i.e. p_g = 1 − w. The probability of drawing N inlier samples in a single iteration is (p_g)^N, and hence the probability that we have not found any all-inlier sample after L iterations is (1 − (p_g)^N)^L. We can find a sufficiently large L that limits this probability of failure to be under a risk level p_fail, e.g. 5%. Therefore the theoretical minimum number of iterations L is given by (Fisher, 2002)

L = log(p_fail) / log(1 − (p_g)^N)    (5.21)
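Eq. (5.21) is easily evaluated; for the five-point case used below (N = 5) with 50% outliers and a 5% risk level it gives 95 iterations:

```python
import math

def min_iterations(p_g, N, p_fail=0.05):
    """Theoretical minimum number of RANSAC iterations, Eq. (5.21)."""
    return math.ceil(math.log(p_fail) / math.log(1.0 - p_g**N))

# 5-point camera motion estimation (N = 5) with 50% outliers (p_g = 0.5):
L = min_iterations(0.5, 5)   # -> 95 iterations at the 5% risk level
```

As expected, the count drops quickly as the inlier fraction p_g grows.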

In our application of camera motion estimation, at least 5 point matches are needed, i.e. N = 5. The relation between the outlier proportion and the minimum number of iterations is shown in Fig. 5.2.

It is important to notice that the minimum numbers of iterations given by Fig. 5.2 only guarantee that, at the 95% confidence level, we obtain a reasonable parameter estimate from at least one observation combination without outliers. The main advantage of this minimum iteration number is that it does not depend on the total size of the data set. However, the parameter estimation with the minimum number of iterations is obviously not optimal, since it uses only one inlier combination (not an over-determined parameter estimation). If we want to pick out all the outliers and obtain the optimal estimate from all inliers, many more iterations are needed than the theoretical minimum.


Fig. 5.2 The relation between outliers’ proportion and the minimum number of iterations

Advantages and disadvantages

The most important advantage of RANSAC is its ability to produce a robust estimate of the model parameters with a high degree of accuracy even when a significant number of outliers is present in the data set. The percentage of outliers that can be handled by RANSAC can even exceed 50% of the entire data set (Zuliani, 2012). Hence RANSAC, or a member of its variant algorithm family (see Fig. 5.1), is a classic robust estimator, widely used in many different application fields.

A main disadvantage of RANSAC is the computing time: there is no upper bound on the time it takes to compute the parameters. If the number of iterations is limited, the solution may not be optimal. In this way RANSAC offers a trade-off: by carrying out more iterations, the probability of obtaining a reasonable model is increased. Another basic disadvantage of RANSAC is that a problem-specific threshold for the hypothesis test is needed; this threshold is of vital importance in the hypothesis evaluation phase. These disadvantages are analysed with detailed test results in Section 5.6.4.

Overall algorithm complexity

Summing up all the hypothesis generation and evaluation steps, the overall complexity of the RANSAC algorithm is (Zuliani, 2012)


Complexity = O( T_iter ( C_estimate(k_min) + M·C_fitting ) )    (5.22)

where the notation O denotes an upper bound on the algorithm run time, T_iter is the number of iterations, C_estimate(k_min) is the operation cost of the hypothesis generation (k_min is the cardinality, i.e. the number of data points, of a minimal sample set), and C_fitting is the operation cost of computing the fitting error of one single data element, so that the total fitting cost per iteration is M·C_fitting.

5.3 Camera motion estimation from image pairs

A calibrated camera can be described by the pinhole camera model shown in Fig. 5.3: a simple camera model consisting of a light-proof box with a small aperture on one side. The scene light passes through the aperture and projects an inverted image on the opposite side of the box.

The pinhole camera model does not account for the geometric distortions or the blurring of unfocused objects caused by real lenses and finite-sized apertures, and it should therefore be regarded as a first-order approximation of the mapping from a 3D scene to a 2D image.

Fig. 5.3 Principle of a pinhole camera (from Wikipedia, Pinhole camera)

5.3.1 Geometry of a pinhole camera

The pinhole camera image projection process can be shown as in Fig. 5.4.


Fig. 5.4 The geometry of a pinhole camera (positive image)

The camera has a projection centre O, and its principal axis is parallel to the Z axis and perpendicular to the image plane. The image plane is located at distance f (the focal length) from O. An arbitrary 3D point M = (x, y, z) is projected onto the camera's image plane with coordinates m = (u, v). We can determine the coordinates of the image point m by using the relations between similar triangles, as shown in Fig. 5.5:

f/z = u/x = v/y    (5.23)

which gives us

u = f x / z    (5.24)

v = f y / z    (5.25)
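Eqs. (5.24)-(5.25) amount to a one-line projection function (a small sketch; the numerical values are illustrative):

```python
import numpy as np

def project(M, f):
    """Pinhole projection of a 3D point M = (x, y, z), Eqs. (5.24)-(5.25)."""
    x, y, z = M
    return np.array([f * x / z, f * y / z])

# A point at twice the focal distance projects at half its lateral offsets:
m = project(np.array([0.4, -0.2, 2.0]), f=1.0)   # -> [0.2, -0.1]
```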


Fig. 5.5 The geometry of a pinhole camera as seen from the X axis

For simplicity we use vector forms to represent the 2D and 3D points in the following parts. Given an arbitrary point with image coordinates m = (u, v)^T, its homogeneous coordinates (denoted by "~") are given by m̃ = (u, v, 1)^T. Similarly, for a point M = (x, y, z)^T in 3D space, the homogeneous coordinates are given by M̃ = (x, y, z, 1)^T. The general relation between each point M in space and its corresponding point m in the image plane is thus described by the perspective transformation

k m̃ = P M̃    (5.26)

where k is an arbitrary scale factor (below, k is set to 1 for simplicity) and P is a 3×4 matrix, known as the perspective projection matrix, which is given as

P = [ R_w^c | t_w^c ]    (5.27)

where R_w^c is a 3×3 rotation matrix and t_w^c is a 3×1 translation vector from the world (w) coordinate frame (e.g. from the e-frame WGS 84) to the camera frame (c-frame).

It is clear that if we can determine the rotation matrix R_w^c and the translation vector t_w^c for every image in a sequence, the camera motion problem is solved. The objective of this chapter (i.e. camera motion estimation) is achieved via determination of the projection matrix P from image pairs. In other words, the camera motion estimation can be described as follows: given a 3D point x in the coordinate frame associated with the first camera, it corresponds to the point Rx + t in the coordinate frame associated with the second camera.


5.3.2 Basic formulas with projective coordinates

Before we consider our camera motion estimation method, the basic formulas for points and lines with projective coordinates will be introduced. To represent a line l in the projective plane, we begin with its standard Euclidean formula:

a u + b v + c = 0    (5.28)

where the line is represented by l = (a, b, c)^T and m̃ = (u, v, 1)^T is a point on this line. Hence, Eq. (5.28) can also be rewritten as

m̃ · l = l^T m̃ = m̃^T l = 0    (5.29)

We can also obtain another straightforward relation: for any point m on a line l, the following relation holds:

n^T m̃ = 0    (5.30)

where n is the normal vector of the plane containing the line l and the optical centre O (see Fig. 5.6). As we can see in Eqs. (5.29) and (5.30), points and lines play a symmetric role, i.e. there is no formal difference between points and lines in the projective plane. This is known as the principle of duality (Faugeras 1993, p. 15).

By elementary algebra, we know that two arbitrary crossing lines l₁ = (a₁, b₁, c₁)^T and l₂ = (a₂, b₂, c₂)^T intersect at the point m̃ = (b₁c₂ − b₂c₁, a₂c₁ − a₁c₂, a₁b₂ − a₂b₁)^T. This formula can be compactly represented as the cross product

m̃ = l₁ × l₂    (5.31)

Similarly, given two arbitrary points ã and b̃, the equation of the line passing through them is given by

l = ã × b̃    (5.32)

Eqs. (5.31) and (5.32) are two simple mathematical tools with projective coordinates, and they are also representations of the principle of duality.
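Eqs. (5.31) and (5.32) map directly onto the vector cross product (a small NumPy illustration; the example lines and points are ours):

```python
import numpy as np

# Line through two points (Eq. 5.32): homogeneous points a~ = (0,0,1), b~ = (1,1,1)
a = np.array([0.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 1.0])
l = np.cross(a, b)                # the line u - v = 0, up to scale

# Intersection of two lines (Eq. 5.31): the lines u = 1 and v = 2
l1 = np.array([1.0, 0.0, -1.0])   # 1*u + 0*v - 1 = 0
l2 = np.array([0.0, 1.0, -2.0])   # 0*u + 1*v - 2 = 0
m = np.cross(l1, l2)
m = m / m[2]                      # normalize homogeneous coordinates -> (1, 2, 1)
```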

5.3.3 Geometry of point/line motion in image pairs

The camera motion estimation problem can be illustrated by the geometry of point/line motion in two perspective images, shown in Fig. 5.6. Two images of the same line segment are taken by two cameras with different positions and orientations. The objective of camera motion estimation is then the determination of the transformation between the two images, i.e. (R, t) in Fig. 5.6. From the figure, it is straightforward that the normal vector n can be determined by


n = M × u    (5.33)

where u is the direction vector of an arbitrary line SE and M is a point on it. The transformation from the first image to the second image is described by a rotation R and a translation t. In the second image coordinate frame we have, similarly to Eq. (5.33),

n′ = (RM + t) × (Ru) = R(M × u) + t × (Ru) = Rn + t × (Ru)    (5.34)

where the prime notation (e.g. n′) denotes the coordinate frame associated with the second camera; this notation will be kept below. We can then obtain the following relation by multiplying each term by t^T, which eliminates the last term of Eq. (5.34):

t^T (n′ − Rn) = 0    (5.35)

Fig. 5.6 The geometry of line motion (from Zhang, 1995)

Eq. (5.35) cannot be used directly, because it involves a single unknown point M on the line, while we only have the projective representations of the line (i.e. n and n′). This reflects the well-known fact that the motion between two cameras cannot be determined from two views of straight lines, i.e. two sets of lines do not constrain the motion of a camera, since n and n′ in Eq. (5.35) can be multiplied by any arbitrary nonzero scalar. This is also why we can only determine the direction of translation (i.e. a normalized vector), while its magnitude is unknown. This condition will be added to our model later as the unit constraint.


5.4 Statement of the problem and the proposed solution

Now the camera motion estimation problem can be formulated as:

Given two sets of point and line matches {(Mᵢ, M′ᵢ), (lⱼ, l′ⱼ) | i = 1, …, n; j = 1, …, m}, which are in correspondence in two images,

estimate the transformation parameters (R, t), such that the first camera is transformed to the position of the second camera by a rotation matrix R and a translation vector t.

5.4.1 Point matches algorithm

As we can see from Fig. 5.6, each line segment (l or l′) defines a space triangle through the optical centre and its two end points. Two such triangles intersect if the corresponding line segments overlap in each image. The epipolar geometry shown in Fig. 5.7 is the geometry of stereo vision. When two cameras (or one camera at two different epochs) view the same 3D scene from distinct positions, several geometric relations between the 3D points and their 2D image projections lead to practical constraints, known in computer vision as the epipolar constraint (Hartley and Zisserman, 2004, pp. 239-241).

If we set the world coordinate frame to coincide with the first camera frame, the space point corresponding to the image point m can be represented as

M = λ m̃,  λ ∈ (0, ∞)    (5.36)

In the coordinate frame of the second image, the same point M is given by

M′ = RM + t = λ R m̃ + t,  λ ∈ (0, ∞)    (5.37)


Fig. 5.7 Epipolar geometry between two views

The line l′_m is the projection of the semi-line CM into the second image, and it is given by

l′_m = ẽ′ × m̃′_∞ = t × (R m̃)    (5.38)

where ẽ′ and m̃′_∞ correspond to the points with λ = 0 and λ = ∞, respectively, in Eq. (5.37). The point ẽ′ is called the epipole in the second image, and the line l′_m is known as the epipolar line of m.

It is clear that all epipolar lines in the second image pass through the epipole ẽ′, and every corresponding point m̃′ must lie on the epipolar line of m. This constraint is well known as the epipolar constraint. For a more detailed discussion of this topic, we refer to Chapter 9 of Hartley and Zisserman (2004).

Denoting E = [t]_× R, Eq. (5.38) can be rewritten as

l′_m = E m̃    (5.39)

where [t]_× denotes the skew-symmetric matrix of the vector t (see Appendix C for the properties of skew-symmetric matrices), and the matrix E is known as the essential matrix (Hartley and Zisserman, 2004, pp. 257-259).

In the same way, for a given point m in the second image, we can obtain the corresponding epipolar line in the first image by the relation


l_m′ = (R^T t) × (R^T m̃′) = R^T (t × m̃′) = R^T [t]_× m̃′ = −E^T m̃′    (5.40)

Since homogeneous line coordinates are defined only up to a nonzero scale factor, the sign is irrelevant, and we may simply write l_m′ = E^T m̃′.

m l R t R m R t m R t m R t m E m (5.40)

From this equation we can see that the transpose of the essential matrix defines the epipolar lines in the first image.

The majority of applications of the epipolar constraint are based on point matches, and it can be formulated by applying Eq. (5.29):

m̃′ · l′_m = m̃′^T E m̃ = 0    (5.41)

where m̃ᵢ = (xᵢ, yᵢ, 1)^T and m̃′ᵢ = (x′ᵢ, y′ᵢ, 1)^T are the homogeneous image coordinates of the i-th matched point in the two images. The coordinates are assumed to be corrected for lens distortions. By inserting the coordinates of the matched points into Eq. (5.41) and rewriting the 9 elements of E as the vector e = (e₁₁ e₂₁ e₃₁ e₁₂ e₂₂ e₃₂ e₁₃ e₂₃ e₃₃)^T, we can rewrite Eq. (5.41) as

Λ e = 0    (5.42)

where

Λ = ( x₁x′₁  x₁y′₁  x₁  y₁x′₁  y₁y′₁  y₁  x′₁  y′₁  1 )
    (  ⋮                                              ⋮ )
    ( xₙx′ₙ  xₙy′ₙ  xₙ  yₙx′ₙ  yₙy′ₙ  yₙ  x′ₙ  y′ₙ  1 )

Preliminary values of e can be obtained by singular value decomposition (SVD) of Λ. These values are preliminary, since they do not necessarily satisfy all the constraints following from the fact that the essential matrix E has only 5 degrees of freedom (3 rotations + 3 translations − the unity constraint), i.e. 4 constraints must be imposed on the SVD solution for the 9 elements of the matrix E. Different papers suggest different constraints on the essential matrix; a comprehensive summary of this topic can be found in Faugeras and Maybank (1990). Our solution for the essential matrix estimation with constraints is proposed in Section 5.4.3.
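The SVD step can be sketched with synthetic noise-free matches: build the coefficient matrix of Eq. (5.42) and take the right singular vector with the smallest singular value (the scene setup, variable names and the Kronecker-product construction are ours; with exact data this recovers the true E up to scale):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

rng = np.random.default_rng(3)

# Ground-truth motion: a small rotation about Z plus a translation
ang = 0.1
R = np.array([[np.cos(ang), -np.sin(ang), 0],
              [np.sin(ang),  np.cos(ang), 0],
              [0, 0, 1]])
t = np.array([1.0, 0.2, 0.1])
E_true = skew(t) @ R

# Synthetic scene points in front of both cameras, projected to
# normalized homogeneous image coordinates m~ and m~'
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))
m1 = X / X[:, 2:3]
X2 = X @ R.T + t
m2 = X2 / X2[:, 2:3]

# Each row of the coefficient matrix is a Kronecker product arranged
# so that Lam @ vec(E) = 0 expresses m~'^T E m~ = 0 (Eq. 5.41)
Lam = np.array([np.kron(p, q) for p, q in zip(m1, m2)])
# e is the right singular vector of Lam with the smallest singular value
_, _, Vt = np.linalg.svd(Lam)
E_est = Vt[-1].reshape(3, 3, order="F")   # undo the column-wise vec ordering
```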

Another critical issue with the existing solutions is their robustness. The point matches between images are found by different correlation techniques, which do not work 100% reliably, i.e. incorrect matches cannot be avoided. Most papers apply the RANSAC method or some kind of M-estimator (Huber 1981, p. 43). In Sections 5.3.2 and 5.3.3 we mainly investigate the motion estimation approach with both point and line matches. The robust estimation by L1-norm balanced adjustment and the RANSAC method were introduced in Section 5.2.


5.4.2 Line matches algorithm

Besides the point matches, line segment matches are another important correspondence in image pairs, and they can also provide information on the camera motion. Each line segment in the images can be treated as a combination of its starting/end points and the line direction vector.

For the situation shown in Fig. 5.8, consider a pair of corresponding line segments $(\mathbf{l}, \mathbf{l}')$, where $\mathbf{l}$ has the starting point $\mathbf{s}$ and end point $\mathbf{e}$ in the first image. The epipolar line of the point $\mathbf{s}$ in the second image is $\mathbf{l}'_s = \mathbf{E}\mathbf{s}$, and similarly the epipolar line of the point $\mathbf{e}$ is $\mathbf{l}'_e = \mathbf{E}\mathbf{e}$. Let us denote the intersection of the line $\mathbf{l}'_s$ with the line $\mathbf{l}'$ by $\bar{\mathbf{s}}' = \mathbf{l}'_s \times \mathbf{l}'$, and the intersection of the line $\mathbf{l}'_e$ with the line $\mathbf{l}'$ by $\bar{\mathbf{e}}' = \mathbf{l}'_e \times \mathbf{l}'$. Provided the epipolar geometry between the two images is correct, $\mathbf{s}$ and $\bar{\mathbf{s}}'$ correspond to a single point in space, and so do $\mathbf{e}$ and $\bar{\mathbf{e}}'$. Therefore the overlap of a corresponding line segment pair $(\mathbf{l}, \mathbf{l}')$ in the two images is equivalent to the overlap of the line segments $\mathbf{s}\mathbf{e}$ and $\bar{\mathbf{s}}'\bar{\mathbf{e}}'$. In real applications, the end points of line segments as well as object shape corners are useful point matches in the image pairs.

Fig. 5.8 Two line segments in correspondence (from Zhang, 1995)

To apply the formulas with projective coordinates (see Sect. 5.3.2), we first compute the normal vectors of the two line matches. The normal vector of the plane containing the line l in the first image is given by

$$\mathbf{n} = \mathbf{s} \times \mathbf{l}, \qquad \mathbf{l} = \mathbf{e} - \mathbf{s} \qquad (5.43)$$

and the corresponding normal vector in the second image is similarly obtained as

$$\mathbf{n}' = \mathbf{s}' \times \mathbf{l}', \qquad \mathbf{l}' = \mathbf{e}' - \mathbf{s}' \qquad (5.44)$$

Let $\mathbf{L}_d$ be a direction vector parallel to the line l, expressed in the coordinate system of the first camera. According to Weng et al. (1992), for the same line in two images the following equation must hold (the derivation can be found in Appendix F):


$$\mathbf{n} \times \mathbf{R}^T\mathbf{n}' + (\mathbf{n}'^T\mathbf{t})\,\mathbf{L}_d = \mathbf{0} \qquad (5.45)$$

This equation is valid only if the variables n, n', t and $\mathbf{L}_d$ are expressed in the same units, i.e. n, n' and $\mathbf{L}_d$ are computed in the coordinate frame of the first camera. If we have another image of the same line, the term with $\mathbf{L}_d$ in Eq. (5.45) can be eliminated, and then the projected image coordinates can be used to compute n.

In our application we use only two consecutive images to determine the change in orientation and position, and the object coordinates are not available. However, we can make use of some lines in special positions: vertical lines (e.g. edges of houses and traffic signs) and lane lines, which are mostly in the same direction as the driving direction. In the case of vertical lines, we can use the direction vector $\mathbf{L}_d = [0\;\; 0\;\; 1]^T$, and in the lane line case the driving direction $\mathbf{L}_d = [t_x\;\; t_y\;\; t_z]^T$ can be used, whose components are taken from the previous determination of the vector t. Since the first part of Eq. (5.45), i.e. $\mathbf{n} \times \mathbf{R}^T\mathbf{n}'$, is a vector parallel to $\mathbf{L}_d$, it follows that

$$\mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\mathbf{n}') = \mathbf{0} \qquad (5.46)$$

This equation will be used later to represent the line matches in our model.
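The construction of the normals (5.43)-(5.44) and the check of Eq. (5.46) can be sketched as follows (a sketch under the convention that second-camera coordinates satisfy $\mathbf{X}' = \mathbf{R}\mathbf{X} + \mathbf{t}$; names are ours, not from the thesis):

```python
import numpy as np

def line_match_residual(s, e, sp, ep, R, Ld):
    """Residual of Eq. (5.46) for one line match.

    s, e   : homogeneous start/end points (x, y, 1) of the line in image 1
    sp, ep : the corresponding points in image 2
    R      : rotation between the two camera frames
    Ld     : known direction of the line in the first camera frame,
             e.g. [0, 0, 1] for a vertical line
    """
    n = np.cross(s, e - s)        # normal of plane through line l, Eq. (5.43)
    npr = np.cross(sp, ep - sp)   # same for the second image, Eq. (5.44)
    # Eq. (5.46): zero (up to noise) for a geometrically consistent match
    return np.cross(Ld, np.cross(n, R.T @ npr))
```

Since all quantities enter through cross products, the residual is independent of the (arbitrary) scale of the homogeneous image points.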

5.4.3 Proposed motion estimation solution with detailed equations

Here we derive the detailed motion estimation equations by taking advantage of the background knowledge of computer vision and parameter estimation above. Given initial estimates of the rotation and translation matrices (R and T0) for two perspective images, the epipolar constraint on point matches, Eq. (5.41), can be linearized as

$$\mathbf{m}'^T(\mathbf{T}_0 + \Delta\mathbf{T})(\mathbf{I} - \Theta)\mathbf{R}\,\mathbf{m} = 0 \qquad (5.47)$$

where the approximation $\mathbf{E} = \mathbf{T}\mathbf{R} \approx (\mathbf{T}_0 + \Delta\mathbf{T})(\mathbf{I} - \Theta)\mathbf{R}$ is applied, T0 is the skew-symmetric matrix of the approximate translation vector $\mathbf{t}_0 = (t_{x0}, t_{y0}, t_{z0})^T$,

$$\Theta = \begin{bmatrix} 0 & -\Delta r_z & \Delta r_y \\ \Delta r_z & 0 & -\Delta r_x \\ -\Delta r_y & \Delta r_x & 0 \end{bmatrix} \quad \text{and} \quad \Delta\mathbf{T} = \begin{bmatrix} 0 & -\Delta t_z & \Delta t_y \\ \Delta t_z & 0 & -\Delta t_x \\ -\Delta t_y & \Delta t_x & 0 \end{bmatrix}$$

where the $\Delta r_i$ and $\Delta t_i$ are improvements to the approximate values.

After some rearrangements, we get

$$\mathbf{m}'^T\mathbf{T}_0\mathbf{R}\mathbf{m} + \mathbf{m}'^T\Delta\mathbf{T}\,\mathbf{R}\mathbf{m} - \mathbf{m}'^T\mathbf{T}_0\Theta\,\mathbf{R}\mathbf{m} - \mathbf{m}'^T\Delta\mathbf{T}\,\Theta\,\mathbf{R}\mathbf{m} = 0 \qquad (5.48)$$


If the approximate values are close to the true values, the improvements are rather small, and we may therefore neglect the second-order term, i.e.

$$\mathbf{m}'^T\Delta\mathbf{T}\,\Theta\,\mathbf{R}\mathbf{m} \approx 0$$

Let us denote d = Rm, and let D be the skew-symmetric matrix of the vector d. Then we obtain

$$\mathbf{m}'^T\mathbf{D}\begin{bmatrix}\Delta t_x \\ \Delta t_y \\ \Delta t_z\end{bmatrix} - \mathbf{m}'^T\mathbf{T}_0\mathbf{D}\begin{bmatrix}\Delta r_x \\ \Delta r_y \\ \Delta r_z\end{bmatrix} = \mathbf{m}'^T\mathbf{T}_0\mathbf{d} \qquad (5.49)$$

The equation for line matches, Eq. (5.46), can also be linearized. Inserting the updated rotation $(\mathbf{I} - \Theta)\mathbf{R}$, and using $\mathbf{R}^T(\mathbf{I} - \Theta)^T = \mathbf{R}^T(\mathbf{I} + \Theta)$, we obtain

$$\mathbf{0} = \mathbf{L}_d \times \left(\mathbf{n} \times \mathbf{R}^T(\mathbf{I} + \Theta)\mathbf{n}'\right) = \mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\mathbf{n}') + \mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\Theta\,\mathbf{n}') = \mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\mathbf{n}') - \mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\mathbf{N}')\begin{bmatrix}\Delta r_x \\ \Delta r_y \\ \Delta r_z\end{bmatrix}$$

where the relation $\Theta\,\mathbf{n}' = -\mathbf{N}'\Delta\mathbf{r}$ is applied.

Therefore we obtain

$$\mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\mathbf{N}')\begin{bmatrix}\Delta r_x \\ \Delta r_y \\ \Delta r_z\end{bmatrix} = \mathbf{L}_d \times (\mathbf{n} \times \mathbf{R}^T\mathbf{n}') \qquad (5.50)$$

where N' is the skew-symmetric matrix of n'.

In addition, the unity constraint (i.e. the translation vector must be a unit vector), as discussed in Section 5.3.3, should be added to the model:

$$(t_{x0}+\Delta t_x)^2 + (t_{y0}+\Delta t_y)^2 + (t_{z0}+\Delta t_z)^2 = 1 \qquad (5.51)$$

i.e.

$$t_{x0}^2 + 2t_{x0}\Delta t_x + \Delta t_x^2 + t_{y0}^2 + 2t_{y0}\Delta t_y + \Delta t_y^2 + t_{z0}^2 + 2t_{z0}\Delta t_z + \Delta t_z^2 = 1$$

Considering the small values of $\Delta\mathbf{t}$, we may approximate

$$\Delta t_x^2 \approx 0, \quad \Delta t_y^2 \approx 0, \quad \Delta t_z^2 \approx 0$$

yielding the linear equation

$$t_{x0}\Delta t_x + t_{y0}\Delta t_y + t_{z0}\Delta t_z = \frac{1}{2}\left(1 - t_{x0}^2 - t_{y0}^2 - t_{z0}^2\right) \qquad (5.52)$$

Now we have the set of linear Eqs. (5.49) and (5.50) with the constraint on the parameters, Eq. (5.52), which can be written in the form of Eqs. (5.13)-(5.14) as

$$\mathbf{A}\mathbf{X} = \mathbf{L} + \boldsymbol{\varepsilon} \qquad (5.53)$$

with

$$\mathbf{A}_x\mathbf{X} = c \qquad (5.54)$$

where

$$\mathbf{A} = \begin{bmatrix} \mathbf{m}_1'^T\mathbf{D}_1 & -\mathbf{m}_1'^T\mathbf{T}_0\mathbf{D}_1 \\ \vdots & \vdots \\ \mathbf{m}_n'^T\mathbf{D}_n & -\mathbf{m}_n'^T\mathbf{T}_0\mathbf{D}_n \\ \mathbf{0} & \mathbf{L}_{d1} \times (\mathbf{n}_1 \times \mathbf{R}^T\mathbf{N}_1') \\ \vdots & \vdots \\ \mathbf{0} & \mathbf{L}_{dk} \times (\mathbf{n}_k \times \mathbf{R}^T\mathbf{N}_k') \end{bmatrix} \qquad (5.55)$$

$$\mathbf{L} = \begin{bmatrix} \mathbf{m}_1'^T\mathbf{T}_0\mathbf{d}_1 \\ \vdots \\ \mathbf{m}_n'^T\mathbf{T}_0\mathbf{d}_n \\ \mathbf{L}_{d1} \times (\mathbf{n}_1 \times \mathbf{R}^T\mathbf{n}_1') \\ \vdots \\ \mathbf{L}_{dk} \times (\mathbf{n}_k \times \mathbf{R}^T\mathbf{n}_k') \end{bmatrix} \qquad (5.56)$$

$$\mathbf{X} = \begin{bmatrix} \Delta t_x & \Delta t_y & \Delta t_z & \Delta r_x & \Delta r_y & \Delta r_z \end{bmatrix}^T \qquad (5.57)$$

$$\mathbf{A}_x = \begin{bmatrix} t_{x0} & t_{y0} & t_{z0} & 0 & 0 & 0 \end{bmatrix} \qquad (5.58)$$

and

$$c = \frac{1}{2}\left(1 - t_{x0}^2 - t_{y0}^2 - t_{z0}^2\right) \qquad (5.59)$$

Finally, the parameter vector X is solved by the BLAVE method, i.e. by minimizing the L1-norm of the residuals of the combined system

$$\min \sum_i |\varepsilon_i| \quad \text{with} \quad \mathbf{A}_b\mathbf{X} = \mathbf{L}_b, \qquad \mathbf{A}_b = \begin{bmatrix}\mathbf{A} \\ \mathbf{A}_x\end{bmatrix}, \quad \mathbf{L}_b = \begin{bmatrix}\mathbf{L} \\ c\end{bmatrix} \qquad (5.60)$$

Obviously, the solution can be obtained iteratively.

After the outliers in the data set have been identified and removed, we can carry out an LS adjustment. Then Eqs. (5.53) and (5.54) correspond to the linear model "adjustment by elements with constraints" (Fan 1997, pp. 114-116) with the solution

$$\hat{\mathbf{X}} = \mathbf{N}^{-1}\left(\mathbf{I} - \mathbf{A}_x^T\mathbf{N}_x^{-1}\mathbf{A}_x\mathbf{N}^{-1}\right)\mathbf{A}^T\mathbf{L} + \mathbf{N}^{-1}\mathbf{A}_x^T\mathbf{N}_x^{-1}c \qquad (5.61)$$

where $\mathbf{N} = \mathbf{A}^T\mathbf{A}$ and $\mathbf{N}_x = \mathbf{A}_x\mathbf{N}^{-1}\mathbf{A}_x^T$, and the variance-covariance matrix of the estimated parameters is given by

$$\mathbf{C}_{\hat{\mathbf{X}}\hat{\mathbf{X}}} = \mathbf{N}^{-1} - \mathbf{N}^{-1}\mathbf{A}_x^T\mathbf{N}_x^{-1}\mathbf{A}_x\mathbf{N}^{-1} \qquad (5.62)$$

Please note that no variance-covariance estimate can be obtained with the L1-norm estimator.
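The constrained LS solution (5.61)-(5.62) can be written directly (a sketch assuming unit weights and a single constraint row, as in Eq. (5.58)):

```python
import numpy as np

def constrained_ls(A, L, Ax, c):
    """'Adjustment by elements with constraints', Eqs. (5.61)-(5.62).

    Solves min ||A X - L||^2 subject to Ax X = c (unit weights assumed).
    """
    N = A.T @ A
    Ninv = np.linalg.inv(N)
    Ax = np.atleast_2d(Ax)
    Nx = Ax @ Ninv @ Ax.T                                 # N_x = Ax N^-1 Ax^T
    Nxinv = np.linalg.inv(Nx)
    I = np.eye(N.shape[0])
    X = Ninv @ (I - Ax.T @ Nxinv @ Ax @ Ninv) @ (A.T @ L) \
        + Ninv @ Ax.T @ Nxinv @ np.atleast_1d(c)          # Eq. (5.61)
    Cxx = Ninv - Ninv @ Ax.T @ Nxinv @ Ax @ Ninv          # Eq. (5.62)
    return X, Cxx
```

This is algebraically equivalent to correcting the unconstrained estimate $\mathbf{N}^{-1}\mathbf{A}^T\mathbf{L}$ so that the constraint is satisfied exactly.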

5.5 Support of relative orientation between camera and IMU

In this section we present the integration model with the camera motion estimates, i.e. we show how to support the IMU orientation with the rotation estimates from image pairs.

Based on the INS navigation equations, we assume that at the starting point the relative orientation between the camera and the IMU (i.e. $\mathbf{R}_b^c$) is known, and that the rotation matrix R between the image pairs has been determined by the proposed camera motion estimation algorithm above. In the same way, the orientation of each image can be obtained from the image sequence in real time during the whole vehicle motion, i.e.

$$\mathbf{R}_c^n(i+1) = \mathbf{R}_c^n(i)\,\mathbf{R}, \qquad i = 1, 2, \ldots$$

The IMU orientation matrix $\mathbf{R}_b^n$ given by the camera observations would then be

$$\mathbf{R}_b^n(i+1) = \mathbf{R}_c^n(i+1)\,\mathbf{R}_b^c = \mathbf{R}_c^n(i)\,\mathbf{R}\,\mathbf{R}_b^c, \qquad i = 1, 2, \ldots \qquad (5.63)$$

From the camera observations we thus get the matrix $\mathbf{R}_b^n$ and the covariance matrix $\mathbf{C}_{RPY}$ of the rotation angles (i.e. roll, pitch, yaw).

Since the misalignment angles are quite small, we obtain the following relation between the camera observations and the IMU orientation observations, based on the approximation (Appendix D, Eq. D.12)

$$\bar{\mathbf{R}}_b^n = (\mathbf{I} - \mathbf{E}_\varepsilon)\,\mathbf{R}_b^n \qquad (5.64)$$

where $\mathbf{R}_b^n$ is obtained by the IMU and $\mathbf{E}_\varepsilon$ is the skew-symmetric matrix of the misalignment vector $\boldsymbol{\varepsilon} = [\varepsilon_x\;\; \varepsilon_y\;\; \varepsilon_z]^T$, i.e.

$$\mathbf{E}_\varepsilon = \begin{bmatrix} 0 & -\varepsilon_z & \varepsilon_y \\ \varepsilon_z & 0 & -\varepsilon_x \\ -\varepsilon_y & \varepsilon_x & 0 \end{bmatrix}$$

For convenience, the state vector of the Kalman filter is rewritten here as

$$\mathbf{x} = \begin{bmatrix} (\mathbf{r}^e)^T & (\mathbf{v}^e)^T & \boldsymbol{\varepsilon}^T \end{bmatrix}^T = \begin{bmatrix} x & y & z & v_X & v_Y & v_Z & \varepsilon_x & \varepsilon_y & \varepsilon_z \end{bmatrix}^T \qquad (5.65)$$

and therefore the observation equation for the camera motion estimation becomes

$$\mathbf{L} = \mathbf{H}_{camera}\,\mathbf{x} + \boldsymbol{\varepsilon} \qquad (5.66)$$

where

$$\mathbf{L} = [\varepsilon_x\;\; \varepsilon_y\;\; \varepsilon_z]^T \qquad (5.67)$$

and

$$\mathbf{H}_{camera} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5.68)$$

The covariance matrix of L is in fact the covariance matrix of the rotation estimates from the images. As a result, the total observation equation of a GPS/INS integrated navigation system with a camera can be obtained from Eq. (5.66) and Eq. (3.66) as

$$\mathbf{L} = \mathbf{H}\mathbf{x} + \boldsymbol{\varepsilon} \qquad (5.69)$$

where

$$\mathbf{H} = \begin{bmatrix} \mathbf{H}_{Loose} \\ \mathbf{H}_{camera} \end{bmatrix}$$

and $\mathbf{H}_{Loose}$ is given in Eq. (3.66).
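The stacking of Eq. (5.69) is straightforward to write down. In the sketch below, `H_loose` is only a placeholder, since the actual loose GPS/INS matrix of Eq. (3.66) is defined elsewhere in the thesis:

```python
import numpy as np

# Camera misalignment observation matrix of Eq. (5.68): the camera
# rotation estimate observes only the last three states (the
# misalignment angles) of the 9-state vector in Eq. (5.65).
H_camera = np.hstack([np.zeros((3, 6)), np.eye(3)])

# Placeholder for the loose GPS/INS observation matrix of Eq. (3.66)
# (assumed here to observe position and velocity directly).
H_loose = np.hstack([np.eye(6), np.zeros((6, 3))])

# Total design matrix of Eq. (5.69).
H_total = np.vstack([H_loose, H_camera])
```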

5.6 Tests with two scenarios

In order to test the performance of the camera motion estimation algorithm based on point/line matches and the BLAVE method against outliers, we designed two trajectory scenarios with different manoeuvres.


a. Acceleration with horizontal lanes and vertical lines. The car drives along a straight line with constant acceleration, shown as the red dashed line. Suppose that the camera is 1 meter above the ground and that the car drives along the Z-axis.

Fig. 5.9 The acceleration scene

b. Turn of 90 degrees with constant angular velocity at a crossing. The car drives along the Y-axis and turns left at the crossing, shown as the red dashed line. Suppose that the circle centre is at the upper left corner and that the radius is 4 meters.

Fig. 5.10 The turning scene



5.6.1 Detailed implementation and comparisons

These two scenes (acceleration and turning) are designed to cover both translation and rotation manoeuvres. In each scene, 10 point matches and 4 line matches (2 vertical lines and 2 horizontal lines) are chosen as the targets for estimating the camera motion parameters. We first generate the continuous image series for the different car positions and orientations. The position and orientation of the camera can be determined at each epoch from the simulated motion parameters. Please note that the motion parameters (i.e. translation and rotation) are estimated between two consecutive images. For both the point and line matches, balanced and non-balanced adjustments are carried out. Then noise and outliers are added to the raw data (i.e. the image coordinates). The standard error of the white noise is 1-2 pixels in the image coordinates. To test the robustness of the different algorithms, 1-4 outliers are manually added to the data.
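The corruption step described above can be sketched as follows (a sketch; the function name and the 30-pixel outlier magnitude are our assumptions, not values from the thesis):

```python
import numpy as np

def corrupt_matches(uv, sigma_px=1.0, n_outliers=2, outlier_mag=30.0, seed=0):
    """Add white noise and gross outliers to simulated image coordinates.

    uv          : (n, 2) array of exact pixel coordinates
    sigma_px    : std. error of the white noise (1-2 pixels in the tests)
    n_outliers  : number of points replaced by gross errors (1-4 here)
    outlier_mag : size of the gross error in pixels (assumed value)
    """
    rng = np.random.default_rng(seed)
    noisy = uv + rng.normal(0.0, sigma_px, uv.shape)
    idx = rng.choice(len(uv), size=n_outliers, replace=False)
    noisy[idx] += rng.choice([-1.0, 1.0], size=(n_outliers, 2)) * outlier_mag
    return noisy, idx
```

Returning the outlier indices makes it possible to verify afterwards whether a robust estimator actually rejected the contaminated matches.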

5.6.2 Point matches vs. Point/line matches

To test the contribution of the line matches, we run the non-balanced estimator twice under different noise levels, once with point matches only and once with both point and line matches. For the first scene, the car accelerates along the Z-axis in the horizontal plane without rotation, and 2 vertical line matches are chosen as observations. The simulated motion parameters during the acceleration scene are

Simulated translation [m]: 0 0 1
Simulated rotation [deg]: 0 0 0

Two consecutive images along the trajectory are used to estimate the parameters, and the estimates with noise standard errors of 1 and 2 pixels are given in Tables 5.1 and 5.2, respectively.

Table 5.1 Estimated parameters with point matches and point/vertical line matches (Acceleration, noise std. error: 1 pixel)

Noise std. error: 1 pixel     | Point matches          | Point/line matches
Estimated translation (m)     | -0.009 -0.007 1.000    | -0.006 -0.002 1.000
Estimated rotation (deg)      | -0.081  0.068 1.307    |  0.031 -0.052 0.202
Elapsed time (s)              |  0.125                 |  0.161


Table 5.2 Estimated parameters with point matches and point/vertical line matches (Acceleration, noise std. error: 2 pixels)

Noise std. error: 2 pixels    | Point matches          | Point/line matches
Estimated translation (m)     | -0.003 -0.005 1.000    | -0.003 -0.004 1.000
Estimated rotation (deg)      | -0.060  0.294 5.216    |  0.021  0.247 0.017
Elapsed time (s)              |  0.128                 |  0.157

When the noise level increases, the accuracy of the estimated parameters degrades, especially for the estimated rotations in Table 5.2. The vertical line observations constrain the rotation angles, especially the yaw and roll angles (in the point-only case the noise causes an estimation deviation of about 5 degrees). When the line observables are added to the model, the computing time increases as a result.

The point matches can be regarded as the basic constraints on the translation and rotation errors. The line matches, on the other hand, are additional strong constraints on the rotation errors, except for the rotation about the direction of these lines. As we can see from Table 5.2, the point matches give a roll angle (rz) estimation error of more than 5 degrees, while the vertical line matches keep this value at a low level all the time. The yaw angle (ry) errors are at the same level for point matches and point/line matches, because vertical lines do not strongly constrain the yaw angle errors; this phenomenon is in conformity with practice.

From this test of the acceleration scene, we can see that the vertical line matches contribute additional constraints and improve the accuracy when there is much noise, especially for the rotation errors in the specific directions.

For the second scene, the car is turning around its vertical axis. Two horizontal line matches along the Z-axis are chosen as the observations. Simulated motion parameters during the turning scene are

Simulated translation [m]: -0.131 0 0.991
Simulated rotation [deg]: 0 -5 0

The detailed estimated parameters with noise standard errors of 1 and 2 pixels are given in Tables 5.3 and 5.4, respectively.


Table 5.3 Estimated parameters with point matches and point/line matches (Turning, noise std. error: 1 pixel)

Noise std. error: 1 pixel     | Point matches          | Point/line matches
Estimated translation (m)     | -0.024  0.161 0.987    | -0.125  0.061 0.990
Estimated rotation (deg)      | -0.484 -4.810 0.387    | -0.114 -5.053 -0.179
Elapsed time (s)              |  0.136                 |  0.140

Table 5.4 Estimated parameters with point matches and point/line matches (Turning, noise std. error: 2 pixels)

Noise std. error: 2 pixels    | Point matches          | Point/line matches
Estimated translation (m)     |  0.066  0.143 0.986    | -0.143  0.098 0.985
Estimated rotation (deg)      | -0.708 -4.709 1.421    | -0.302 -5.102 0.839
Elapsed time (s)              |  0.130                 |  0.158

From Tables 5.3 and 5.4 we can see that the line matches along the Z-axis efficiently constrain the pitch angle (rx) estimate, since the line matches retain much of the direction information even with much noise. The main turning angle is correctly estimated all the time (the estimation errors increase in Table 5.4 because the noise increases to 2 pixels), while the roll angle (rz) estimation error is not constrained by the line matches along the Z-axis, which is also in conformity with practice.

From Tables 5.1-5.4 we can conclude that line matches provide better geometry for the parameter estimation than point matches alone. Line matches in different directions can significantly constrain the errors of the corresponding rotation estimates. When there is much noise, line matches are effective constraints that reduce the estimation errors, especially for the rotation estimates.

In our model, each point and line match has the same weight and affects the final result equally as one observation equation in Eq. (5.55). In practice, each object has its specific shape and may have many corners or edge points in the image. As a result, point mismatches occur quite often. Moreover, point mismatches may also happen because of illumination changes, object texture, noise (e.g. salt-and-pepper noise), scenario environments, etc.

However, in real-time applications, line detection and matching are actually much more reliable operations than point matching between image pairs under different situations with practical noise and illumination changes. Moreover, the detected lines are much easier and more straightforward to match with each other. Therefore line matches can be considered more important. In short, in practice the line matches are more reliable and easier to implement than the point matches.


5.6.3 Non-balanced adjustment vs. Balanced adjustment

The non-balanced adjustment here refers to the normal L1-norm adjustment without the balancing processing introduced in Sect. 5.2.3. Since the main purpose of our motion estimation is to support the IMU orientation in real-time applications, the following tests focus on the rotation estimation results in the turning scene.

For the same trajectory as discussed in Section 5.6.2, we manually add 1 to 4 outliers. The estimated parameters with both non-balanced and balanced adjustments are given in Tables 5.5 and 5.6. The turning angle (i.e. yaw angle) estimation errors with different numbers of outliers are shown in Fig. 5.11.

Simulated translation [m]: -0.131 0 0.991
Simulated rotation [deg]: 0 -5 0

Table 5.5 Estimated parameters by non-balanced and balanced adjustments (1 outlier within 10 point matches)

1 outlier                     | Non-balanced adjustment | Balanced adjustment
Estimated translation (m)     |  0.069  0.145 0.987     | -0.125  0.061 0.990
Estimated rotation (deg)      | -0.476 -4.552 0.418     | -0.114 -5.053 -0.179
Elapsed time (s)              |  0.095                  |  0.118

Table 5.6 Estimated parameters by non-balanced and balanced adjustments (4 outliers within 10 point matches, i.e. maximum number of outliers)

4 outliers                    | Non-balanced adjustment | Balanced adjustment
Estimated translation (m)     |  0.104 -0.115 0.987     |  0.195 -0.200 0.960
Estimated rotation (deg)      |  4.138 -5.973 -15.169   |  0.302 -4.485 -0.495
Elapsed time (s)              |  0.114                  |  0.123

In principle, the balanced adjustment can tolerate outliers up to 50% of the total data if we have good initial values for our linearized model. From Tables 5.5 and 5.6 as well as Fig. 5.11, it is obvious that outliers can significantly degrade the non-balanced adjustment results: the more outliers in the data, the worse the estimation accuracy.

The yaw angle errors of both the balanced and the non-balanced adjustment during the whole process are shown as an example in Fig. 5.11, which directly shows the great advantage of the balanced adjustment when there are outliers. In contrast to the non-balanced adjustment, the balanced adjustment gives reasonable estimates when 1-4 outliers are added. The estimation accuracy degrades a little when the number of outliers reaches its maximum (i.e. 40% of the data set); this happens due to the random noise. However, the motion estimates of the balanced adjustment are still within a reasonable range.

Fig. 5.11 Yaw angle errors with different number of outliers

5.6.4 Comparisons with RANSAC

To compare the balanced adjustment (BLAVE) with the classic RANSAC algorithm, we repeat the procedure of Section 5.6.3, i.e. 1 to 4 outliers are manually added to the same motion in the turning scene, and the estimated parameters of both methods are given in Tables 5.7 and 5.8, respectively.

In principle, RANSAC can handle data sets in which more than 50% of the observations are outliers. However, in such a real application we might actually have to treat the "outliers" as the "inliers", unless we have prior knowledge about the data itself. Therefore we only test with outliers in less than 50% of the data set, which is also the breakdown point of the BLAVE estimator.
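For reference, the generic RANSAC loop can be sketched as follows (a sketch of the classic algorithm with our own names, illustrated below on a toy line-fitting problem, not the thesis's essential-matrix implementation):

```python
import numpy as np

def ransac(data, fit, error, min_samples, threshold, max_iter=500, seed=0):
    """Generic RANSAC loop.

    fit(subset) -> model; error(model, data) -> per-point errors.
    Returns the model with the largest consensus (inlier) set.
    """
    rng = np.random.default_rng(seed)
    best_model = None
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(max_iter):
        # Draw a minimal random sample and fit a candidate model.
        sample = rng.choice(len(data), size=min_samples, replace=False)
        model = fit(data[sample])
        # Points whose error is below the threshold form the consensus set.
        inliers = error(model, data) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

The error threshold and the number of iterations are exactly the tuning parameters whose choice is criticised later in this section.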



Simulated translation [m]: -0.131 0 0.991
Simulated rotation [deg]: 0 -5 0

Table 5.7 Estimated values by RANSAC and BLAVE (1 outlier within 10 point matches)

1 outlier                     | RANSAC                  | BLAVE
Estimated translation (m)     | -0.202 -0.076 0.976     |  0.069  0.006 0.998
Estimated rotation (deg)      |  0.283 -5.232 1.218     |  0.063 -4.913 0.107
Elapsed time (s)              |  0.510                  |  0.122

Table 5.8 Estimated values by RANSAC and BLAVE (4 outliers within 10 point matches, i.e. the maximum number of outliers)

4 outliers                    | RANSAC                  | BLAVE
Estimated translation (m)     |  0.098 -0.109 0.989     |  0.052  0.069 0.996
Estimated rotation (deg)      |  0.348 -4.470 -0.088    | -0.053 -4.576 0.124
Elapsed time (s)              | 11.084                  |  0.128

From Tables 5.7 and 5.8, the estimated results of RANSAC and BLAVE are very similar, but RANSAC is clearly more time-consuming. This is due to its random sampling nature: the more outliers in the data, the more sample iterations are needed to exclude all of them. With 40% outliers among the 10 point matches, RANSAC takes about 11 seconds to find them, while BLAVE takes almost the same time as in the 10% outlier situation. In principle, the computation time of BLAVE is not influenced by the number of outliers.

In order to compare the speed of the two robust estimators, we run both algorithms with different numbers of outliers. The elapsed times of the RANSAC and BLAVE algorithms in the different situations are shown in Fig. 5.12.


Fig. 5.12 Elapsed time with different number of outliers

From Tables 5.7 and 5.8 as well as Fig. 5.12, we can see that the numbers of RANSAC iterations needed in our tests are actually much larger than the theoretical minimum numbers. Although the number of RANSAC iterations cannot be predicted, since its behaviour is not repeatable (Zuliani 2012, p. 45), the increasing trend of the computing time can easily be forecast from the test results.

Moreover, when we add more data, which may contain some outliers, to the original data set (i.e. the total amount of data increases), the RANSAC computing time also increases (this is also the case for the balanced adjustment BLAVE, although more moderately). It is natural to conclude that many iterations are needed for large data sets, since RANSAC is based on random samples of the total data set. Images are a medium carrying a high amount of information, and it is common that hundreds or thousands of edge points are detected in one image. Running the RANSAC algorithm would then consume much more time, and it is therefore not suitable for real-time applications.

In addition, the choice of the error threshold is a basic problem for RANSAC, and there is no general method for choosing it (Zuliani 2012, p. 44). The suitable value of the threshold depends on the nature of the problem and the data set.



For scenes with bad geometry, RANSAC may have difficulty distinguishing between inliers contaminated with noise and outliers, since the criterion for judging outliers is a hypothesis test, and the noise may cause large deviations of the estimated parameters. If the noise causes the fitting error of some points to exceed the threshold, these points (in fact inliers) are mistakenly treated as outliers.

In our test, the sum of squared residuals is chosen as the fitting error for the RANSAC-estimated parameters. Therefore the fitting error follows a χ² distribution with n − 5 degrees of freedom (n is the number of matches, and 5 point matches is the minimum number for motion estimation). This theoretical threshold is determined only by the degrees of freedom and the risk level, without practical considerations of the problem. As a result, if noise or an inaccurate model causes the fitting error to exceed this threshold (e.g. when there is bad geometry or even an ill-conditioned matrix in the model), some inliers may be treated as outliers by mistake. We noticed that this phenomenon, caused by an inappropriate threshold, occurred many times during our tests. In this sense, RANSAC is not so robust and reliable under these situations. BLAVE, on the other hand, does not need any threshold or empirical values, and the whole algorithm can be carried out automatically.

5.7 Summary

In this chapter, the following content was included and conclusions were drawn:

- Introduction of the pinhole camera and the image projection process
- Basic principles of camera motion estimation
- Camera motion estimation algorithm based on point and line matches
- Optimal estimation (L1- and L2-norm) and robust estimation
- Basic principles of BLAVE and RANSAC with an analysis of their advantages and disadvantages
- The method to support the IMU orientation with the camera rotation estimates, as well as the total observation equation for the GPS/IMU/camera system
- Tests of the proposed camera motion estimation algorithm based on point/line matches under different noise and outlier situations. Line matches are strong constraints on motion estimation, especially on the rotation angles. The BLAVE method can successfully remove many possible outliers (less than 50%) from the data set, while its computation time is not influenced by the number of outliers.


Chapter 6

MEMS applications and performance evaluation

6.1 Introduction

MEMS sensors continue to be a hot topic in both research and the market. The new trends of the MEMS industry and market are briefly introduced first. To evaluate the performance of MEMS motion chips in real applications, typical MEMS motion sensors are chosen for simulation tests based on the stochastic error modelling and analysis in Chapter 2. Discussion and conclusions are given based on the test results and analysis.

6.2 MEMS industry and market background

For many navigation applications, improved accuracy is not necessarily the most important issue; meeting the required performance at reduced cost and size is. In particular, a small navigation sensor size allows the introduction of guidance, navigation and control into applications previously considered out of reach (Barbour, 2010). MEMS technology is one of the most active development areas, and MEMS sensors have been widely applied in different fields since their early emergence. Material development and accuracy improvement of MEMS are continuing research focuses of many large companies and university labs all over the world.

The MEMS market is mainly divided into three groups: low-accuracy consumer electronic products, middle-accuracy products for different industrial applications, and high-accuracy military applications (Nasiri, 2005). In consumer electronic products, MEMS sensors are normally of small size, low cost and narrow temperature range. As a result, their accuracy is low, and they are widely used in consumer electronics such as cell phones/tablets, cameras and GPS navigators.

The middle-accuracy MEMS are mostly delivered as modules, which are suitable for industrial environments and at a medium price level. The main application fields are industrial automation, medical devices, robotics and automobile safety systems.

The MEMS for military and space products are at a high accuracy level and have a wide operational temperature range. Typical application fields are navigation and attitude control systems for airplanes/missiles, optical aiming systems and so on.

At each accuracy level there are some large companies that supply different sensors or modules for the market, such as the sensors introduced in Section 6.3. In recent years, the potential of the MEMS market has been growing steadily, and this trend is forecast to continue, as shown in Fig. 6.1. We can conclude that the MEMS market has great potential and a bright future.

Fig.6.1 MEMS market forecast per application (from Yole Developpement, 2011)

Based on 20 years of experience in MEMS market analysis, Yole Developpement analysts stated in the 2012 edition of "Status of the MEMS Industry" that MEMS will continue to see steady, sustainable double-digit growth for the next six years, with a 20% compound annual growth rate (CAGR) in units and 13% growth in revenues (Yole Developpement, 2012 a), as shown in Fig. 6.2.

Yole Developpement also expects continued strong growth in motion sensing, as these sectors will increasingly dominate the MEMS market, making up almost half of the overall market in 2017, with accelerometers, gyroscopes, magnetometers and combos (combined gyro and accelerometer) accounting for about 25% of the total market.

At present, different MEMS sensors are at different maturity stages, i.e. the accuracy and stability differ a lot between application fields, as shown in Fig. 6.3. Among these sensors, MEMS accelerometer technology is almost mature and widely used in automobile safety products, such as airbags and ABS (anti-lock braking system). The main problems with MEMS sensors are large systematic errors (e.g. zero-value offset, temperature dependence, errors due to dynamics or measurement range) and stochastic errors (e.g. random walk, bias instability, quantisation error). On the other hand, both MEMS materials and production technologies are improving quickly. It is almost a common view in the industry that the accuracy and performance obstacles will be solved to a great extent within a few years, and that MEMS technology will greatly change the situation in many application fields.


Fig. 6.2 MEMS market forecast (from Yole Developpement, 2012 a)

Fig.6.3 Maturity of MEMS devices in 2012 (from Yole Developpement, 2012 b)


6.3 Augmented Kalman filter with MEMS sensor errors

In order to evaluate the MEMS chip performance, the coloured noise model should be augmented into the Kalman filter. As discussed in Chapter 2, the equivalent noise model representing the three coloured noise processes (i.e. random walk, bias instability and rate ramp) within a MEMS sensor is given by Eq. (2.54):

z^{(4)}(t) + a_1 z^{(3)}(t) + a_2 z^{(2)}(t) + a_3 z^{(1)}(t) + a_4 z^{(0)}(t) = e_0 w^{(3)}(t) + e_1 w^{(2)}(t) + e_2 w^{(1)}(t) + e_3 w^{(0)}(t)   (6.1)

Please note that the superscript of each term denotes the order of the derivative.

For the differential equation, Eq. (6.1), there are different state space representations, among which the most important ones are the controllable and observable canonical forms (Ogata 2002, pp. 649-650). To obtain the observable canonical form, the main steps are given below (the detailed derivation can be found in Ogata (2002, pp. 691-692)). First, the transfer function of Eq. (6.1) can be represented as

z(s)/w(s) = (e_0 s^3 + e_1 s^2 + e_2 s + e_3) / (s^4 + a_1 s^3 + a_2 s^2 + a_3 s + a_4)   (6.2)

where z(s) and w(s) are the Laplace transforms of z(t) and w(t), respectively. Then let us rewrite Eq. (6.2) as (for simplicity, we just use the abbreviated notation z and w instead of z(s) and w(s))

s^4 z + s^3 (a_1 z − e_0 w) + s^2 (a_2 z − e_1 w) + s (a_3 z − e_2 w) + (a_4 z − e_3 w) = 0   (6.3)

By dividing the whole equation by s^4 and rearranging, we obtain

z = s^{−1} (e_0 w − a_1 z) + s^{−2} (e_1 w − a_2 z) + s^{−3} (e_2 w − a_3 z) + s^{−4} (e_3 w − a_4 z)   (6.4)

Here we introduce new variables as follows:

x_4(s) = s^{−1} (e_0 w − a_1 z + x_3)
x_3(s) = s^{−1} (e_1 w − a_2 z + x_2)
x_2(s) = s^{−1} (e_2 w − a_3 z + x_1)
x_1(s) = s^{−1} (e_3 w − a_4 z)

Then the observable canonical form representation of each of the 6 MEMS sensors with the noise model of Eq. (6.1) becomes

ẋ_i(t) = F_i x_i(t) + G_i w_i(t)   (6.5)

and   z_i(t) = H_i x_i(t)   (6.6)


where i = 1, 2, ..., 6 represents each of the inertial sensors (3 gyros and 3 accelerometers), x_i is a four-dimensional state vector, corresponding to the derivatives of z in Eq. (6.1), and w_i is white noise. The coefficient matrices are given as (each with the coefficients as defined in Eq. 6.1)

F_i = [ 0  0  0  −a_4
        1  0  0  −a_3
        0  1  0  −a_2
        0  0  1  −a_1 ],   G_i = [ e_3  e_2  e_1  e_0 ]^T,   H_i = [ 0  0  0  1 ]

The process noise covariance matrix can be obtained as

Q_{x_i} = E[ G_i w_i(t) w_i^T(t) G_i^T ] = G_i Σ_i G_i^T   (6.7)

where E[·] is the stochastic expectation operator, and

Σ_i = E[ w_i(t) w_i^T(t) ] = diag( σ_0^2, σ_1^2, σ_2^2, σ_3^2 ).

Therefore we obtain

Q_{x_i} = G_i Σ_i G_i^T = [ e_3 e_3  e_3 e_2  e_3 e_1  e_3 e_0
                            e_2 e_3  e_2 e_2  e_2 e_1  e_2 e_0
                            e_1 e_3  e_1 e_2  e_1 e_1  e_1 e_0
                            e_0 e_3  e_0 e_2  e_0 e_1  e_0 e_0 ]

(each element scaled by the corresponding variance in Σ_i), which includes the coloured noise coefficients in Eq. (6.1).
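The construction above can be sketched numerically. In the sketch below the coefficients a_1...a_4 and e_0...e_3 are hypothetical placeholders (in the thesis they follow from the Chapter 2 noise model), and the process noise is simplified to a scalar driving white noise so that Q reduces to the outer product of the e coefficients:

```python
import numpy as np

# Hypothetical coefficients of Eq. (6.1); real values come from Chapter 2.
a = np.array([0.8, 0.3, 0.05, 0.01])   # a_1 ... a_4
e = np.array([1.0, 0.5, 0.2, 0.1])     # e_0 ... e_3

# Observable canonical form of Eqs. (6.5)-(6.6)
F = np.zeros((4, 4))
F[1:, :3] = np.eye(3)        # ones on the sub-diagonal
F[:, 3] = -a[::-1]           # last column: -a_4, -a_3, -a_2, -a_1 (top to bottom)
G = e[::-1].reshape(4, 1)    # [e_3, e_2, e_1, e_0]^T
H = np.array([[0.0, 0.0, 0.0, 1.0]])

# Sanity check: the characteristic polynomial of F reproduces the
# denominator of the transfer function in Eq. (6.2).
assert np.allclose(np.poly(F), [1.0, *a])

# Process noise covariance of Eq. (6.7), here with a single scalar driving
# white noise of variance sigma2, giving Q = sigma2 * G G^T.
sigma2 = 1e-4
Q = sigma2 * (G @ G.T)
```

The assertion confirms that the companion-form F indeed carries the polynomial coefficients a_1...a_4, which is the defining property of the observable canonical form.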

For convenience, the INS error equations in the e-frame are rewritten here as (the detailed notation definition can be found in Section 3.2.5)

δṙ^e = δv^e   (6.8 a)
δv̇^e = N^e δr^e − 2Ω^e_ie δv^e + (a^e ×) ε^e + R^e_b b + w_A   (6.8 b)
ε̇^e = −Ω^e_ie ε^e + R^e_b d + w_G   (6.8 c)

Now we substitute the noise model derived in Section 2.5 into the noise terms, i.e. we apply the following expressions:

b + w_A = a^b_white + a^b_quant + a^b_col   (6.9 a)

d + w_G = ω^b_white + ω^b_quant + ω^b_col   (6.9 b)


where each term on the right side is the same as in Eqs. (2.29) and (2.30). As discussed in Section 2.5, the quantization error can be treated as white noise. Therefore this term in Eq. (6.9) can be merged into the white noise term, and Eq. (6.9) actually becomes

b = a^b_col   (6.10 a)

d = ω^b_col   (6.10 b)

The physical meaning of Eqs. (6.10 a, b) is quite straightforward, i.e. we use a better equivalent coloured noise model to replace the simple sensor noise assumption (random constant + white noise) as described in Eqs. (3.40) and (3.41).

If we substitute Eqs. (6.10 a, b) into Eqs. (6.8 a, b, c), a new system model can be obtained

δṙ^e = δv^e   (6.11)
δv̇^e = N^e δr^e − 2Ω^e_ie δv^e + (a^e ×) ε^e + R^e_b a^b_col + w_A   (6.12)
ε̇^e = −Ω^e_ie ε^e + R^e_b ω^b_col + w_G   (6.13)

These equations can be rewritten in matrix form as:

ẋ = F x + G (u_w + u_c) = F x + G u_c + w_x   (6.14)

with the state vector, transition matrix and process design matrix:

x = [ (δr^e)^T  (δv^e)^T  (ε^e)^T ]^T

F = [ 0_{3×3}   I          0_{3×3}
      N^e      −2Ω^e_ie   (a^e ×)
      0_{3×3}   0_{3×3}   −Ω^e_ie ]

G = diag( 0_{3×3}, R^e_b, R^e_b )

u_w = [ 0_{3×1}^T  w_A^T  w_G^T ]^T is the augmented system white noise, and w_x = G u_w is the white noise for the state vector; u_c = [ 0_{3×1}^T  (a^b_col)^T  (ω^b_col)^T ]^T is the augmented coloured noise, which contains the three coloured noise components random walk, bias instability and rate ramp described in Section 2.5.2.

In Eq. (6.14), we can see that the coloured noise term u_c should be augmented into the Kalman filter, and the system white noise term u_w, augmented by the equivalent quantization noise, is used to compute the initial process noise covariance matrix. As discussed in Chapter 2, the quantization noise only influences the process noise covariance matrix, since the augmented terms can be transformed into equivalent white noise. Therefore, in the derivation and analysis we will just keep the white noise term w_x (neglecting the quantization noise terms for convenience) and focus on the coloured noise terms.


From Eqs. (6.11) - (6.13), we can see that the coloured noise only exists in the velocity and attitude error equations, i.e. the coloured noise vector of Eq. (6.14) is given by

u_c = [ 0_{3×1}^T  (a^b_col)^T  (ω^b_col)^T ]^T   (6.15)

If we use z_1(t), z_2(t) and z_3(t) to represent the coloured noise components of the gyros, and z_4(t), z_5(t) and z_6(t) those of the accelerometers, we can rewrite Eq. (6.15) as

u_c = [ 0_{3×1}^T  z_1(t)  ...  z_6(t) ]^T   (6.16)

where each coloured noise zi(t) follows from the differential equation (6.1). Hence we should build up a state space model for each sensor, i.e. the whole system of coloured noise is the combination of 6 differential equations in the form of Eq. (6.1).

By substituting the state space representation Eq. (6.6) into Eq. (6.16), we get

u_c(t) = [ 0_{3×1}^T  H_1 x_1(t)  ...  H_6 x_6(t) ]^T   (6.17)

Therefore the coloured noise model for the IMU sensors is the combination of the six sensor models shown in Eq. (6.17).

To derive the augmented system model, we define the states in Eq. (6.17) as a new vector y(t) = [ H_1 x_1(t), ..., H_6 x_6(t) ]^T, and therefore the system state space representation can be given with Eqs. (6.5) and (6.6) as

Ẏ(t) = F_Y Y(t) + W_Y(t)   (6.18)

Z(t) = H Y(t)   (6.19)

where the state vectors are

Y(t) = [ y_1^T(t)  ...  y_6^T(t) ]^T
W_Y(t) = [ (G_1 w_1(t))^T  ...  (G_6 w_6(t))^T ]^T
Z(t) = [ 0_{3×1}^T  z_1(t)  ...  z_6(t) ]^T


and the other matrices are given as

F_Y(t) = diag( F_1, ..., F_6 )
H = [ 0_{24×3}  (diag( H_1, ..., H_6 ))^T ]^T

From Eqs. (6.17) and (6.19), we get the relation

u_c(t) = diag( 0_{3×3}, R^e_b, R^e_b ) Z(t) = H_Y Y(t)   (6.20)

where H_Y = diag( 0_{3×3}, R^e_b, R^e_b ) H.

Since the driving white noises w_i(t), i = 1, 2, ..., 6 are independent of each other, the process noise covariance matrix follows by means of Eq. (6.7) as

Q_Y(t) = diag( Q_{y_1}, ..., Q_{y_6} )   (6.21)

By substituting Eq. (6.20) and (6.18) into Eq. (6.14), we obtain

ẋ = F x + G u_c + w_x = F x + G H_Y Y + w_x

Let us rewrite the above equation in matrix form, and we get the augmented system model with coloured noise

[ ẋ(t) ]   [ F   G H_Y ] [ x(t) ]   [ w_x(t) ]
[ Ẏ(t) ] = [ 0   F_Y   ] [ Y(t) ] + [ W_Y(t) ]   (6.22)

The process noise covariance matrix of the new system in Eq. (6.22) can be obtained in a similar way as Eq. (6.21):

Q(t) = diag( Q_x, Q_Y )   (6.23)
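The block assembly of Eqs. (6.22)-(6.23) can be sketched as follows. All blocks are filled with placeholder values (random entries, an identity stand-in for the rotation matrices, and identical F_i blocks for brevity), since only the dimensions and the assembly pattern are illustrated here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder blocks; in the real filter these come from Eqs. (6.14) and (6.20).
F = rng.standard_normal((9, 9))                  # navigation error dynamics
G = np.zeros((9, 9)); G[3:, 3:] = np.eye(6)      # stand-in for diag(0, R_b^e, R_b^e)
H_Y = rng.standard_normal((9, 24))               # coloured-noise output map, Eq. (6.20)
F_Y = np.kron(np.eye(6), rng.standard_normal((4, 4)))  # diag(F_1,...,F_6), 24 x 24

# Augmented transition matrix of Eq. (6.22): upper-triangular block structure
F_aug = np.block([[F, G @ H_Y],
                  [np.zeros((24, 9)), F_Y]])

# Augmented process noise covariance of Eq. (6.23): block diagonal
Q_aug = np.block([[np.eye(9), np.zeros((9, 24))],
                  [np.zeros((24, 9)), np.eye(24)]])
```

The zero lower-left block of F_aug reflects that the coloured noise states Y drive the navigation errors x, but not vice versa.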

6.4 System observability analysis

The efficiency of state estimation depends on the system observability. If the system is observable, the state estimate is only affected by the process and measurement noise. However, if the system is not completely observable, we cannot obtain accurate estimates of all states even if the noise were negligible (Goshen-Meskin and Bar-Itzhack 1992a).

Before we discuss the observability of a GPS/INS integrated system, the detailed definition of observability and some important conclusions should be given.


6.4.1 Observability definition and properties

Consider a linear system described by the following model

ẋ(t) = F(t) x(t)   (6.24)
y(t) = H(t) x(t)   (6.25)

where F(t) and H(t) are n × n and p × n matrices with continuous function elements. The dynamics equation of the system is observable at t_0 if there exists a finite t_1 > t_0 such that for any state x_0 at time t_0, the output y(t) over the interval [t_0, t_1] suffices to determine the state x_0.

Two important theorems regarding linear system observability are given as follows (Chen 1999, p. 180 and Ogata 2002, Section 9.7).

Theorem 1: Suppose F(t) and H(t) are n−1 times continuously differentiable functions, i.e. the system is a linear time-variant (LTV) system. Then the system is observable at t_0 if there exists a finite t_1 > t_0 such that

rank [ N_0(t_1)^T  N_1(t_1)^T  ...  N_{n−1}(t_1)^T ]^T = n   (6.26)

where the sequence is defined by

N_0(t) = H(t)

N_{k+1}(t) = N_k(t) F(t) + (d/dt) N_k(t),   k = 0, 1, 2, ..., n−2   (6.27)

It can easily be proved that Eq. (6.26) is equivalent to finding a state vector x(t) such that

y(t) = N_0(t) x(t) = 0
ẏ(t) = N_1(t) x(t) = 0
...
y^{(n−1)}(t) = N_{n−1}(t) x(t) = 0

If there is no nonzero state fulfilling the above equations, the system is observable at time t_0.


Theorem 2: Suppose F(t) and H(t) are constant, i.e. the system is a linear time-invariant (LTI) system. Then the system is observable at t_0 if and only if

rank [ H^T  (HF)^T  ...  (HF^{n−1})^T ]^T = n   (6.28)

If the LTI system is observable, it is observable at any initial time, and the initial state x_0 can be determined over any nonzero time interval. If the rank of Eq. (6.28) equals m < n, then there are n − m unobservable states in the system.
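Theorem 2 can be checked numerically. As an illustrative sketch (not the full 33-state analysis), take a simplified static error model of the kind derived in the next subsection: position, velocity and attitude errors, Earth rotation neglected, and the specific force equal to gravity. The observability matrix then loses rank, reflecting the well-known unobservable attitude component about the specific-force axis:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

n = 9
A = skew([0.0, 0.0, -9.81])          # specific force ~ gravity when static
Z, I3 = np.zeros((3, 3)), np.eye(3)

# Simplified static error dynamics (position, velocity, attitude errors),
# with the Earth-rotation term neglected for this illustration.
F = np.block([[Z, I3, Z],
              [Z, Z,  A],
              [Z, Z,  Z]])
H = np.block([I3, Z, Z])             # position errors observed directly

# Observability matrix of Eq. (6.28)
O = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])
rank = np.linalg.matrix_rank(O)
print(rank)  # 8 < 9: one unobservable state
```

The rank deficiency arises because skew(a) has rank 2: the attitude error along the specific-force direction never appears in the measured position.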

6.4.2 Observability analysis of MEMS coloured noise in integrated system

Let us rewrite the INS error equations, Eq. (6.8), in matrix form

ẋ(t) = [ δṙ^e; δv̇^e; ε̇^e ] = [ δv^e;  N^e δr^e − 2Ω^e_ie δv^e + (a^e ×) ε^e + R^e_b b + R^e_b w_A;  −Ω^e_ie ε^e + R^e_b d + R^e_b w_G ]   (6.29)

We consider that the integrated system is made up of a single GPS antenna based on RTK and a low-accuracy MEMS IMU. The positioning accuracy of GPS is assumed at centimetre level, while the MEMS has a much higher noise level than the Earth rotation rate. For vehicle navigation applications, we assume that the vehicle is travelling within a relatively small area (e.g. within 100 km) and over a short period of time (e.g. within several hours), and therefore we can neglect the lever arm error o^b_G between the GPS antenna and the MEMS (see Sect. 3.3.3) as well as the gravity error in the following analysis.

In Eq. (6.29), the magnitude of the Earth rotation rate ω^e_ie is of the order of 10^{-5} and the maximum singular value of N^e is of the order of 10^{-6} (see the equation for N^e under Eq. (3.25)). Therefore these two terms are considered negligible in the velocity error equation. We also assume that the IMU is calibrated before the mission, so that the sensor bias has been compensated.

The simplified GPS/INS integration error equations can therefore be formulated as

δṙ^e = δv^e
δv̇^e = (a^e ×) ε^e + R^e_b w_A
ε̇^e = −Ω^e_ie ε^e + R^e_b w_G

i.e.   ẋ = F x + [ 0;  R^e_b w_A;  R^e_b w_G ]   (6.30)

where

F = [ 0_{3×3}   I         0_{3×3}
      0_{3×3}   0_{3×3}   (a^e ×)
      0_{3×3}   0_{3×3}   −Ω^e_ie ]

The observation equation is the positioning difference in Eq. (3.82):

L^e = p^e_G − ( p^e_i + R^e_b o^b_G ) = δr^e + (o^e_G ×) ε^e   (6.31)

Then we obtain the augmented system model with coloured noise from Eqs. (6.22) and (6.31):

[ δṙ^e; δv̇^e; ε̇^e; Ẏ ] = [ F        G H_Y
                             0_{24×9}  F_Y ] [ δr^e; δv^e; ε^e; Y ]   (6.32)

and

L^e = [ I_3  0_{3×3}  (o^e_G ×)  0_{3×24} ] [ δr^e; δv^e; ε^e; Y ]   (6.33)

It is clear that the system model in Eqs. (6.32) and (6.33) is LTI when the system is static. Note that, if the vehicle is static, the matrices H and F in Eq. (6.28) for GPS-supported inertial navigation remain constant with time, i.e. adding more sampling epochs will not increase the rank of Eq. (6.28). What is more, it is well known that there are three unobservable states during the static alignment process in the n-frame (Jiang & Lin 1992; Bar-Itzhack & Berman 1988). This is easy to understand, since there is not enough output information to estimate each sensor error when the system is static (during the ground alignment process).

It has been shown by experiments that the system observability can be improved by means of different manoeuvres. The improvement is explained by the error model becoming time varying, and this time-varying nature excites the mode that is unobservable at rest (Rhee et al. 2004). However, in such cases the system becomes LTV during the manoeuvres, involving rotation or unsteady specific force (Rhee et al. 2004), which makes a rigorous observability analysis much more complicated. Goshen-Meskin et al. (1992 a, b) proposed the piece-wise constant system method, but this method is not practical in many high-dimensional and high-dynamics applications. Another method uses the degree of observability based on SVD (Grewal & Andrews 2008, pp. 543-544) or the eigenvalues and eigenvectors of the error covariance matrix (Ham & Brown, 1983). Two basic manoeuvres, linear acceleration and steady turn, are analysed in Rhee et al. (2004), and it is shown that the number of observable modes is increased by at least 2 during such a manoeuvre. It has also been proved that all unobservable states in the time-invariant error dynamics model, i.e. attitude, GPS antenna lever arm, and the component of gyro bias along the direction of the specific force, can be made observable through manoeuvring (Hong et al. 2005). Therefore it is reasonable to infer that the coloured noise term in Eq. (6.14) is observable through a combination of different manoeuvres, since both rotation and acceleration are quite common in a vehicle navigation process. The observability of the whole state vector can also be shown by the error covariance matrix of the Kalman filter during field test experiments (Hong et al. 2005). In the following sections, we assume that u_c(t) is observable without a detailed rigorous analysis of the time-variant system model under different manoeuvre scenarios. Since the position error is measured directly, the observability of the position errors is guaranteed. As a result, Eq. (6.17) can be simplified by deleting the observable positional errors (the first three elements):

u_c(t) = [ R^e_b    0_{3×3}
           0_{3×3}  R^e_b ] [ z_1(t); ...; z_6(t) ]   (6.34)

From Eq. (6.34), we get

[ z_1(t); ...; z_6(t) ] = [ (R^e_b)^{−1}  0_{3×3}
                            0_{3×3}       (R^e_b)^{−1} ] u_c(t) = [ R^b_e    0_{3×3}
                                                                    0_{3×3}  R^b_e ] u_c(t)   (6.35)

Therefore each coloured noise zi(t), i=1,...,6 is observable.

By means of Eqs. (6.18) and (6.19), which give the observable canonical form representation of the equivalent coloured noise z_i(t) in each sensor, it is easy to verify that the system in Eqs. (6.32) and (6.33) is observable, i.e. given z_i(t), each state x_i(t) is observable.

In summary, the coloured noise term in the system dynamic model can be observed through a combination of different manoeuvres. Although a rigorous overall analysis of the time-variant system under all different manoeuvres is overwhelmingly complicated, and no unified analysis approach for time-variant systems exists at present, the improvement of system observability through manoeuvres can be inferred, and it can also be verified by both simulations and real tests.


6.5 Simulation tests for MEMS chip performance evaluation

There are many research papers on MEMS sensor performance evaluation at the unit level, e.g. Niu et al. (2006) and Li et al. (2008). However, the topic of MEMS motion sensor performance evaluation at the single chip level is still almost a blank area: no performance report based on numerical results could be found up to May 2013, as these MEMS chips have been on the market only within the last few years.

We take the motion sensors in iPhone 4 as an example for the performance evaluation tests. The iPhone 4 is the first portable consumer device to incorporate a three-axis accelerometer (STMicroelectronics LIS331DLH), a three-axis gyroscope (STMicroelectronics L3G4200D), and a three-axis electronic compass. The integration of these three devices provides full nine degrees-of-freedom motion sensing, as shown in Fig. 6.4 (the compass chip not included).

Fig. 6.4 iPhone 4 main board top side with the MEMS chips L3G4200D and LIS331DLH (from Dixon-Warren, 2010)

Besides the volume advantage, the low price of MEMS chips is an overwhelming advantage for civilian applications; it also makes the idea of navigation with a sensor array feasible, which can reduce the navigation errors of the whole system and is tested in Section 6.5.3. The price of the MEMS gyroscope chip is about $3 and that of the accelerometer about $1.5, far below the price level of a tactical grade IMU, such as the ISIS-IMU tested in Chapter 2 at a price level of $10k. There are also combined gyroscope and accelerometer chips (also named combos), such as the MPU6000 at a price level of $15. Therefore the MEMS chips have great market potential against conventional sensors in the future, as the MEMS technologies gradually mature.

Page 161: Key Technologies in Low-cost Integrated Vehicle Navigation ...656086/FULLTEXT01.pdf · Dr. Patric Jansson, Ms. Uliana Danila for their useful advices and everyday company; Thanks

146

6.5.1 Test set-up and the trajectory

In order to evaluate the navigation performance of the MEMS sensors under realistic conditions (i.e. the position and orientation error growth), we design a driving path with different manoeuvres as shown in Fig. 6.5. The vehicle first accelerates to the north for 10 seconds, climbs for 10 seconds along the same direction to 35 metres, then turns 90 degrees to the west and drives on for another 5 seconds. The whole driving trajectory takes about 60 seconds.

The initial values and set-up parameters for all the simulation tests are taken from the MEMS chip datasheets (STMicroelectronics, 2009 - 2011) or are experience values (marked with *):

- Initial position error: σ_X = σ_Y = σ_Z = 1 m (DGPS precision)
- Initial misalignment: σ_Roll = σ_Pitch = 20′, σ_Yaw = 3° (magnetic compass precision)
- Initial standard error of gyro drift*: σ_d0 = 2.5 deg/h
- Initial standard error of accelerometer bias*: σ_b0 = 4 × 10^{-3} m/s²
- Random walk parameter for all gyros: w_g = 1.8 deg/√h
- Random walk parameter for all accelerometers: w_a = 218 µg/√Hz ≈ 2.1 × 10^{-3} m/s/√s
- Quantization error for all gyros: q_g = 2^{-n} · FS = 13.734 deg/h (FS = 250 deg/s is the measurement range, and n = 16 bits output)
- Quantization error for all accelerometers: q_a = 2^{-n} · FS = 2.99 × 10^{-4} m/s² (FS = 2 g is the measurement range, and n = 16 bits output)
- Ramp noise parameter for all gyros*: r_g = 7 deg/h
- Ramp noise parameter for all accelerometers*: r_a = 4.3 × 10^{-6} m/s³
- Scale factor error for all gyros and accelerometers: sf = 0.2%

All tests are carried out at latitude 59° and longitude 18° (Stockholm).
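For reference, the set-up above can be collected in a single configuration structure. The field names below are our own (hypothetical), and the values simply restate the list; the quantization step is derived from the full-scale range and the output word length:

```python
# Simulation set-up of Section 6.5.1 collected in one place.
# Field names are illustrative; * marks experience values.
sim_config = {
    "initial_position_error_m": 1.0,                 # DGPS precision, each axis
    "initial_misalignment_deg": {"roll": 20 / 60, "pitch": 20 / 60, "yaw": 3.0},
    "gyro_drift_std_deg_per_h": 2.5,                 # *
    "accel_bias_std_m_s2": 4e-3,                     # *
    "gyro_random_walk_deg_per_sqrt_h": 1.8,
    "accel_noise_density_ug_per_sqrt_hz": 218,
    "gyro_full_scale_deg_per_s": 250,
    "accel_full_scale_g": 2,
    "adc_bits": 16,
    "scale_factor_error": 0.002,
    "latitude_deg": 59,
    "longitude_deg": 18,                             # Stockholm
}

# Quantization step from full scale and word length, e.g. for the gyro:
gyro_lsb_deg_per_s = (sim_config["gyro_full_scale_deg_per_s"]
                      / 2 ** sim_config["adc_bits"])
print(round(gyro_lsb_deg_per_s * 3600, 3), "deg/h")  # cf. the list above
```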

Along the driving trajectory shown in Fig. 6.5, the error-free observations are generated first at each sampling epoch, and then different noise components or sensor errors according to the list above are added manually to simulate the real-time observations along this trajectory.


Fig. 6.5 Test driving path

6.5.2 INS navigation error propagations

Since the MEMS chips have large errors, it is meaningless to show results over a longer period, when the error already exceeds several hundred metres. Therefore only one minute of the error propagation is shown. The position and orientation standard errors are given in Figs. 6.6 and 6.7, respectively.

We can see that the northing and easting position errors grow exponentially with time, while the height precision degrades quite slowly. This is due to the fact that the azimuth misalignment is quite large. This misalignment is projected onto the horizontal directions and behaves as a large bias, which causes quick error growth in the horizontal directions. A detailed discussion of the "bias" caused by misalignment can be found in Horemuz (2006, pp. 50-52). Note that both the horizontal position errors and the height error grow exponentially, but the height error grows more slowly during the initial period (e.g. within the first 60 seconds, as shown in Fig. 6.6) and later grows even faster than the horizontal errors (Horemuz 2006, p. 52).


Fig. 6.6 Position standard errors

Fig. 6.7 Orientation standard errors


It is important to notice that the standard error along the east direction is larger than in the north direction. The error differences between these two horizontal directions are relatively small but non-negligible, and they are due to the scale factor error and the rotation along east direction. We will see the differences more clearly in Fig. 6.10, and a more detailed analysis will be given there.

In principle, the orientation standard errors grow almost linearly and in the same way for the roll, pitch and yaw angles, as shown in Fig. 6.7. Since the initial value of the yaw angle is determined by a magnetic compass, it has a large initial error (3 degrees). The roll and pitch angles reach higher accuracies during the levelling procedure in the initialisation stage.

6.5.3 Performance evaluation and analyses

Before we start to evaluate the test results, the limitations of the algorithm should be addressed. There are small but non-negligible limitations in our navigation algorithm, i.e. the numerical integration of the sensor outputs. To begin with, the noise-free navigation error along the driving path is given in Fig. 6.8. The navigation computation uses perfect initial values and noise-free observations, and therefore we should theoretically get zero errors all the time. In reality there are deviations from the theoretical results, especially when there is a turning manoeuvre, as shown in Fig. 6.8. This is due to the algorithm limitations, including the finite sampling interval and the integration operations (GPSoft, 1998). In reality, the car moves along a continuous trajectory, but we can only obtain observations at a finite sampling frequency. What is more, the imperfect integration operations within the navigation algorithm also introduce numerical errors.

From Fig. 6.8, we can see that the computed position errors grow during and after the rotation manoeuvres due to the algorithm limitations. These errors could be reduced with a higher sampling rate, but cannot be completely eliminated, as shown in Fig. 6.9 with 10 times higher sampling rate than in Fig. 6.8.

From Figs. 6.8 and 6.9, we can see that the error curves are basically similar to each other, but the scale in Fig. 6.9 is just 1/10 of that in Fig. 6.8. Hence we can easily notice the accuracy improvement due to the higher sampling rate. The errors along the north and east directions are reduced to about 1/5 of the original values, while the vertical position errors are reduced to about 1/10 of the original values.
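The effect of the sampling rate on the integration error can be illustrated with a toy dead-reckoning example: first-order (Euler) integration of a constant-rate turn, compared with the exact circular arc. The speed and turn rate below are arbitrary illustration values, not the thesis's strapdown algorithm or test trajectory:

```python
import math

SPEED = 10.0                  # m/s, illustrative
OMEGA = math.radians(9.0)     # rad/s turn rate, illustrative (90 deg in 10 s)
T = 10.0                      # s, total duration

def dead_reckon(rate_hz):
    """First-order (Euler) dead reckoning of a constant-rate turn."""
    dt, x, y, heading = 1.0 / rate_hz, 0.0, 0.0, 0.0
    for _ in range(int(T * rate_hz)):
        x += SPEED * math.cos(heading) * dt
        y += SPEED * math.sin(heading) * dt
        heading += OMEGA * dt
    return x, y

# Exact end point of the circular arc
r = SPEED / OMEGA
x_true, y_true = r * math.sin(OMEGA * T), r * (1.0 - math.cos(OMEGA * T))

errors = {}
for f in (10, 100):
    x, y = dead_reckon(f)
    errors[f] = math.hypot(x - x_true, y - y_true)
    print(f"{f:3d} Hz: end-position error {errors[f]:.3f} m")
```

Since the global error of Euler integration is O(Δt), the 100 Hz run ends up roughly ten times closer to the true end point than the 10 Hz run, mirroring the scale change between Figs. 6.8 and 6.9.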


Fig. 6.8 Position errors due to algorithm limitations (sampling rate: 10 Hz)

Fig. 6.9 Position errors due to algorithm limitations (sampling rate: 100 Hz)


To reduce the algorithm-introduced errors, we set the sampling rate to 100 Hz during the following tests based on the analysis above. Please note that this sampling frequency (100 Hz) is a normal value for a typical MEMS. First, the single MEMS chip position errors along the test trajectory are given in Fig. 6.10. The three error curves basically follow their standard error curves and are within reasonable ranges. We can see that the errors along the east direction are generally larger than in the north direction. This phenomenon is due to the scale factor of the rotation observations and the rotation integration in the east direction. In the north direction there is only a constant acceleration, carried out during the first 10 seconds of the driving test. During this acceleration period, the car is still at a low speed and the sensor errors have not yet caused significant influence on the positioning accuracy. Therefore there is no large error during the initial acceleration period along the north direction.

Fig. 6.10 Single MEMS chip positioning errors (one instance)

Note that Figs. 6.10-6.13 are titled with "one instance", which means that the error curves shown in these figures are just single outputs; these curves would change each time due to the nature of the stochastic noise components.

Since the MEMS chip sensors are at a very low price level, it is possible to build a MEMS sensor array on the vehicle with several sensors of the same kind, and the mean value of the sensor array output is expected to be more accurate than that of a single sensor. In addition, multiple sensor fusion can also increase the reliability of the navigation system.


To verify this idea, we run the test trajectory with 5 MEMS sensors at the same time and compute the mean values of the position errors. The 5 positioning error curves (solid lines in different colours) and their mean values (denoted by "o") along the east direction are shown in Fig. 6.11. We can see that the 5 error curves differ in both magnitude and sign due to the stochastic errors, but their mean values are smaller than the position errors of a single sensor. It is therefore reasonable to infer that the mean values become more reliable if we add more sensors.

In Fig. 6.11, we can also notice that the mean value of the 5 sensor errors is about -100 m after 1 minute, which is about 4 times smaller than the single error curve in Fig. 6.10. This is partly because the 5 error curves in this instance are fairly evenly distributed over the two signs, i.e. 3 curves grow in the negative direction and 2 curves in the positive direction. As a result, the mean values are relatively small in this curve. Clearly, if 4 of the 5 curves randomly grew in the negative direction, the mean value would be larger (with negative sign). However, in any case it can be inferred that the mean value of the sensor array is smaller than that of a single sensor in a stochastic sense.
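The "in a stochastic sense" claim can be illustrated with a small Monte Carlo sketch. The sensor error model below (random constant bias plus white noise, double-integrated to a position-like error) and all numeric levels are hypothetical illustration values, not the chip parameters of the tests; the point is only that the standard deviation of the 5-sensor mean shrinks by the expected factor 1/√5 relative to a single sensor:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_steps, dt, trials = 5, 600, 0.01, 1000
bias_sigma, noise_sigma = 0.02, 0.5      # hypothetical error levels

single_finals, mean_finals = [], []
for _ in range(trials):
    # Per-sensor error signal: random constant bias + white noise
    w = (bias_sigma * rng.standard_normal((n_sensors, 1))
         + noise_sigma * rng.standard_normal((n_sensors, n_steps)))
    # Double integration -> position-like error growth
    pos = np.cumsum(np.cumsum(w, axis=1) * dt, axis=1) * dt
    single_finals.append(pos[0, -1])       # one representative sensor
    mean_finals.append(pos[:, -1].mean())  # array mean

ratio = np.std(mean_finals) / np.std(single_finals)
print(f"std(mean) / std(single) = {ratio:.3f}  (theory: 1/sqrt(5) = {5**-0.5:.3f})")
```

Any single realisation (like Fig. 6.11) may deviate from this factor, but over many trials the spread of the array mean is close to 1/√5 of the single-sensor spread.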

Fig. 6.11 Five MEMS chips positioning errors and their mean values (denoted by “o”)


In the same way, the single MEMS chip orientation errors along the test trajectory are given in Fig. 6.12, and the 5 yaw angle error curves with their mean values (denoted by stars) are shown in Fig. 6.13 as a representative example. A similar conclusion about the relation between the single error curve and the mean values can be drawn from these two figures. A more detailed theoretical analysis with a numerical test of this conclusion is presented below.

Fig. 6.12 Single MEMS chip orientation errors (one instance)


Fig. 6.13 Orientation errors of 5 MEMS chips and their mean value (one instance)

It is well-known that the standard error of the mean value (σ_x̄) is given by

σ_x̄ = σ_{x_i} / √n   (6.24)

where σ_{x_i} is the standard error of a single observation x_i and n is the number of sensors.

Hence we can plot the standard error of the mean values versus the number of sensors, as shown in Fig. 6.14.

The figure shows that when the number of sensors increases from 1 to 5, the standard error of the mean quickly decreases to about 50%. From 5 to 10 sensors, it drops about another 20 percentage points. To remove another 10 percentage points (i.e. reach 20% of the single sensor standard error), we need another 15 sensors, and to decrease a further 10 percentage points (i.e. reach 10%), another 75 sensors are needed. In total, a sensor array of 100 sensors can decrease the standard error by 90%.
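The percentages quoted above follow directly from σ_x̄ = σ/√n and can be reproduced in a couple of lines:

```python
import math

# Standard error of the mean as a percentage of a single sensor's
# standard error, for the sensor counts discussed above.
for n in (1, 5, 10, 25, 100):
    print(f"n = {n:3d}: sigma_mean = {100.0 / math.sqrt(n):5.1f} % of sigma_single")
```

For n = 25 the standard error is exactly 20% of the single-sensor value, and for n = 100 exactly 10%, which is why the last 10 percentage points cost 75 additional sensors.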



Fig. 6.14 Standard error of the mean values

Hence we can conclude that when the number of sensors increases from 1 to 10, each additional sensor contributes significantly to the final result. From 10 to 25 sensors, each additional sensor contributes less, but still noticeably. Once the number of sensors has reached about 25, adding more sensors to the system is hardly worthwhile.

Another important advantage of using more sensors is that the system reliability is enhanced. With simultaneous observations from several sensors, gross errors in individual sensors can be detected, and a sensor whose error is much larger than that of the others can even be switched off or replaced.
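One simple way to exploit this redundancy for gross-error screening is to compare each sensor with the array median. The sketch below uses a hypothetical MAD-based threshold; the thesis does not prescribe a specific test:

```python
import numpy as np

def flag_gross_errors(readings, k=3.0):
    """Flag sensors whose reading deviates from the array median by more
    than k times the (Gaussian-scaled) median absolute deviation.
    Illustrative screening rule only."""
    r = np.asarray(readings, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    scale = 1.4826 * mad if mad > 0 else 1e-12  # MAD -> sigma under Gaussian noise
    return np.abs(r - med) > k * scale

# Five simultaneous yaw readings [deg]; the fourth sensor has a gross error
flags = flag_gross_errors([1.9, 2.1, 2.0, 7.5, 1.8])
print(flags)  # flags only the fourth sensor
```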

6.6 Summary

In this chapter, the following content and conclusions are included:

- The MEMS industry and market are introduced, from which we can directly see the great potential of MEMS sensors in civilian applications.



- A Kalman filter augmentation with coloured noise is derived, based on the equivalent coloured noise model in Chapter 2. The noise of each MEMS sensor is modelled as a sub-system with 4 states.

- An observability analysis of the integrated navigation system is carried out based on Chapter 3. The GPS/INS system in static or constant acceleration/rotation mode is not completely observable, but the observability of the integrated system can be increased by different manoeuvres.

- A performance evaluation of single and multiple MEMS chips is carried out. A single MEMS sensor has very low accuracy, but a sensor array helps to increase both accuracy and reliability. In general, a sensor array with 5-10 sensors is recommended.
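As an illustration of the first bullet, the sketch below appends a single first-order Gauss-Markov bias state to a small filter model. The thesis uses a richer 4-state noise sub-system per sensor, so this only shows the mechanics of the augmentation; all numeric values are assumed:

```python
import numpy as np

# Assumed values: time step, bias correlation time, driving-noise strength
dt, tau, q = 0.01, 100.0, 1e-4

# Two-state (position, velocity) model augmented with a first-order
# Gauss-Markov accelerometer bias b:  b_dot = -b/tau + w
F = np.array([[1.0, dt,  0.0],
              [0.0, 1.0, dt],                  # bias feeds velocity like an acceleration
              [0.0, 0.0, np.exp(-dt / tau)]])  # discrete Gauss-Markov transition

Q = np.diag([0.0, 0.0, q * dt])  # process noise enters via the bias state only

# One covariance propagation step of the augmented filter
P = np.eye(3)
P = F @ P @ F.T + Q
print(P.shape)  # (3, 3)
```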


7. Conclusions and proposals on further research

7.1 Summary and conclusions

The GPS/INS integration is the core of our navigation system, and its navigation performance is mainly determined by the bridging ability of the stand-alone IMU during GPS signal outages. With better knowledge of the sensor stochastic errors, better navigation accuracy can be reached and longer bridging times can be allowed. To analyse the different types of stochastic errors, we built up different stochastic models of the IMU sensors, and practical tests on a MEMS based IMU were carried out with these methods, including AR modelling, the GM process, and PSD and AV analyses.
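As a reference for the AV analysis mentioned above, the non-overlapping Allan variance can be computed from cluster averages in a few lines. This is a minimal sketch; a full analysis would sweep many cluster times and typically use overlapping clusters:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance for cluster size m:
    sigma^2 = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ),
    where ybar_k are averages over consecutive clusters of m samples."""
    y = np.asarray(y, dtype=float)
    n_clusters = len(y) // m
    ybar = y[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

# For white noise the Allan variance falls as 1/m
# (the -1/2 slope segment of a log-log Allan deviation plot)
rng = np.random.default_rng(0)
white = rng.standard_normal(100_000)
print(allan_variance(white, 10) / allan_variance(white, 100))  # roughly 10
```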

Although different methods of estimating the stochastic errors lead to different error models with different coefficients, some stochastic errors of a specific sensor can be verified by comparing and analysing the methods and their results. Equivalent coloured noise models for three coloured noise components (random walk, bias instability and ramp noise) were derived; these form the foundation for the integrated system augmentation and are helpful for bounding the error drift during GPS outages and allowing faster GPS signal reacquisition.

Different levels of GPS/IMU integration were analysed and compared in Chapter 3, i.e. loose, tight and ultra-tight integration. The simulation tests were carried out with tight integration and the practical tests with loose integration. The EKF was briefly introduced for state estimation in the presence of the nonlinearities of practical navigation systems, and a detailed implementation of the loosely integrated EKF GPS/IMU system was proposed. With the GPS receiver and IMU mounted on top of a vehicle, we processed real-time data collected in an urban area, and the EKF navigation performance and a detailed analysis of the test results were given. From the tight integration simulations we could see that even 2 satellites can provide enough information to keep the misalignment errors at a reasonable level during a partial GPS signal outage. We have also shown that when there is no GPS signal at all, the navigation solution of a MEMS IMU can be trusted only for some tens of seconds due to its large error drift. An important point to note is that the GPS/IMU integrated navigation system depends strongly on the accuracy of GPS, since GPS is needed to correct the IMU sensor errors all the time. However, no GPS evaluation procedure is carried out in this thesis. As is well known, the GPS positioning accuracy is not high at the first epochs but improves during the initial stage, and is thereafter kept within a bounded level as long as the satellites remain tracked. The accuracy of GPS can be affected by many factors in real-time situations,
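The loose-integration correction step described above amounts to a standard Kalman measurement update in which a GPS position fix corrects the INS-predicted state. A deliberately simplified 1-D illustration with assumed numbers, not the thesis's full error-state implementation:

```python
import numpy as np

# INS-predicted state [position, velocity] and covariance (assumed numbers)
x = np.array([100.0, 5.0])
P = np.diag([25.0, 1.0])

# GPS measures position only
H = np.array([[1.0, 0.0]])
R = np.array([[4.0]])   # GPS position variance [m^2]
z = np.array([103.0])   # GPS position fix [m]

# Standard Kalman measurement update: the GPS fix corrects the INS prediction
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P
print(x)  # position pulled from 100.0 toward the 103.0 GPS fix
```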


including the number of satellites, PDOP, multipath, etc. In this thesis, we did not present a detailed discussion of these factors. Since a digital map is usually available with the GPS receiver, the map matching algorithm and its link-choosing problem were introduced, and we proposed to solve the latter with lane tracking from camera images. Based on the principles of on-line image processing, we presented a lane detection procedure with practical considerations. Tests on highways, city streets and pathways under different scenes showed that lanes or road edges are successfully detected in most cases. The situations in which lane detection fails were analysed in detail and possible solutions were proposed accordingly. To counteract the IMU error drift, we proposed a method to support the IMU orientation with camera rotation estimation based on the principles of computer vision. Both point-match and line-match methods were introduced. For possible outliers in the data set, optimal L1- and L2-norm estimation and robust estimation with the BLAVE algorithm were introduced. The test results showed that line matches are strong constraints on motion estimation, especially on the rotation angles, and that the BLAVE method can successfully remove possible outliers (less than 50%) from the data set; moreover, its computation time is not influenced by the number of outliers, which is a great advantage compared with the RANSAC algorithm.
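For comparison with the BLAVE results above, RANSAC's behaviour under outliers is easy to reproduce with a minimal 2-D line-fitting sketch (illustrative only, not the thesis's implementation; note that the number of iterations required grows with the assumed outlier ratio, the dependence BLAVE avoids):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly fit a line through two random
    points and keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_count, best_model = 0, (0.0, 0.0)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # vertical sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        count = np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol)
        if count > best_count:
            best_count, best_model = count, (a, b)
    return best_model

# 20 points exactly on y = 2x + 1, plus 8 gross outliers
xs = np.linspace(0.0, 10.0, 20)
inliers = np.column_stack([xs, 2.0 * xs + 1.0])
outliers = np.array([[1, 30], [2, -10], [3, 25], [4, 40],
                     [5, -20], [6, 33], [7, -5], [8, 50]], dtype=float)
a, b = ransac_line(np.vstack([inliers, outliers]))
print(round(a, 3), round(b, 3))  # recovers a = 2.0, b = 1.0
```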

Regarding the new trend of MEMS inertial sensors, the MEMS industry and market were introduced, from which we could directly see the great potential of MEMS sensors in civilian applications. To evaluate the MEMS navigation performance, a Kalman filter was augmented with the equivalent coloured noise model derived in Chapter 2, and an observability analysis of the integrated navigation system was briefly given. The performance evaluation of single and multiple MEMS chips shows that a single MEMS sensor has very low accuracy, but a sensor array helps to increase both accuracy and reliability. The theoretical analysis and test results show that when the number of sensors increases from 1 to 5, the standard error of the mean decreases rapidly to about 50%; from 5 to 10 sensors, it drops by about another 20 percentage points, and a further 10 percentage point reduction requires another 15 sensors. In conclusion, we estimate that a sensor array with 5-10 sensors can significantly improve the navigation accuracy. It should be mentioned that there is an overwhelming number of papers in the field of integrated navigation, since it combines many disciplines. A literature review of each topic is presented in the corresponding section of this thesis; here some important works are summarized and compared with this thesis. Hou (2004) discussed the stochastic error components of inertial sensors and carried out tests with adaptive Kalman filters, signal processing approaches and stochastic modelling. We follow his conclusions and mainly focus on AV analysis in our IMU error modelling. Han (2010) proposed an equivalent coloured noise model and augmented the Kalman filter in the n-frame. Similar procedures are carried out in this thesis with MEMS sensors, and the integrated system is then augmented in


the e-frame. Horemuz (2008) presented a lane departure warning system based on the RTK method; the lane detection implementation in Chapter 4 was carried out on the basis of this previous project. Weng et al. (1992) and Zhang (1995) originally proposed motion estimation based on line correspondences and line segment correspondences, respectively. We adopted several of their algorithms and applied them to camera motion estimation for integrated navigation purposes. Finally, the originality of the camera as a sensor in navigation applications should be mentioned. To our knowledge, this is the first time that a camera sensor is proposed to solve the link-choosing problem in MM and to support the IMU rotation with camera motion estimation. Furthermore, the application of the BLAVE robust estimator is also original in the navigation field. Last but not least, the numerical performance evaluation of the latest MEMS chip sensors is here carried out for the first time with realistic simulations.

7.2 Proposals on further research

This thesis introduces GPS/IMU integrated navigation structures, models some stochastic errors of a MEMS based IMU with performance evaluation, estimates the camera motion from images, and presents simulations of tight integration as well as a practical implementation of loose integration. However, it must be admitted that integrated navigation is a huge research topic spanning many different disciplines, and both the theoretical and the practical sides of the topic can be deepened in many respects. Here we briefly outline some of them:

1. Tight and ultra-tight architecture issues. The tight integration structure was built in simulation only; due to laboratory limitations, we have not implemented the whole tight integration system. Some key issues of tight and ultra-tight integration systems, such as IMU-aided GPS signal reacquisition, can also be further theoretical research topics. In addition, the software GPS receiver for ultra-tight integration is a promising research direction.

2. MEMS sensor applications, especially chip sensors. In this thesis we modelled the MEMS sensor errors and evaluated the performance based on simulations, but a real implementation in a navigation system was not carried out due to limited time. It would be a good topic to set up a real platform and evaluate its performance with both single and multiple sensors.

3. System integration with more sensors. Other positioning aiding sensors are available on the vehicle and could be integrated into the navigation system, such as the speedometer and the odometer.

4. The data fusion algorithm. In this thesis we used the EKF as the data fusion algorithm, which is a first-order nonlinear approximation. Improving the EKF's robustness against an inaccurate system model and noise parameters can be studied further as a research topic. There are also other filters, such as the


unscented Kalman filter, which gives approximate accuracy to the second order, and the particle filter, which can in theory reach arbitrarily high approximation accuracy.

5. Map matching. We proposed a method to solve the link-choosing problem in MM, but it has not been implemented. Hence the MM implementation with image information can be a further research topic.

6. The application of LVN physical constraints. There are several physical constraints for LVN, such as the lateral velocity constraint and the height constraint (Zhao, 2011, Sect. 4.2), which can be combined with the map information in future research.

7. Observability and performance analysis of the integrated navigation system. In this thesis we give only a brief analysis of the system observability and performance. Further research on this topic is suggested, especially when more sensors are integrated into the system.

8. System integrity: gross error and slowly growing error detection. It would be better if we could detect these errors in the sensor outputs and remove them before they enter the system.
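For item 7, the observability check for a linear(ised) time-invariant model reduces to a rank test on the observability matrix [H; HF; ...; HF^(n-1)]. A toy sketch with an assumed 3-state model, not the full GPS/INS error model:

```python
import numpy as np

def observability_matrix(F, H):
    """Stack H, HF, ..., HF^(n-1); the model is observable iff rank = n."""
    n = F.shape[0]
    return np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])

dt = 0.1
# Toy 3-state model [position, velocity, bias] observed through position only
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, dt],   # bias feeds the velocity
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])
print(np.linalg.matrix_rank(observability_matrix(F, H)))  # 3: fully observable

# Decouple the bias from the dynamics and it becomes unobservable
F2 = np.array([[1.0, dt, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(observability_matrix(F2, H)))  # 2
```

This is the same mechanism by which manoeuvres restore observability: they change F over time and add independent rows to the stacked matrix.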


References

Alban S., Akos D. M., Rock S. M., Gebre-Egziabher D. (2003). Performance Analysis and Architectures for INS-Aided GPS Tracking Loops. The Institute of Navigation's National Technical Meeting, 11-24 Jan., Anaheim, California, USA, pp. 611-622.
Allan D. W. (1966). Statistics of atomic frequency standards. Proceedings of the IEEE, Vol. 54, No. 2, pp. 221-230.
Aly M. (2008). Real time Detection of Lane Markers in Urban Streets. IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, June 2008, pp. 7-12.
Baarda W. (1967). Statistical Concepts in Geodesy. Neth. Geod. Comm. Publ. on Geodesy, Vol. 2, No. 4.
Baarda W. (1968). A Testing Procedure for Use in Geodetic Networks. Neth. Geod. Comm. Publ. on Geodesy, Vol. 2, No. 5.
Babu R., Wang J. (2004). Improving the quality of IMU-derived Doppler estimates for ultra-tight GPS/INS integration. Proceedings of the European Navigation Conference, Rotterdam, pp. 1-8.
Barbour N. M. (2010). Inertial Navigation Sensors. NATO RTO Lecture Series, RTO-EN-SET-116, Low-Cost Navigation Sensors and Integration Technology.
Bar-Itzhack I. Y., Berman N. (1988). Control Theoretic Approach to Inertial Navigation Systems. Journal of Guidance, Vol. 11, No. 3, pp. 237-245.
Beeby S., Ensell G., Kraft M., White N. (2004). MEMS Mechanical Sensors. Artech House, Inc. ISBN: 1-58053-536-4.
Bendat J. S., Piersol A. G. (1966). Measurement and Analysis of Random Data. New York: John Wiley and Sons.
Bhatti U., Ochieng W., Feng S. (2007a). Integrity of an integrated GPS/INS system in the presence of slowly growing errors. Part I: A critical review. GPS Solutions, Vol. 11, No. 3, pp. 173-181.
Bhatti U., Ochieng W., Feng S. (2007b). Integrity of an integrated GPS/INS system in the presence of slowly growing errors. Part II: Analysis. GPS Solutions, Vol. 11, No. 3, pp. 183-192.
Bos R., Waele S. D., Broersen P. M. T. (2002). Autoregressive spectral estimation by application of the Burg algorithm to irregularly sampled data. IEEE Trans. on Instrumentation and Measurement, Vol. 51, No. 6, pp. 1289-1294.
Brown A., Lu Y. (2004). Performance Test Results of an Integrated GPS/MEMS Inertial Navigation Package. Proceedings of ION GNSS 2004, Long Beach, California, pp. 825-832.


Brown R. G., Hwang P. Y. C. (1997). Introduction to Random Signals and Applied Kalman Filtering, 3rd ed. New York: Wiley. ISBN: 0-471-12839-2.
Canny J. F. (1986). A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, pp. 679-698.

Chakraborty B., Chaudhuri P., Oja H. (1998). Operating transformation retransformation on spatial median and angle test. Statistica Sinica, Vol. 8, pp. 767-784.

Chen C. T. (1999). Linear System Theory and Design. 3rd ed. New York: Oxford University Press. ISBN: 0-19-511777-8.
Chen W., Chen Y. Q., Ding X. L. (2006). GNSS Applications in Deformation Monitoring, Intelligent Transport Systems, Precise Point Positioning and Atmosphere Study. GIS Development Asia Pacific, Vol. 10, Issue 6, pp. 38-43.

Choi S., Kim T., Yu W. (2009). Performance evaluation of RANSAC family. Proceedings of 2009 British Machine Vision Conference, pp. 1-12.

Cornuejols G. (2007). Revival of the Gomory Cuts in the 1990s. Annals of Operations Research, Vol. 149 (2007), pp. 63–66.

Dantzig G.B (1947). Maximization of a linear function of variables subject to linear inequalities, 1947. Published in T.C. Koopmans (ed.): Activity Analysis of Production and Allocation, New York-London 1951 (Wiley & Chapman-Hall), pp. 339–347.

Department of Defense, Department of Transportation (2002). 2001 U.S. Federal Radio Navigation Plan. U.S. National Information Service, Springfield, Virginia, DOT-VNTSC-RSPA-01-3.1/DoD-4650.5.

Dixon-Warren St.J. (2010). Motion sensing in the iPhone 4: MEMS accelerometer. MEMS Investor Journal. http://www.memsjournal.com/2010/12/motion-sensing-in-the-iphone-4-mems-accelerometer.html.

Dixon-Warren St.J. (2011). Motion sensing in the iPhone 4: MEMS gyroscope. MEMS Investor Journal. http://www.memsjournal.com/2011/01/motion-sensing-in-the-iphone-4-mems-gyroscope.html

El-Diasty M., Pagiatakis S. (2008). Calibration and stochastic modeling of inertial navigation sensor errors. Journal of Global Positioning Systems, Vol. 7, No. 2, pp. 170-182.
El-Diasty M., Pagiatakis S. (2009). A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors. Sensors, Vol. 9, No. 11, pp. 8473-8489.
El-Sheimy N., Niu X. (2007). The Promise of MEMS to the Navigation Community. InsideGNSS, March/April 2007, pp. 46-56.
El-Sheimy N. (2008). The potential of partial IMUs for land vehicle navigation. InsideGNSS, Spring 2008, pp. 16-25.


El-Sheimy N., Hou H., Niu X. (2008). Analysis and Modeling of Inertial Sensors Using AV. IEEE Transactions on Instrumentation and Measurement, Vol. 57, No. 1, pp. 140-149.
Elwenspoek M., Wiegerink R. J. (2001). Mechanical Microsensors. Springer-Verlag, Berlin Heidelberg. ISBN: 3-540-67582-5.
Eshel G. (2010). The Yule Walker Equations for the AR Coefficients. http://www.stat.sc.edu/~vesselin/STAT520_YW.pdf. University of South Carolina.
Farrell J. A., Barth M. (1999). The Global Positioning System & Inertial Navigation. McGraw-Hill, New York. ISBN: 0-07-022045-X.
Faugeras O., Maybank S. (1990). Motion from Point Matches: Multiplicity of Solutions. Intl. Journal of Computer Vision, Vol. 4, pp. 225-246.
Faugeras O. (1993). Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, MA. ISBN: 0-262-06158-9.
Fischler M. A., Bolles R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, Vol. 24, No. 6, pp. 381-395.

Fisher R. B. (2002). The RANSAC (Random Sample Consensus) Algorithm. Informatics homepage: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/FISHER/RANSAC/. May 6, 2002.

Gao G., Lachapelle G. (2008). A Novel Architecture for Ultra-Tight HSGPS-INS Integration. Journal of Global Positioning Systems, Vol. 7, No. 1, pp. 46-61.
Gautier J. (2003). GPS/INS generalized evaluation tool for the design and testing of integrated navigation systems. Ph.D. thesis, Stanford University, USA.
Gebre-Egziabher (2007). What is the difference between 'loose', 'tight', 'ultra-tight' and 'deep' integration strategies for INS and GNSS. InsideGNSS, Jan-Feb 2007, pp. 28-33.
Godha S., Cannon M. E. (2005). Integration of DGPS with a MEMS-based inertial measurement unit (IMU) for land vehicle navigation application. Proceedings of ION GPS-05, Institute of Navigation, Long Beach, pp. 333-345.
Godha S., Cannon M. E. (2007). GPS/MEMS INS integrated system for navigation in urban areas. GPS Solutions, Vol. 11, No. 3, pp. 193-203.
Gold K. L., Brown A. K. (2004). A hybrid integrity solution for precision landing and guidance. Proceedings of the IEEE Position, Location and Navigation Symposium, IEEE, California, pp. 165-174.
Gonzalez R. C., Woods R. E. (2002). Digital Image Processing (2nd edition). Prentice Hall. ISBN: 978-0201180756.


Gonzalez R. C., Woods R. E., Eddins S. L. (2003). Digital Image Processing Using MATLAB. Prentice Hall. ISBN: 81-7758-898-2.
Goshen-Meskin D., Bar-Itzhack I. Y. (1992a). Observability analysis of piece-wise constant systems - Part I: Theory. IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, No. 4, pp. 1056-1067.
Goshen-Meskin D., Bar-Itzhack I. Y. (1992b). Observability analysis of piece-wise constant systems - Part II: Application to inertial navigation in-flight alignment. IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, No. 4, pp. 1068-1075.
GPSoft (1998). Inertial Navigation System Toolbox User's Guide. GPSoft LLC, Athens, Ohio, USA.

Grafarend E. W., Awange J. L. (2012). Applications of Linear and Nonlinear Models: Fixed Effects, Random Effects, and Total Least Squares. Berlin: Springer. ISBN: 978-3-642-22240-5.

Grewal M. S., Andrews A. P. (2008). Kalman Filtering: Theory and Practice Using MATLAB. 3rd edition, John Wiley & Sons, Inc., New York. ISBN: 978-0470173664.
Grewal M. S., Weill L. R., Andrews A. P. (2007). Global Positioning Systems, Inertial Navigation, and Integration. 2nd edition. Wiley-Interscience. ISBN: 0-470-04190-0.
Grubbs F. E. (1969). Procedures for detecting outlying observations in samples. Technometrics, Vol. 11, No. 1, pp. 1-21.
Guerrier S. (2008). Integration of Skew-Redundant MEMS-IMU with GPS for Improved Navigation Performance. M.S. thesis, Dept. of Geomatics Engineering, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland.
Ham F. M., Brown R. G. (1983). Observability, eigenvalues, and Kalman filtering. IEEE Trans. Aerosp. Electron. Syst., Vol. AES-19, No. 2, pp. 269-273.
Han S. (2010). Novel GPS/INS Integration Architecture and Systematic Error Compensation Methods. Ph.D. thesis, Dept. of Optical Engineering, National University of Defense Technology, China.
Han S., Wang J. (2011). Quantization and Coloured Noises Error Modelling for Inertial Sensors for GPS/INS Integration. IEEE Sensors Journal, Vol. 11, No. 6, pp. 1493-1503.
Hartley R., Zisserman A. (2004). Multiple View Geometry in Computer Vision. 2nd edition. Cambridge University Press. ISBN: 978-0521540513.

Hettmansperger T. P., McKean J. W. (1998). Robust Nonparametric Statistical Methods (Kendall's Library of Statistics). First ed. New York: Edward Arnold. pp. xiv+467. ISBN: 0340549378.

Hoaglin D. C., Welsch R. E. (1978). The Hat Matrix in Regression and ANOVA. The American Statistician. February 1978. Vol. 32, No. 1, pp. 17–22.


Hofmann-Wellenhof B., Lichtenegger H., Collins J. (2001). GPS, Theory and Practice (Fifth revised edition). Springer Verlag. ISBN: 3211835342.
Hofmann-Wellenhof B., Legat K., Wieser M. (2003). Navigation: Principles of Positioning and Guidance. Springer-Verlag, Wien, New York. ISBN: 978-3-211-00828-7.

Hong S., Lee M. H., Chun H., Kwon S., Speyer J.L. (2005). Observability of Error States in GPS/INS Integration. IEEE Transactions on Vehicular Technology. Vol. 54, No. 2, pp. 731-743.

Horemuz M. (2006). Integrated navigation. Lecture notes, Royal Institute of Technology (KTH), Division of Geodesy, Stockholm, Sweden.
Horemuz M. (2007). Algorithm for processing of IMU and camera measurements. Report for the project "Dynamic positioning by Integration of Sensor Data", KTH, 2007.
Horemuz M. (2008). Car collision warning system based on RTK GPS. FIG Working Week 2008, Stockholm, Sweden, 14-19 June 2008.
Horemuz M., Sjöberg L. E. (2001). Integration of GPS and INS at raw observable level. 3rd Int. Symp. on Mobile Mapping Technology, Cairo, Egypt.
Horn R. A., Johnson C. R. (1990). Norms for Vectors and Matrices. Ch. 5 in Matrix Analysis. Cambridge, England: Cambridge University Press.
Hou H. (2004). Modeling inertial sensors errors using AV. M.S. thesis, MMSS Res. Group, Department of Geomatics Engineering, The University of Calgary, Canada.
Hough P. V. C. (1962). Method and Means for Recognizing Complex Patterns. U.S. Patent 3,069,654.
Huber P. (1981). Robust Statistics. New York: John Wiley & Sons. ISBN: 0471418056.
IEEE Std 1293-1998. IEEE Standard Specification Format Guide and Test Procedure for Linear, Single-Axis, Non-gyroscopic Accelerometers.
IEEE Std 647-2006. IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Laser Gyros.
IEEE Std 952-1997. IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Interferometric Fiber Optic Gyros.
Inertial Science (1999). ISIS-IMU manual. http://www.inertialscience.com/manuals/ISIS_broc.PDF
Jekeli C. (2001). Inertial Navigation Systems with Geodetic Applications. Walter de Gruyter GmbH. ISBN: 978-3110159035.
Jiang Y., Lin Y. (1992). Error estimation of INS ground alignment through observability analysis. IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, No. 1, pp. 92-96.


Judy J. W. (2001). Microelectromechanical systems (MEMS): fabrication, design and applications. Smart Mater. Struct., Vol. 10, pp. 1115-1134.
Jurisch R., Kampmann G. (1998). Vermittelnde Ausgleichungsrechnung mit balancierten Beobachtungen - erste Schritte zu einem neuen Ansatz. Zeitschrift für Vermessungswesen, 123, pp. 87-92.
Jurisch R., Kampmann G. (2001). Pluecker Coordinates: A new tool for geometrical analysis and outlier detection. First International Symposium on Robust Statistics and Fuzzy Techniques in Geodesy and GIS, Editor A. Carosio, March 2001, pp. 95-109.
Jurisch R., Kampmann G. (2002). Introducing the natural generalization of partial redundancies within linear Gauss-Markov parameter estimation. In: Beiträge zur Wissenschaft, Technologie und Gestaltung, Hochschule Anhalt, Nr. 65, 2002.
Jurisch R., Kampmann G., Linke J. (2002). Introducing the determination of hidden (latent) restrictions within linear regression analysis. In: Grafarend E., Krumm F. W., Schwarze V. S. (eds.): Geodesy - the Challenge of the 3rd Millennium. Springer, Berlin, Germany, ISBN: 3-540-43160-8, pp. 333-348.
Kalman R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering, Vol. 82, No. 1, pp. 35-45.
Kampmann G., Renner B. (2000). Numerische Beispiele zur Bearbeitung latenter Bedingungen und zur Interpretation von Mehrfachbeobachtungen in der Ausgleichungsrechnung. Zeitschrift für Vermessungswesen, 125, pp. 190-197.
Kampmann G. (2009). Understanding Basic Principles of Optimization, Regulation and Adjustment. Royal Institute of Technology (KTH), Stockholm, Sweden. http://www.infra.kth.se/geo/software/mec.html
Kampmann G. (2010). Introduction to Inner Reference. Royal Institute of Technology (KTH), Stockholm, Sweden. http://www.infra.kth.se/geo/services/ir_apps.html

Karmarkar N. (1984). A new polynomial-time algorithm for linear programming. Combinatorica 4 (4), pp. 373-395.

Kemperman J. H. B. (1987). The median of a finite measure on a Banach space. Statistical data analysis based on the L1-norm and related methods: Papers from the First International Conference held at Neuchâtel, August 31–September 4, 1987. pp. 217–230.

Kim H., Lee J. G., Park C. G. (2004). Performance Improvement of GPS/INS Integrated System Using Allan Variance Analysis. The 2004 International Symposium on GNSS/GPS, Sydney, Australia, 6-8 December 2004.
Kluge K., Lakshmanan S. (1995). A Deformable-Template Approach to Lane Detection. Proceedings of the Intelligent Vehicles Symposium '95, pp. 54-59.


Ko S., Bitmead R. (2007). State estimation for linear systems with state equality constraints. Automatica, Vol. 43, No. 8, pp. 1363-1368.
Kubena R., Vickers-Kirby D., Joyce R., Stratton F. (1999). A new tunneling-based sensor for inertial rotation rate measurements. J. Microelectromech. Syst., Vol. 8, pp. 439-447.
Lee Y. C., O'Laughlin D. G. (1999). A performance analysis of a tightly coupled GPS/inertial system for two integrity monitoring methods. Proceedings of the 12th International Technical Meeting of the Satellite Division of the Institute of Navigation, Nashville, pp. 1187-1200.
Lee Y. C., O'Laughlin D. G. (2000). A further analysis of integrity methods for tightly coupled GPS/IRS systems. National Technical Meeting Proceedings "Navigating into the New Millennium", Institute of Navigation, Anaheim, pp. 166-174.

Leick A. (1995). GPS satellite surveying. Wiley & Sons, New York, Chichester, Toronto. ISBN: 0-4713062-6-6.

Li D., Landry R. J., Lavoie O. (2008). Low-cost MEMS sensor-based attitude determination system by integration of magnetometers and GPS: A real-data test and performance evaluation. Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5-8 May 2008, pp. 342-350.
Li T. et al. (2010). Ultra-tight Coupled GPS/Vehicle Sensor Integration for Land Vehicle Navigation. NAVIGATION, Vol. 57, No. 4, pp. 248-259.

Majumder A. (2012). Camera Calibration. Partial lecture notes for CS 211A: Visual Computing. Department of Computer Science, University of California, Irvine. http://www.ics.uci.edu/~majumder/vispercep/cameracalib.pdf.

Misra P., Enge P. (2006). Global Positioning System: Signals, Measurements, and Performance. 2nd edition. Ganga-Jamuna Press, Lincoln, Massachusetts, USA. ISBN: 0-9709544-1-7.
Nasiri S. (2005). A Critical Review of MEMS Gyroscopes Technology and Commercialization Status. InvenSense, CA, 2005. http://www.invensense.com/mems/gyro/documents/whitepapers/MEMSGyroComp.pdf
Nassar S. (2005). Accurate INS/DGPS positioning using INS data de-noising and autoregressive (AR) modeling of inertial sensor errors. Geomatica, Vol. 59, No. 3, pp. 283-294.
Neumaier A., Schneider T. (2001). Estimation of Parameters and Eigenmodes of Multivariate Autoregressive Models. ACM Transactions on Mathematical Software, Vol. 27, No. 1, pp. 27-57.
Niu X., Goodall C., Nassar S., El-Sheimy N. (2006). An Efficient Method for Evaluating the Performance of MEMS IMUs. IEEE/ION Position Location and Navigation Symposium, PLANS 2006, San Diego, USA, April 25-27, 2006, pp. 766-771.


Ogata K. (2002). Modern Control Engineering, 4th ed. Englewood Cliffs, NJ: Prentice-Hall. ISBN: 0-1361567-3-8.
Ouladsine M., Shraim H., Fridman L., et al. (2008). Vehicle Parameter Estimation and Stability Enhancement using the Principles of Sliding Mode. Proceedings of the 2007 American Control Conference, New York, USA, Vol. 48, pp. 230-254.
Papoulis A. (1991). Probability, Random Variables, and Stochastic Processes. Third Edition, McGraw-Hill, Inc. ISBN: 0-07-048477-5.
Patrich Y. H., Brown R. G. (2008). A new integrity approach to integrated multi-GNSS systems. InsideGNSS, Spring 2008, pp. 24-33.
Putty M. W., Najafi K. (1994). A micromachined vibrating ring gyroscope. Tech. Dig. Solid-State Sensor and Actuator Workshop, Hilton Head Island, SC, pp. 213-220.
Quddus M. A., Ochieng W. Y., Lin Z., Noland R. B. (2003). A general map matching algorithm for transport telematics applications. GPS Solutions, Vol. 7, No. 3, pp. 157-167.
Quddus M. A., Ochieng W. Y., Noland R. B. (2007). Current map-matching algorithms for transport applications: State-of-the-art and future research directions. Transportation Research C: Emerging Technologies, Vol. 15, No. 5, pp. 312-328.
Rhee I., Abdel-Hafez M. F., Speyer J. L. (2004). Observability of an integrated GPS/INS during maneuvers. IEEE Transactions on Aerospace and Electronic Systems, Vol. 40, No. 2, pp. 526-535.
Schmidt G. (2010). INS/GPS Technology Trends. NATO RTO Lecture Series, RTO-EN-SET-116, Low-Cost Navigation Sensors and Integration Technology. MIT.
Schneider T., Neumaier A. (2001). Algorithm 808: ARfit - A Matlab package for the estimation of parameters and eigenmodes of multivariate autoregressive models. ACM Trans. Math. Softw., Vol. 27, No. 1, pp. 58-65.
Schwarz G. (1978). Estimating the dimension of a model. Ann. Statist., Vol. 6, No. 2, pp. 461-464.
Shin E. (2001). Accuracy Improvement of Low Cost INS/GPS for Land Application. M.Sc. thesis, Department of Geomatics Engineering, University of Calgary, Canada, UCGE Report No. 20156.
Simon D., Simon D. L. (2010). Constrained Kalman filtering via density function truncation for turbofan engine health estimation. International Journal of Systems Science, Vol. 41, No. 2, pp. 159-171.
Simon D., Chia T. L. (2002). Kalman Filtering with State Equality Constraints. IEEE Trans. on Aerospace and Electronic Systems, Vol. 38, No. 1, pp. 128-136.
Sinha N. K., Kuszta B. (1983). Modeling and Identification of Dynamic Systems. New York: Van Nostrand Reinhold. ISBN: 0-442-28162-5.


Sircoulomb V., Israel J., Hoblos G., et al. (2008). State estimation under nonlinear state inequality constraints: A tracking application. 16th Mediterranean Conference on Control and Automation, Ajaccio, France, pp. 1669-1674.

Sjöberg L. E. (2007). Theory of satellite geodesy. Lecture notes, Royal Institute of Technology (KTH), Division of Geodesy, Stockholm, Sweden.

Skaloud J. (1999). Optimizing Georeferencing of Airborne Survey Systems by INS/DGPS. PhD Thesis, The University of Calgary, Dept. of Geomatics Engineering, Calgary, Alberta.

Song H. A. (2005). Personal website: Digital Signal Processing. http://www.songho.ca/dsp/index.html

STMicroelectronics (2009). LIS331DLH: MEMS digital output motion sensor, ultra low-power high performance 3-axes “nano” accelerometer. July 2009.

STMicroelectronics (2010). L3G4200D MEMS motion sensor: ultra-stable three-axis digital output gyroscope. December 2010.

STMicroelectronics (2011). TA0343 Technical article: Everything about STMicroelectronics’ 3-axis digital MEMS gyroscopes. July 2011. http://www.adafruit.com/datasheets/STMEMS.pdf

Sun D. (2010). Ultra-Tight GPS/Reduced IMU for Land Vehicle Navigation. PhD Thesis, published as Report No. 20305, Department of Geomatics Engineering, The University of Calgary, Canada.

Savage P. G. (2000). Strapdown Analytics, Part 1 & 2. Strapdown Associates, Inc., Maple Plain, Minnesota. ISBN: 0-12-787767-3.

Savage P. G. (2002). Analytical modelling of sensor quantization in strapdown inertial navigation error equations. J. Guidance, Control and Dynamics, Vol. 25, No. 5, pp. 833-842.

Tehrani M. M. (1983). Ring laser gyro data analysis with cluster sampling technique. Proceedings of SPIE, Vol. 412, pp. 207-220.

Tiberius C. C. J. M. (1998). Recursive data processing for kinematic GPS surveying. Publications on Geodesy, No. 45, Netherlands Geodetic Commission, Delft. New series: ISSN 0165-1706.

Tsui J. B. (2004). Fundamentals of Global Positioning System Receivers: A Software Approach, 2nd Edition. Wiley-Interscience. ISBN: 978-0-471-70647-2.

Torr P.H.S., Murray D.W. (1993). Outlier detection and motion segmentation. Sensor Fusion VI, SPIE Vol. 2059, pp. 432-443.

Wang L., Chiang Y., Chang F. (2002). Filtering method for nonlinear systems with constraints. IEE Proceedings - Control Theory and Applications, Vol. 149, No. 6, pp. 525-531.


Wang Y., Teoh E.K., Shen D. (2004). Lane detection and tracking using B-Snake. Image and Vision Computing, Vol. 22, pp. 269-280.

Welch G., Bishop G. (2001). An Introduction to the Kalman Filter. SIGGRAPH 2001, Course 8, August 12-17.

Wen W., Durrant-Whyte H. (1992). Model-based multi-sensor data fusion. IEEE International Conference on Robotics and Automation, Nice, France, pp. 1720-1726.

Weng J., Huang T.S., Ahuja N. (1992). Motion and Structure from Line Correspondences: Closed-Form Solution, Uniqueness and Optimization. IEEE Trans. on Pattern Analysis and Machine Intelligence, March 1992, Vol. 14, No. 3, pp. 318-336.

Woodman O. (2010). Pedestrian localisation for indoor environments. PhD Thesis, University of Cambridge, Computer Laboratory, UK.

Xu P. (1993). Consequences of constant parameters and confidence intervals of robust estimation. Boll Geod Sci Affini, Vol. 52, pp. 231-249.

Yang Y., Song L., Xu T. (2002). Robust estimator for correlated observations based on bifactor equivalent weights. Journal of Geodesy, Vol. 76, pp. 353-358.

Yole Developpement (2011). Status of the MEMS Industry, 2011 edition. March 2011.

Yole Developpement (2012a). Status of the MEMS Industry, 2012 edition. July 2012.

Yole Developpement (2012b). MEMS for Cell Phones & Tablets. May 2012.

Zhang N. (2011). Plane Fitting on Airborne Laser Scanning Data Using RANSAC. Project report, Mathematics LTH. http://www.maths.lth.se/matematiklth/personal/petter/rapporter/plane%20fitting.pdf

Zhang Z. (1995). Estimating Motion and Structure from Correspondences of Line Segments between Two Perspective Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, December 1995, Vol. 17, No. 12, pp. 1129-1139.

Zhao Y. (2011). GPS/IMU integrated system for land vehicle navigation. Licentiate Thesis, Royal Institute of Technology (KTH). http://kth.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:446078

Zhao Y., Horemuz M., Sjöberg L.E. (2011). Stochastic modeling and analysis of IMU sensor errors. Archives of Photogrammetry, Cartography and Remote Sensing, Vol. 22, pp. 437-449.

Zuliani M. (2012). RANSAC for Dummies. Aug. 13, 2012. http://vision.ece.ucsb.edu/~zuliani/Research/RANSAC/docs/RANSAC4Dummies.pdf


Appendix A

Coordinate frames

The operational inertial frame (i-frame) is defined as:
Origin - at the center of mass of the Earth
z-axis - parallel to the mean spin axis of the Earth
x-axis - pointing towards the mean vernal equinox
y-axis - completing a right-handed orthogonal frame

The e-frame is an Earth-Centered-Earth-Fixed (ECEF) Cartesian frame; the WGS-84 frame, used by GPS as the reference coordinate system, is one realization of the e-frame. It is defined as:
Origin - at the center of mass of the Earth
z-axis - parallel to the mean spin axis of the Earth
x-axis - pointing towards the mean meridian of Greenwich
y-axis - completing a right-handed orthogonal frame

The navigation frame (n-frame) is commonly used to describe the navigation of a vehicle in a local coordinate frame. It is also defined as a set of Cartesian coordinate axes, usually in the north-east-down (NED) directions:
Origin - at the origin of the sensor frame
x-axis - pointing to the north (parallel to the tangent to the meridian)
y-axis - pointing to the east, completing a right-handed orthogonal frame
z-axis - aligned with the ellipsoidal normal at the point, in the down direction

The body frame (b-frame) refers to the vehicle to be navigated; the conventional definition is along the forward, right and through-the-floor directions:
Origin - at the origin of the sensor frame
x-axis - pointing forward (the direction of instantaneous motion)
y-axis - pointing to the right, completing a right-handed orthogonal frame
z-axis - pointing down through the floor of the vehicle

The camera frame (c-frame) is approximately aligned with the b-frame if the camera “looks” in the forward direction. Its definition is:
Origin - at the perspective center of the camera
x-axis - pointing forward along the optical axis
y-axis - pointing to the right, completing a right-handed orthogonal frame
z-axis - pointing downward

In addition, there is a 2D coordinate system, the positive image coordinate system:
Origin - at the image point of the perspective center
x-axis - pointing to the right
y-axis - pointing upwards


Appendix B

Models of coloured noise

We derive the equivalent coloured noise models following the method introduced in Han (2010) and Han and Wang (2011). First, some preliminary facts about stochastic processes are needed. A coloured noise with a specific power spectral density (PSD) $S(\omega)$ can be written as (Brown and Hwang 1997, p. 137)

$S(\omega) = H(j\omega) \cdot 1 \cdot H(-j\omega) = |H(j\omega)|^2$   (B.1)

where $H(j\omega)$ is the transfer function of the shaping filter. The physical meaning of Eq. (B.1) is that a coloured noise $x(t)$ with a specific PSD can be obtained as the output of a shaping filter driven by unit white noise, as shown in Figure B.1.

Figure B.1 Shaping filter: unit white noise $u(j\omega)$ passes through $H(j\omega)$ to produce the coloured noise $x(j\omega)$.

Therefore, with the Allan variance analysis we can obtain the PSD of each coloured noise component within the inertial sensors. Then, by means of Eq. (B.1), the transfer function $H(j\omega)$ of each shaping filter is obtained. If the coloured noise has a rational spectrum, $H(j\omega)$ is also a rational function, which represents a finite-order linear system. But if the PSD is irrational, $H(j\omega)$ is an irrational function and represents a linear system of infinite order. In such cases, we should carry out a rational approximation instead.

Once $H(j\omega)$ is known, the following relation holds

$x(j\omega) = H(j\omega)\, u(j\omega)$   (B.2)

Since the input of the shaping filter is unit white noise, i.e. $u(j\omega) = 1$, the differential equation of the coloured noise is obtained by the inverse Fourier transform of Eq. (B.2).

B.1 Rate random walk

From Table 2.2, the PSD of the rate random walk is

$S_{rw}(\omega) = \dfrac{K^2}{\omega^2}$   (B.3)

where the relation $\omega = 2\pi f$ has been used.



With Eq. (B.1), the corresponding transfer function of the shaping filter is

$H_{rw}(j\omega) = \dfrac{K}{j\omega}$   (B.4)

By means of the inverse Fourier transform on both sides of Eq. (B.4), we obtain the differential equation of the rate random walk

$\dot d_{rw} = K\, u(t)$   (B.5)
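As a quick numerical illustration (not part of the original derivation), Eq. (B.5) can be simulated by integrating discrete white noise; the variance of the resulting rate random walk then grows linearly with time, $\mathrm{Var}[d_{rw}(t)] = K^2 t$. A minimal sketch in Python/NumPy, with illustrative (arbitrary) values for $K$ and the step size:

```python
import numpy as np

# Simulate Eq. (B.5): d_rw_dot = K * u(t), with u(t) unit white noise.
# Discretised: d[k+1] = d[k] + K * sqrt(dt) * w[k],  w[k] ~ N(0, 1).
# Theory: Var[d_rw(t)] = K^2 * t (linear growth with time).
K, dt, n_steps, n_paths = 0.2, 0.01, 1000, 4000
rng = np.random.default_rng(42)

w = rng.standard_normal((n_paths, n_steps))
d = K * np.sqrt(dt) * np.cumsum(w, axis=1)   # one random-walk path per row

t_end = n_steps * dt
var_sim = d[:, -1].var()        # sample variance across paths at t = t_end
var_theory = K**2 * t_end       # linear-in-time variance growth
print(var_sim, var_theory)
```

With many paths the sample variance matches the theoretical value to within a few per cent, which is a convenient sanity check when implementing the shaping filter.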

B.2 Bias instability

From Table 2.2, the PSD of the bias instability is

$S_{bi}(\omega) = \dfrac{B^2}{\omega}$   (B.6)

With Eq. (B.1), the corresponding transfer function of the shaping filter is

$H_{bi}(j\omega) = \dfrac{B}{\sqrt{j\omega}}$   (B.7)

From Eq. (B.7), we can see that the transfer function of the bias instability shaping filter is irrational. Therefore Eq. (B.7) cannot be implemented in real applications, due to the irrational term $\sqrt{j\omega}$. We can approximate the shaping filter by performing a Taylor expansion and keeping the first two terms:

$\sqrt{x} \approx \sqrt{x_0} + \dfrac{x - x_0}{2\sqrt{x_0}} = \dfrac{x + x_0}{2\sqrt{x_0}}$   (B.8)

Therefore Eq. (B.7) can be approximated by applying Eq. (B.8) as

$H_{bi}(j\omega) \approx \dfrac{2B\sqrt{j\omega_0}}{j\omega + j\omega_0}$   (B.9)

where $\omega_0$ is the expansion point.

Define $\beta = \sqrt{j\omega_0}$, and then rewrite Eq. (B.9) as

$H_{bi}(j\omega) \approx \dfrac{2B\beta}{j\omega + \beta^2}$   (B.10)

Performing the inverse Fourier transform on Eq. (B.10), we obtain the differential equation of the bias instability noise

$\dot d_{bi} + \beta^2 d_{bi} = 2\beta B\, u(t)$   (B.11)

From Eq. (B.11), we can see that it is actually a first-order Gauss-Markov process. In fact, the autocorrelation function of a Gauss-Markov process is $R(\tau) = \sigma^2 e^{-\beta|\tau|}$, and the corresponding differential equation is (Brown and Hwang 1997, pp. 195-196)

$\dot y + \beta y = \sqrt{2\beta}\,\sigma\, u(t)$

Please note that Eq. (B.11) is the exact equation deduced by the Taylor expansion, which differs from the direct application of the Gauss-Markov approximation in Han (2010) and Han and Wang (2011). Because the bias instability is a very low-frequency noise, the differences between the coefficient values are small.
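For use in a discrete-time filter, the first-order Gauss-Markov model above is usually discretised exactly. A short sketch (illustrative parameter values, real-valued decay constant) that checks the stationary variance and the exponential autocorrelation decay:

```python
import numpy as np

# Exact discretisation of a first-order Gauss-Markov process
#   y_dot = -beta * y + sqrt(2 * beta) * sigma * u(t)
# over a step dt:  y[k+1] = phi * y[k] + w[k],
#   phi = exp(-beta * dt),  Var[w] = sigma^2 * (1 - phi^2).
beta, sigma, dt, n = 0.5, 1.0, 0.1, 200_000
phi = np.exp(-beta * dt)
q = sigma**2 * (1.0 - phi**2)           # driving-noise variance per step

rng = np.random.default_rng(1)
w = rng.normal(0.0, np.sqrt(q), n)
y = np.empty(n)
y[0] = rng.normal(0.0, sigma)           # start in the stationary distribution
for k in range(n - 1):
    y[k + 1] = phi * y[k] + w[k]

var_sim = y.var()                        # should be close to sigma^2
lag = 10                                 # R(tau) = sigma^2 * exp(-beta * tau)
r_sim = np.mean(y[:-lag] * y[lag:])
r_theory = sigma**2 * np.exp(-beta * lag * dt)
print(var_sim, r_sim, r_theory)
```

The exact discretisation keeps the process stationary for any step size, which is why it is preferred over a naive Euler step when the update interval is not small compared with $1/\beta$.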

Finally, the relative magnitude error can be used to show the difference between the approximated and true transfer functions

$\varepsilon_{bi} = 10\log\dfrac{|\tilde H_{bi}(j\omega) - H_{bi}(j\omega)|}{|H_{bi}(j\omega)|}$   (B.12)

Substituting Eqs. (B.7) and (B.10) into Eq. (B.12), we can obtain

$\varepsilon_{bi} = 10\log\dfrac{(\bar\omega - 1)^2}{\bar\omega^2 + 1}, \qquad \bar\omega = \sqrt{\omega/\omega_0}$   (B.13)

It can easily be proved that $\varepsilon_{bi} < 0$ dB in the whole frequency domain.
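This claim can also be checked numerically, evaluating the complex transfer functions of Eqs. (B.7) and (B.10) directly over a wide frequency sweep (the values of $B$ and $\omega_0$ below are arbitrary):

```python
import numpy as np

# Relative magnitude error of the bias-instability shaping-filter
# approximation, Eq. (B.12): eps = 10*log10(|H_approx - H| / |H|).
B, w0 = 0.05, 1.0
w = np.logspace(-4, 4, 2000) * w0            # frequency sweep around w0
w = w[np.abs(w - w0) > 1e-9]                 # avoid the exact expansion point

H = B / np.sqrt(1j * w)                      # exact transfer function, Eq. (B.7)
beta = np.sqrt(1j * w0)
H_approx = 2 * B * beta / (1j * w + beta**2)  # two-term Taylor approx, Eq. (B.10)

eps_db = 10 * np.log10(np.abs(H_approx - H) / np.abs(H))
print(eps_db.max())   # stays below 0 dB across the sweep
```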

B.3 Rate ramp

From Table 2.2, the PSD of the rate ramp is

$S_{rr}(\omega) = \dfrac{R^2}{\omega^3}$   (B.14)

Similarly to Eq. (B.1), the corresponding transfer function of the shaping filter is

$H_{rr}(j\omega) = \dfrac{R}{(j\omega)^{3/2}}$   (B.15)

From Eq. (B.15), we can see that the transfer function of the rate ramp shaping filter is also irrational and cannot be implemented directly, due to the irrational term $(j\omega)^{3/2}$.


We approximate the shaping filter by performing a Taylor expansion in the same way. However, to make sure that the approximated transfer function has a higher order than the original one (i.e. to control the truncation error, as shown in Eq. (B.21) below), we perform the Taylor expansion up to the third term

$x^{3/2} \approx x_0^{3/2} + \dfrac{3\sqrt{x_0}}{2}(x - x_0) + \dfrac{3}{8\sqrt{x_0}}(x - x_0)^2 = \dfrac{3}{8\sqrt{x_0}}\,x^2 + \dfrac{3\sqrt{x_0}}{4}\,x - \dfrac{1}{8}\,x_0^{3/2}$   (B.16)

so that we obtain

$\tilde H_{rr}(j\omega) = \dfrac{R}{\dfrac{3(j\omega)^2}{8\sqrt{j\omega_0}} + \dfrac{3\sqrt{j\omega_0}}{4}\,j\omega - \dfrac{1}{8}(j\omega_0)^{3/2}}$   (B.17)

Define $\gamma = \sqrt{j\omega_0}$, and then Eq. (B.17) can be rewritten as

$\tilde H_{rr}(j\omega) = \dfrac{8\gamma R/3}{(j\omega)^2 + 2\gamma^2\, j\omega - \gamma^4/3}$   (B.18)

Similarly, we can get the differential equation for the rate ramp noise by the inverse Fourier transform of Eq. (B.18)

$\ddot d_{rr} + 2\gamma^2 \dot d_{rr} - \dfrac{\gamma^4}{3}\, d_{rr} = \dfrac{8\gamma R}{3}\, u(t)$   (B.19)

or, in normalized form with $\tilde R = 8\gamma R/3$,

$\ddot d_{rr} + 2\gamma^2 \dot d_{rr} - \dfrac{\gamma^4}{3}\, d_{rr} = \tilde R\, u(t)$   (B.20)

From Eq. (B.19), we can see that the rate ramp noise is also approximated by a Gauss-Markov-type shaping filter. The relative magnitude error can be calculated following Eq. (B.12)

$\varepsilon_{rr} = 10\log\dfrac{|\tilde H_{rr}(j\omega) - H_{rr}(j\omega)|}{|H_{rr}(j\omega)|}$   (B.21)

where the explicit frequency dependence follows by substituting Eqs. (B.15) and (B.18). Due to the low-frequency nature of the rate ramp noise, $\varepsilon_{rr} < 0$ dB can also be guaranteed.


Appendix C

Dot/cross product and skew-symmetric matrix

For two vectors $\mathbf a = [a_1, a_2, \ldots, a_n]^T$ and $\mathbf b = [b_1, b_2, \ldots, b_n]^T$, their dot and cross products are defined by

$\mathbf a \cdot \mathbf b = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \ldots + a_n b_n$   (C.1)

$\mathbf a \times \mathbf b = \|\mathbf a\|\,\|\mathbf b\|\,\sin(\theta)\, \mathbf n$   (C.2)

where $\theta$ is the angle between the vectors $\mathbf a$ and $\mathbf b$, and $\mathbf n$ is a unit vector perpendicular to the plane containing $\mathbf a$ and $\mathbf b$, in the direction given by the right-hand rule.

If the vector dimension is constrained to 3, i.e. $\mathbf a = [a_x, a_y, a_z]^T$, its skew-symmetric matrix (also written as $(\mathbf a\times)$) is defined by

$\mathbf A = \begin{bmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{bmatrix}$   (C.3)

Let $\mathbf a$, $\mathbf b$ and $\mathbf c$ be arbitrary three-dimensional vectors and $\mathbf A$, $\mathbf B$ and $\mathbf C$ the skew-symmetric matrices corresponding to these vectors. Then the following operations are equivalent:

$\mathbf A^T = -\mathbf A$   (C.4)

$\mathbf a \cdot \mathbf b = \mathbf a^T \mathbf b = \mathbf b^T \mathbf a$   (C.5)

$\mathbf a \times \mathbf b = \mathbf A\mathbf b = -\mathbf B\mathbf a$   (C.6)

$\mathbf a \times (\mathbf b + \mathbf c) = \mathbf a \times \mathbf b + \mathbf a \times \mathbf c$   (C.7)

$\mathbf a \times (\mathbf b \times \mathbf c) = (\mathbf a \cdot \mathbf c)\,\mathbf b - (\mathbf a \cdot \mathbf b)\,\mathbf c$   (C.8)

$\mathbf a \cdot (\mathbf b \times \mathbf c) = \mathbf a^T \mathbf B \mathbf c$   (C.9)

$\mathbf a \times (\mathbf b \times \mathbf c) = \mathbf A \mathbf B \mathbf c$   (C.10)

$\mathbf a \cdot (\mathbf b \times \mathbf c) = (\mathbf a \times \mathbf b) \cdot \mathbf c$   (C.11)

$(\mathbf a \times \mathbf b) \times \mathbf c = \mathbf A \mathbf B \mathbf c - \mathbf B \mathbf A \mathbf c$   (C.12)
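These identities are convenient to verify numerically; a short sketch with NumPy and random test vectors:

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix of a 3-vector, Eq. (C.3): skew(a) @ b == cross(a, b)."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))
A, B = skew(a), skew(b)

assert np.allclose(A.T, -A)                                   # (C.4)
assert np.allclose(np.cross(a, b), A @ b)                     # (C.6)
assert np.allclose(np.cross(a, b), -B @ a)                    # (C.6)
assert np.allclose(np.cross(a, np.cross(b, c)),
                   np.dot(a, c) * b - np.dot(a, b) * c)       # (C.8)
assert np.allclose(np.cross(a, np.cross(b, c)), A @ B @ c)    # (C.10)
assert np.allclose(np.cross(np.cross(a, b), c),
                   A @ B @ c - B @ A @ c)                     # (C.12)
print("all identities hold")
```

Such a check catches the most common implementation mistake, a wrong sign convention in the skew matrix.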


Appendix D

Direction cosine matrix and Euler angles

Consider two frames a and b with the same origin. The coordinates of an arbitrary vector $\mathbf p$ can be represented by $\mathbf p^a$ in the a-frame and $\mathbf p^b$ in the b-frame:

$\mathbf p^a = [x^a \;\; y^a \;\; z^a]^T$   (D.1)

$\mathbf p^b = [x^b \;\; y^b \;\; z^b]^T$   (D.2)

Then the following relation holds with the direction cosine matrix (or rotation matrix)

$\mathbf p^a = \mathbf R_b^a\, \mathbf p^b$   (D.3)

The matrix $\mathbf R_b^a$ transforms the coordinates of a vector expressed in the b-frame into the coordinates of the same vector expressed in the a-frame.

The relative orientation between two frames can be described by a sequence of rotations. The rotation matrices around the individual axes are given by

$\mathbf R_x(\theta_x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x \\ 0 & -\sin\theta_x & \cos\theta_x \end{bmatrix}$   (D.4)

$\mathbf R_y(\theta_y) = \begin{bmatrix} \cos\theta_y & 0 & -\sin\theta_y \\ 0 & 1 & 0 \\ \sin\theta_y & 0 & \cos\theta_y \end{bmatrix}$   (D.5)

and

$\mathbf R_z(\theta_z) = \begin{bmatrix} \cos\theta_z & \sin\theta_z & 0 \\ -\sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{bmatrix}$   (D.6)

where $\mathbf R_i(\theta_i)$ represents the rotation about the i-axis by the angle $\theta_i$, positive in the counterclockwise sense as viewed along the axis toward the origin (right-hand rule). Each rotation matrix is orthogonal, i.e. $\mathbf R_i^{-1}(\theta_i) = \mathbf R_i^T(\theta_i)$. If the a-frame is the result of rotating the b-frame, first about the z-axis by $\theta_z$, then about the new y-axis by $\theta_y$, and finally about the new x-axis by $\theta_x$, we get the total transformation matrix given by


$\mathbf R_b^a = \mathbf R_x(\theta_x)\, \mathbf R_y(\theta_y)\, \mathbf R_z(\theta_z)$   (D.7)

and this total transformation matrix is also orthogonal. It is particularly important that the transformation matrix depends on the sequence of the rotations, i.e.

$\mathbf R_x(\theta_x)\, \mathbf R_y(\theta_y) \ne \mathbf R_y(\theta_y)\, \mathbf R_x(\theta_x)$   (D.8)

The explicit form of Eq. (D.7) is given by

$\mathbf R_b^a = \begin{bmatrix}
\cos\theta_z\cos\theta_y & \sin\theta_z\cos\theta_y & -\sin\theta_y \\
-\sin\theta_z\cos\theta_x + \cos\theta_z\sin\theta_y\sin\theta_x & \cos\theta_z\cos\theta_x + \sin\theta_z\sin\theta_y\sin\theta_x & \cos\theta_y\sin\theta_x \\
\sin\theta_z\sin\theta_x + \cos\theta_z\sin\theta_y\cos\theta_x & -\cos\theta_z\sin\theta_x + \sin\theta_z\sin\theta_y\cos\theta_x & \cos\theta_y\cos\theta_x
\end{bmatrix}$   (D.9)

Then the Euler angles can be computed from Eq. (D.9) as

$\theta_x = \arctan\dfrac{R(2,3)}{R(3,3)}, \qquad \theta_y = -\arcsin R(1,3), \qquad \theta_z = \arctan\dfrac{R(1,2)}{R(1,1)}$   (D.10)

where $R(i,j)$ is the element of the $\mathbf R_b^a$ matrix at the i-th row and j-th column. $R(1,1)$ and $R(3,3)$ are not equal to zero, since the angular increments during the update interval are normally small angles. If the Euler angles can be treated as small angles, i.e. these approximations hold

$\sin\theta \approx \theta, \qquad \cos\theta \approx 1$   (D.11)

then Eq. (D.9) can be simplified as

$\mathbf R_b^a \approx \begin{bmatrix} 1 & \theta_z & -\theta_y \\ -\theta_z & 1 & \theta_x \\ \theta_y & -\theta_x & 1 \end{bmatrix} = \mathbf I - \mathbf E$   (D.12)

where $\mathbf E$ is the skew-symmetric matrix of the Euler angles:

$\mathbf E = \begin{bmatrix} 0 & -\theta_z & \theta_y \\ \theta_z & 0 & -\theta_x \\ -\theta_y & \theta_x & 0 \end{bmatrix}$   (D.13)
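A small numerical sketch of Eqs. (D.4)-(D.12): round trip from Euler angles to the DCM and back, plus the small-angle approximation. The angle values are arbitrary, and `arctan2` is used in place of `arctan` to keep the correct quadrant:

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def dcm(tx, ty, tz):
    # Eq. (D.7): rotate about z, then the new y, then the new x
    return Rx(tx) @ Ry(ty) @ Rz(tz)

def euler(R):
    # Eq. (D.10): recover the Euler angles from the DCM (0-based indexing)
    tx = np.arctan2(R[1, 2], R[2, 2])
    ty = -np.arcsin(R[0, 2])
    tz = np.arctan2(R[0, 1], R[0, 0])
    return tx, ty, tz

angles = (0.1, -0.2, 0.3)                   # rad
R = dcm(*angles)
assert np.allclose(R @ R.T, np.eye(3))      # orthogonality
assert np.allclose(euler(R), angles)        # round trip

# Eq. (D.12): small-angle approximation R ~ I - E
E = np.array([[0, -angles[2], angles[1]],
              [angles[2], 0, -angles[0]],
              [-angles[1], angles[0], 0]])
assert np.allclose(R, np.eye(3) - E, atol=0.1)
print(euler(R))
```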


Appendix E

Derivation of the WSS equivalence (also found in Han, 2010)

In order to prove the wide-sense stationarity (WSS) equivalence, two lemmas should be introduced first.

Lemma 1: Let $u(t)$ be white noise. Then for any non-negative integers $i \ge 0$, $j \ge 0$ the following correlation function can be obtained

$E[D^i u(t_1)\, D^j u(t_2)] = (-1)^j D^{i+j}\,\delta(t_1 - t_2)$   (E.1)

where $\delta(t)$ is the Dirac function, and $D$ is the differential operator.

Lemma 2: Let $u(t)$ and $v(t)$ be two independent white noises. Then the combined noise model

$s(t) = [f_0 u + f_1 Du + \ldots + f_{pn} D^{pn} u] + [g_0 v + g_1 Dv + \ldots + g_{qm} D^{qm} v]$

is a WSS process, i.e.

$E[s(t)] = m_x(t) = m_x(t + \tau)$   (E.2)

and

$E[(s(t_1) - m_x(t_1))(s(t_2) - m_x(t_2))] = C_x(t_1, t_2) = C_x(t_1 + \tau, t_2 + \tau) = C_x(t_1 - t_2, 0)$   (E.3)

where $m_x(t)$ and $C_x(t_1, t_2)$ are the mean and autocorrelation function, respectively, and $\tau$ is a time interval.

According to Lemma 2, the new stochastic process

$\tilde s(t) = e_0 w(t) + e_1 Dw(t) + e_2 D^2 w(t) + \ldots + e_l D^l w(t)$

is also a WSS process. Therefore the equivalence between $s(t)$ and $\tilde s(t)$ can be proved by the following two equations:

$E[s(t)] = E[\tilde s(t)]$   (E.4)

$E[s(t)\, s(t + \tau)] = E[\tilde s(t)\, \tilde s(t + \tau)]$   (E.5)

The means of $s(t)$ and $\tilde s(t)$ can be computed by

$E[s(t)] = (f_0 + f_1 D + \ldots + f_{pn} D^{pn})\, E[u(t)] + (g_0 + g_1 D + \ldots + g_{qm} D^{qm})\, E[v(t)] = 0$   (E.6)

and

$E[\tilde s(t)] = (e_0 + e_1 D + e_2 D^2 + \ldots + e_l D^l)\, E[w(t)] = 0$   (E.7)

Hence Eq. (E.4) holds.


The autocorrelation function of $s(t)$ can be computed by

$E[s(t)\, s(t+\tau)] = E\{[(f_0 + \ldots + f_{pn} D^{pn}) u(t) + (g_0 + \ldots + g_{qm} D^{qm}) v(t)] \cdot [(f_0 + \ldots + f_{pn} D^{pn}) u(t+\tau) + (g_0 + \ldots + g_{qm} D^{qm}) v(t+\tau)]\}$   (E.8)

Since $u(t)$ and $v(t)$ are independent of each other, Eq. (E.8) can be simplified as

$E[s(t)\, s(t+\tau)] = E[(f_0 + \ldots + f_{pn} D^{pn}) u(t)\,(f_0 + \ldots + f_{pn} D^{pn}) u(t+\tau)] + E[(g_0 + \ldots + g_{qm} D^{qm}) v(t)\,(g_0 + \ldots + g_{qm} D^{qm}) v(t+\tau)]$   (E.9)

According to Lemma 1, the autocorrelation function of $s(t)$ can be obtained as

$E[s(t)\, s(t+\tau)] = r_0\,\delta(\tau) + r_1 D^2\delta(\tau) + \ldots + r_l D^{2l}\delta(\tau)$   (E.10)

where $l = \max(pn, qm)$ and the coefficients $r_i$, $i = 1, 2, \ldots, l$, can be computed from Eq. (E.9).

The autocorrelation function of $\tilde s(t)$ can be computed by

$E[\tilde s(t)\, \tilde s(t+\tau)] = E[(e_0 + e_1 D + \ldots + e_l D^l) w(t)\,(e_0 + e_1 D + \ldots + e_l D^l) w(t+\tau)]$   (E.11)

According to Lemma 1, Eq. (E.11) can be transformed to

$E[\tilde s(t)\, \tilde s(t+\tau)] = e_0^2\,\delta(\tau) - (e_1^2 - 2e_0 e_2) D^2\delta(\tau) + \ldots + (-1)^{l-1}(e_{l-1}^2 - 2e_{l-2} e_l) D^{2(l-1)}\delta(\tau) + (-1)^l e_l^2 D^{2l}\delta(\tau)$   (E.12)

Recalling the original Eq. (2.40), with Eq. (E.12) we can obtain

$E[\tilde s(t)\, \tilde s(t+\tau)] = r_0\,\delta(\tau) + r_1 D^2\delta(\tau) + \ldots + r_l D^{2l}\delta(\tau)$   (E.13)

With Eqs. (E.10) and (E.13), we can see that Eq. (E.5) holds.

Q.E.D.


Appendix F

Derivation of the line match equation (also found in Weng et al. 1992)

Assume a coordinate system fixed on the camera, and a line that passes through a point $\mathbf x_p$ with direction $\mathbf l$ at time $t_0$. The projection normals at the two time instants ($t_0$ and $t_1$) are given by

$t_0: \quad \mathbf n_0 = \mathbf x_p \times \mathbf l$   (F.1)

$t_1: \quad \mathbf n_1 = (\mathbf R\mathbf x_p + \mathbf T) \times (\mathbf R\mathbf l) = \mathbf R\,((\mathbf x_p + \mathbf R^{-1}\mathbf T) \times \mathbf l) = \mathbf R\mathbf n_0 + \mathbf R\,(\mathbf R^{-1}\mathbf T \times \mathbf l)$   (F.2)

Multiplying Eq. (F.2) by $\mathbf R^{-1}$, we obtain

$\mathbf R^{-1}\mathbf n_1 = \mathbf n_0 + \mathbf R^{-1}\mathbf T \times \mathbf l$   (F.3)

Applying the identity $\mathbf a \times (\mathbf b \times \mathbf c) = (\mathbf a \cdot \mathbf c)\mathbf b - (\mathbf a \cdot \mathbf b)\mathbf c$ to Eq. (F.3) yields

$\mathbf n_0 \times \mathbf R^{-1}\mathbf n_1 = \mathbf n_0 \times (\mathbf R^{-1}\mathbf T \times \mathbf l) = (\mathbf n_0 \cdot \mathbf l)\,\mathbf R^{-1}\mathbf T - (\mathbf n_0 \cdot \mathbf R^{-1}\mathbf T)\,\mathbf l = -(\mathbf n_0 \cdot \mathbf R^{-1}\mathbf T)\,\mathbf l$   (F.4)

Please note that the last step in Eq. (F.4) utilizes the fact that $\mathbf n_0 \perp \mathbf l$.

Substituting $\mathbf n_0$ solved from Eq. (F.3) into the scalar product, we obtain

$\mathbf n_0 \cdot \mathbf R^{-1}\mathbf T = (\mathbf R^{-1}\mathbf n_1 - \mathbf R^{-1}\mathbf T \times \mathbf l) \cdot \mathbf R^{-1}\mathbf T = \mathbf R^{-1}\mathbf n_1 \cdot \mathbf R^{-1}\mathbf T = \mathbf n_1 \cdot \mathbf T$   (F.5)

Substituting Eq. (F.5) into Eq. (F.4), we get the final result

$\mathbf n_0 \times \mathbf R^{-1}\mathbf n_1 = -(\mathbf n_1 \cdot \mathbf T)\,\mathbf l$   (F.6)

Q.E.D.