
Improvement of Vehicle Motion Information by Visual Odometry

MASTER THESIS

For the fulfilment of the academic degree
M.Sc. in Automation and Robotics

Submitted by: Rajith Hebbal Ravikumar
Matriculation Number: 181059

Supervisor: Prof. Dr. Jian-Jia Chen
Department of Computer Science
Technical University of Dortmund

Co-Examiner: Dr. Holger Faisst
Passive Safety and Sensorics
Continental Automotive GmbH
Department of Electrical Engineering

Page 2: Improvement of Vehicle Motion Information by Visual

Acknowledgment

I would like to express my heartfelt appreciation to everyone who has sincerely helped me during the course of this master thesis.

Firstly, I would like to thank Mr. Michael Feser at Continental Automotive GmbH for providing the opportunity and the necessary resources to conduct this research study. I owe my deepest gratitude to my supervisors at Continental Automotive GmbH, Dr. Holger Faisst and Mr. Sergey Azarkevich, who guided me throughout the work and helped me focus in the direction the research demanded. Special thanks also go to my colleagues, who helped me in several circumstances.

I am grateful to my academic supervisor, Prof. Dr. Jian-Jia Chen from the Department of Computer Science, Technical University of Dortmund, for providing crucial feedback and guidance during this research.

I would like to thank my family and friends for their continuous support and encouragement, which helped me to successfully accomplish this research work.


Eidesstattliche Versicherung (Declaration in Lieu of Oath)

Surname, First Name                Matriculation No.

I hereby declare in lieu of oath that I have produced the present master thesis entitled "Improvement of Vehicle Motion Information by Visual Odometry" independently and without unauthorized outside assistance. I have used no sources or aids other than those stated and have marked all verbatim and paraphrased quotations as such. This thesis has not previously been submitted to any examination authority in the same or a similar form.

Place, Date                Signature

Official notification:

Anyone who intentionally violates a regulation of a university examination order concerning deception in examination performance is committing an administrative offence. This offence can be punished with a fine of up to EUR 50,000.00. The competent administrative authority for the prosecution and punishment of such offences is the Chancellor of the Technical University of Dortmund. In the case of multiple or otherwise serious attempts at deception, the examination candidate can also be exmatriculated. (Section 63 (5) of the North Rhine-Westphalia Higher Education Act - HG -)

Submitting a false declaration in lieu of oath is punishable by a prison sentence of up to 3 years or by a fine.

The Technical University of Dortmund may, where applicable, use electronic comparison tools (such as the software "turnitin") to check for administrative offences in examination procedures.

I have taken note of the above official notification:

Place, Date                Signature


Declaration of Authorship

I, Rajith Hebbal Ravikumar, declare that this thesis titled "Improvement of Vehicle Motion Information by Visual Odometry" and the work presented in it are my own. I confirm that:

• This work was done wholly or mainly while in candidature for a research degree at the Technical University of Dortmund

• Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated

• Where I have consulted the published work of others, this is always clearly attributed

• Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.

• I have acknowledged all main sources of help

• Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself

Place, Date

Surname, Name


Abstract

Driverless vehicles and Vehicle-to-Anything (V2X) systems rely heavily on accurate and reliable motion information as the vehicle travels through its surrounding (indoor and outdoor) environment. It is therefore essential to maintain precise vehicle information, such as the pose of the vehicle (position and orientation), for navigation. Furthermore, the most important challenge in autonomous applications is to obtain good estimates of the position and orientation values for improving performance. To accomplish this, the conventional localization approach seen in [1] uses measurements from a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU).

The major drawbacks of this conventional approach are:

1. Errors accumulate over time because of the drift present in the IMU measurement data, and

2. The GNSS signal is not always fully available or reliable, i.e. its strength is diminished in some regions, for instance in dense urban areas or inside tunnels.

It is therefore very difficult to obtain a good estimate of the localization information from these sensor measurements alone.

In this thesis work, we propose a stereo-vision-based vehicle localization method to overcome the drawbacks of the traditional approach mentioned above. In the proposed method, we use a visual odometry technique [2] together with the IMU, GNSS, and wheel odometry to estimate the pose of the vehicle in a GPS-denied environment. The measurement data from the multiple sensors are fused using an Extended Kalman Filter (EKF) approach [3][4] to provide more accurate and reliable motion information for autonomous vehicle navigation. The proposed method can operate with or independently of the GNSS system and can therefore be used as a backup to it. We have also tested the developed algorithm on both simulated and real data.

Keywords: Driverless vehicle, localization, vehicle pose, GNSS, IMU, visual odometry, multisensor data fusion, Extended Kalman Filter


Contents

1 Introduction
  1.1 Motivation
  1.2 Goals of the thesis
  1.3 Thesis Outline

2 Theoretical Background
  2.1 Inertial Measurement Unit
  2.2 GNSS/GPS
  2.3 Visual Odometry
    2.3.1 Egomotion
  2.4 Wheel Odometry

3 Mathematical Background
  3.1 State Space
  3.2 Linear System
  3.3 Non-Linear system and Linearisation
  3.4 State Estimation

4 Kalman Filter
  4.1 Overview of Kalman Filter
    4.1.1 Observability
  4.2 Continuous-time Kalman filter
  4.3 Discrete-time Kalman Filter
  4.4 Extended Kalman Filter
  4.5 Parameter tuning

5 Design and Implementation of an Algorithm
  5.1 Inputs to the algorithm
    5.1.1 Sensor Fusion
  5.2 Vehicle State Observer
  5.3 Outputs from the algorithm
  5.4 Proposed State of the art

6 Results and Discussion
  6.1 Simulation
    6.1.1 Behaviour of Position trajectory
    6.1.2 Position Error variation
    6.1.3 Variation of Camera Position
    6.1.4 Variation of Attitude values
      6.1.4.1 Yaw angle
      6.1.4.2 Roll and Pitch angle
    6.1.5 Attitude Error
    6.1.6 Camera attitude

7 Application to real world data
  7.1 Summary

8 Conclusion and Scope of Future Work
  8.1 Conclusion
  8.2 Scope of Future Work

Bibliography

List of Figures

2  Conceptual diagram of V2X communication module [5]
3  GPS signal projection in urban canyon [5]
4  Overview of the proposed algorithm [5][6]
5  Configuration of sensors in the car [5]
6  Alignment of accelerometer and gyroscope sensors on each axis [7]
7  Inertial Measurement Unit. (a) Block diagram of IMU (b) Miniature IMU [8]
8  Global Positioning System [9]. (a) GPS satellite constellation (b) Concept of positioning by the trilateration method (the red dot on the car represents the user's position)
9  WGS84 Reference Frame [10]
10 Overview of the visual odometry technique [11]
11 Different types of camera used in VO systems [12]. (a) Stereo camera. (b) Stereo omnidirectional. (c) Monocular camera. (d) Monocular omnidirectional
12 Stereo-camera system with rotation (roll, pitch, yaw) and translation (x, y, z) parameters [13]
13 Wheel odometry with optical encoder [12]
14 Block diagram of an observer
15 Basic diagram of Kalman filter representation
16 Structure of the Kalman filter
17 Discrete-time Kalman filter operation
18 High-level representation of Extended Kalman filter operation
19 Noise covariance effect on filter operation [14]
20 Overall system structure
21 Structure of the input segment
22 Structure of the sensor fusion method
23 Status of different sensor measurement data
24 Structure of the EKF functional blocks
25 Operational status of the EKF functional blocks
26 Control flow diagram
27 Dead reckoning [with GNSS, without camera]
28 Dead reckoning [without GNSS, without camera]
29 Dead reckoning [with camera, without GNSS]
30 Variation of position error vs. time [with GNSS, without camera]
31 Variation of position error vs. time [without GNSS, without camera]
32 Variation of position error vs. time [with camera, without GNSS]
33 Behaviour of delta-position vs. time [with camera, without GNSS]
34 Behaviour of yaw angle vs. time [with GNSS, without camera]
35 Behaviour of yaw angle vs. time [without GNSS, without camera]
36 Variation of roll and pitch angle [with GNSS, without camera]
37 Variation of roll and pitch angle [without GNSS, without camera]
38 Variation of attitude error vs. time [with GNSS, without camera]
39 Variation of attitude error vs. time [without GNSS, without camera]
40 Behaviour of delta-attitude vs. time [with camera, without GNSS]
41 Variation of attitude error vs. time [with camera, without GNSS]
42 Dead reckoning [without camera information]
43 Dead reckoning [with camera information]
44 Dead reckoning [without camera information]
45 Dead reckoning [with camera information]

List of Tables

1  State-space representation of a dynamic system [15]
2  Initialization
3  Prediction stage
4  Correction stage
5  Initialization
6  Prediction stage
7  Correction stage
8  Sensor measurement data description
9  Simulation scenarios
10 Mean error values for (a) Test track 1 (b) Test track 2


Chapter 1

Introduction

Autonomous vehicle technology enables a vehicle to guide itself and navigate its environment without human intervention. This technology has become a concrete reality and paves the way for future developments in which computers take over from the human driver. Autonomous navigation will bring fundamental changes to the traditional transportation system. For instance, the Vehicle-to-Anything (V2X) communication module allows vehicles to 'talk' to each other, so that information such as position, velocity, and braking status can be exchanged between them, as shown in figure 2. With this module, collisions between vehicles can be avoided efficiently and traffic congestion in urban areas can be reduced. Furthermore, traditional driving functions can be adequately replaced with new features in which driving is partially or fully automated. The vehicle can therefore assist itself while driving around the environment without human aid.[16]

Figure 2: Conceptual diagram of V2X communication module [5]


In order to achieve this goal, the automotive industry has started working on technological advancements in areas such as adaptive cruise control (ACC), automated parking systems, and crash warning systems, where the vehicle can make the necessary decisions based on the surrounding environmental conditions. In this regard, the fundamental challenge is to provide precise motion information about the vehicle (such as speed, orientation, and position). This motion information can then be incorporated into the autonomous technology to improve its performance while navigating the environment. Information about the vehicle's internal state can be obtained from different sensor modules such as GPS, lidar, radar, IMU, and camera. In addition to this sensor system, there must be a control module that uses the sensor measurements for detecting and avoiding obstacles in order to reach the target efficiently. As a consequence, an appropriate path for navigation can be identified.[17]

The second most important aspect of vehicle localization is to maintain the availability of accurate and reliable motion information over time. However, several factors, for example varying accuracy of the sensor measurements, especially under different weather conditions, make it difficult to obtain good estimates of the motion information. The GNSS system is the simplest technique available for localization, as it can provide absolute position information. However, GNSS has its own drawback: its signal may not be available in some regions, for example inside tunnels, in underground areas, and even in dense urban areas because of urban canyons, as in figure 3. In some situations the GNSS signal may also be subject to jumps (due to multiple reflections along the line of sight), spoofing, or jamming.[12][18]

Figure 3: GPS signal projection in urban canyon [5]

In the traditional localization method, as mentioned in [1], the vehicle information


such as position, velocity, and orientation is obtained from GPS, IMU, and wheel odometry, respectively. Unfortunately, as mentioned in the earlier section, GPS signals are not available in GPS-denied environments. We therefore cannot rely heavily on the GPS signal for estimating the vehicle position, because corrupted GPS signals give rise to wrong estimates of the vehicle parameters. Furthermore, the IMU and wheel odometry have their own drawbacks in their measurement data (i.e. they are prone to error accumulation over time), which leads to poor estimates of the motion information.

Considering all the aspects mentioned above, we propose an algorithm that overcomes the drawbacks of the traditional localization approach. In the proposed method, we develop a vision-based localization algorithm that works in conjunction with GNSS and other relevant signals for estimating the motion information. Figure 4 gives a brief idea of the proposed algorithm, in which the measurement data from different sensors are fused to produce the required output.

Figure 4: Overview of the proposed algorithm [5][6]

Today's vehicles demand better performance from every sensor module. Due to the limitations of each sensor, such as varying accuracy especially under different weather conditions, it is significantly difficult to improve the capabilities of an individual sensor. Fortunately, the overall performance of the system can be improved by


combining all the measurement data. The technique used to merge the measurement information from different sensors is called sensor fusion. With this approach, we can provide more accurate and reliable results than those of the individual sensors alone.[12][16]

In the vision-based localization approach, we use a series of stereo-camera images to extract the ego-motion information. The proposed algorithm is developed using an Extended Kalman Filter (EKF) to estimate 9 DoF (Degrees of Freedom) of vehicle information from the series of measurement data provided by the sensor modules (GPS, IMU, stereo camera, and wheel odometry). As a result, the proposed algorithm can precisely calculate the vehicle's current dynamic states even in a GPS-denied environment.

1.1 Motivation

The main objective of this thesis is to develop a feasible algorithm for estimating precise motion information, especially in GPS-denied environments. Such an algorithm can improve the performance of an autonomous car navigating its surrounding environment. More specifically, we wish to develop a method using localization sensors that provides estimates of good quality, very close to the actual values. Moreover, the proposed method must be robust to real-world (i.e. indoor and outdoor) environmental conditions. The efficiency of estimating the pose of the vehicle thus increases even under uncertain environmental conditions.[19]

In order to interpret the surrounding environmental conditions precisely, autonomous technology relies strongly on sensors to learn the behavior of the environment. Based on this information, the autonomous car makes the necessary decisions for driving around the environment. The most important aspect of enhancing the vehicle's performance is gathering precise motion information in every situation. Accordingly, the sensor modules play an important role in collecting the vehicle information, as can be seen in this thesis work.[19]

1.2 Goals of the thesis

The important goals of this thesis are to:

1. Design an algorithm that provides better estimates of the motion information in GPS-denied environments using multisensor data fusion

2. Simulate the algorithm (in MATLAB/Simulink), including the uncertainties in the system dynamics as well as in the measurement model

3. Implement and evaluate the algorithm on real-world data

4. Compare and analyze the results against the reference data


1.3 Thesis Outline

This thesis work deals with several different aspects; accordingly, the information required to accomplish the work is described in the corresponding chapters. The following list gives an overview of the chapters included in this report:

• Chapter 2 provides the prerequisite knowledge needed to understand the basic concepts of the thesis work

• Chapter 3 explains the mathematical background and its constraints

• Chapter 4 describes the filter used in the development of the algorithm

• Chapter 5 presents the implementation of the proposed algorithm in the MATLAB/Simulink environment

• Chapter 6 provides the simulation results of the developed algorithm

• Chapter 7 shows how the algorithm behaves with real-world data

• Chapter 8 presents the conclusion of the thesis work


Chapter 2

Theoretical Background

This chapter provides the theoretical basis for the research in this thesis. The fundamental challenge for a self-driving car is to learn the behavior of the surrounding environment, so that decisions can be made and executed in time, based on the surrounding conditions. In autonomous applications, the real-time environmental conditions are perceived through a set of sensors. The decision-making process is then carried out by the algorithm, which processes the surrounding conditions and controls the car's functions accordingly by manipulating the actuators. The sensors therefore play a vital role in autonomous functions for understanding the environment.[12]

Figure 5 depicts the arrangement of the different sensors in the car.

Figure 5: Configuration of Sensors in the car [5]


The sensor setup in the car is responsible for perceiving both the vehicle's internal state and information about the external environment. In this thesis work, we deal with measurement data from sensors such as the wheel odometer, IMU, GNSS, and stereo camera for gathering the relevant information, as seen in figure 5. Self-driving cars usually rely on a series of measurements from these sensors to control the car's functions in real-time scenarios. In this work, we combine the information from the different sensor modules with the help of a sensor fusion method to determine the motion information.

In chapter 5, we will discuss the sensor fusion approach and its realization in the developed algorithm in more detail. The essential benefit of a multisensor data fusion approach is that it provides a better interpretation of the situation: the final decisions are made by a single entity in a fused system, in contrast to individual systems in which the control actions are performed independently. In the following sections, we describe the different types of sensor modules used in the system for reaching the goal of this thesis.

2.1 Inertial Measurement Unit

An Inertial Measurement Unit (IMU) is a Micro-Electro-Mechanical System (MEMS) module that measures the linear acceleration and angular velocity acting on an object using a triad of accelerometers and a triad of gyroscopes. The IMU is usually referred to as a 6 Degrees of Freedom (DOF) sensor, since it contains 3 accelerometers and 3 gyroscopes. The accelerometers generate three acceleration signals along the coordinate axes of the object to which the sensor is attached. Similarly, the gyroscopes provide three angular velocity signals about the respective axes.[7]

Figure 6 shows the orthogonal configuration of the accelerometers and gyroscopes about the axes (it is important to place the axes orthogonally for calculating the object's orientation).

Figure 6: Alignment of the accelerometer and gyroscope sensors on each axis [7]


Sometimes this integration also includes a 3D magnetic field sensor, known as a magnetometer, which determines the magnetic field components along each coordinate axis of the object, in addition to the accelerometers and gyroscopes in the IMU module [12]. Combining all the above-mentioned sensors yields 9 DoF of vehicle information, and such a unit is therefore known as a 9 DOF IMU sensor [20]. In this thesis work, we use the combination of two sensor types (acceleration and angular velocity signals) for tracking the vehicle motion, i.e. a 6 DOF IMU module, as shown in figure 7.

Figure 7: Inertial Measurement Unit. (a) Block diagram of IMU (b) Miniature IMU [8]

Based on the movement of the object, the angular rotation is calculated by the integration module; the gyroscope reacts quickly to these movements to determine the rotation information. Furthermore, these angular rate measurements are not affected by linear or angular accelerations, even if the IMU is not located at the vehicle's Center of Gravity (CoG). IMUs are used in the automotive industry for tracking the vehicle motion by providing accurate vehicle data. It is possible to establish unmanned driving systems in the autonomous application field by integrating the IMU together with odometry and GPS sensors. As a consequence, the precise position and orientation can be determined by implementing these sensor modules


in an advanced Kalman filtering approach.[12][21]

One of the main reasons for using an IMU in estimating the pose of the vehicle is that it does not suffer from data dropouts such as line-of-sight obstruction [22]. All measurements from the IMU module are taken in the vehicle body coordinate system; an appropriate transformation from the IMU coordinate system into the coordinate system in which the algorithm is developed must therefore be made before using these measurements in the algorithm. In chapter 5, we will discuss the transformation from one coordinate system into another.
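To make the drift problem concrete, the following is a minimal sketch (not part of the thesis implementation) of how a yaw angle is obtained by integrating the IMU's z-axis angular rate, and how even a small constant gyro bias accumulates into a growing angle error; all signal values here are made up for illustration:

import numpy as np

def integrate_gyro_yaw(yaw0, gyro_z, dt):
    # Accumulate the z-axis angular rate into a yaw angle (rad).
    # A constant gyro bias grows linearly in the integrated angle,
    # which is exactly the error accumulation described above.
    return yaw0 + np.cumsum(gyro_z) * dt

dt = 0.01                                # hypothetical 100 Hz IMU
true_rate = np.zeros(1000)               # vehicle is actually not turning
measured = true_rate + 0.002             # small constant bias of 0.002 rad/s
yaw = integrate_gyro_yaw(0.0, measured, dt)
print(yaw[-1])                           # ~0.02 rad of drift after only 10 s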

2.2 GNSS/GPS

A global navigation satellite system (GNSS) is a system of satellites used for determining an accurate position on or marginally above the surface of the Earth [23]. A GNSS is a satellite-based navigation system that uses geo-spatial position and timing values for navigation around the Earth [24]. GNSS is the general term used across the globe and comprises GPS (American), GLONASS (Russian), Galileo (European), IRNSS (Indian), and other regional systems.[12]

The nominal GPS (Global Positioning System) constellation consists of 24 operating satellites orbiting the Earth. These operational satellites are equally distributed over six orbital planes, which are inclined at 55 degrees with respect to the equator to ensure coverage across the world [25], as shown in figure 8. GNSS/GPS is therefore used in many application domains for tracking and navigation purposes.[12]

Figure 8: Global Positioning System [9]. (a) GPS satellite constellation (b) Concept of positioning by the trilateration method (the red dot on the car represents the user's position)


A satellite-based navigation system determines the real-time position by calculating the relative distances between the satellites at known positions and the receiver, i.e. the travel time of the radio signals transmitted from the satellites along the line of sight is used to calculate the distances [26]. The position or location is then obtained by the concept of trilateration, which requires only four satellites, as can be seen in figure 8. The signals received from GNSS are given in the WGS84 coordinate system, which is depicted graphically in figure 9.

Figure 9: WGS84 Reference Frame [10]

More information about the WGS84 coordinate system can be found in [27]. The kind of GNSS information (in the WGS84 coordinate system) used for developing the algorithm is described in chapter 5.
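As a side note, the trilateration idea can be sketched numerically. The following is a simplified illustration (receiver clock bias and atmospheric effects are neglected, and all positions are made-up values, so it is not how a real GNSS receiver computes a fix): given known satellite positions and measured distances, the receiver position is found by Gauss-Newton least squares:

import numpy as np

def trilaterate(sat_pos, ranges, x0=np.zeros(3), iters=10):
    # Solve ||sat_i - x|| = range_i for x by Gauss-Newton least squares.
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x, axis=1)     # predicted distances
        J = (x - sat_pos) / d[:, None]              # Jacobian of d w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x += dx
    return x

# Synthetic check: four satellites, ranges computed from a known truth.
truth = np.array([1.0e6, 2.0e6, 3.0e6])
sats = np.array([[2.0e7, 0.0, 0.0], [0.0, 2.0e7, 0.0],
                 [0.0, 0.0, 2.0e7], [1.2e7, 1.2e7, 1.2e7]])
rho = np.linalg.norm(sats - truth, axis=1)
print(trilaterate(sats, rho))                       # ~ [1e6, 2e6, 3e6]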

2.3 Visual Odometry

In many application areas, such as robotics and automotive, odometry is used not only for estimating the distance traveled from the starting location, but also for estimating the entire trajectory of a moving object. In this method, the velocity measurements are integrated to estimate the position; the method is therefore sensitive to errors if the measured data are not free from noise [28]. The trajectory of a moving object can be calculated in many ways; one method is known as visual odometry, which is more accurate than the conventional techniques mentioned above. Visual odometry, shown in figure 10, is the process of incrementally estimating the 6-DOF trajectory of a moving object (movement of an object in three-dimensional space, i.e.


translation and orientation about three perpendicular axes) using a sequence of visual features with respect to a reference image.[12][29]

Figure 10: Overview of the visual odometry technique [11]

In this approach, a single camera or an array of cameras is attached rigidly to a moving object, for example a vehicle, and the sequences of images captured by those camera(s) are used to estimate the pose of the object. The process is termed Visual Odometry because the localization of the object is based only on the camera images. Moreover, the images contain an enormous amount of information, which is adequate to determine the movement of the object. VO therefore has a prominent role in automotive applications, for example vision-based driver assistance systems and assisted braking systems.[12][30]

VO is classified into two types based on the camera arrangement [31], as follows:

• If one camera is used in the arrangement, it is called Monocular Visual Odometry

• If two (or more) cameras are used, it is referred to as Stereo Visual Odometry

The figure below shows the different types of camera systems used for pose estimation of the object.


Figure 11: Different types of camera used in VO systems [12]. (a) Stereo camera. (b) Stereo omnidirectional. (c) Monocular camera. (d) Monocular omnidirectional

2.3.1 Egomotion

Determining the pose of the camera (which is attached to a vehicle) relative to the environment is termed Egomotion. Estimating the egomotion of a vehicle with the help of the VO technique plays a big role in the area of driverless or unmanned ground vehicles for navigation purposes, i.e. targets can be reached efficiently while avoiding obstacles. The pose of an object (the combination of position and orientation) is determined by calculating the rotation and translation parameters of that object with respect to a reference point using the sequence of images.[32]

The trajectory of the object can be obtained with the VO technique using the following approaches [12]:

1. Feature-based approach: feature points from the image are used

2. Appearance-based approach: uses the pixels of the image sequence

3. Hybrid of the feature- and appearance-based approaches

The objective of this thesis is not to develop the best possible algorithm for extracting information such as object detection, disparity, and tracking. Instead, we incorporate this information as obtained from an existing system. A stereo-camera setup attached to the moving vehicle is used in this thesis work for accomplishing its goal, as seen in figure 12.


Figure 12: Stereo-camera system with rotation (roll, pitch, yaw) and translation (x, y, z) parameters [13]

For finding the stereo correspondence, we use a feature-based approach between the pairs of images obtained from the stereo camera. In this approach, we detect feature points in the first image pair and then match those points with the next image pair. With this information we calculate the optical flow field, which gives the direction of motion of the camera for estimating its pose.[32]
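To give an impression of what such a feature-based pipeline looks like in code, the following is a minimal monocular sketch using the OpenCV Python API (it is not the implementation used in this thesis, which takes the detection, disparity, and tracking information from an existing system). The camera matrix K and two consecutive grayscale frames are assumed to be given; note that with a single camera the translation is recovered only as a direction, while the stereo setup resolves the scale from disparity-based depth:

import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    # 1. Detect corner features in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(img_prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    # 2. Track them into the current frame (sparse optical flow).
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_curr,
                                                   pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]
    # 3. Essential matrix from the correspondences, then decompose it
    #    into the rotation R and the (unit-norm) translation direction t.
    E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
    return R, t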

The advantage of VO is that it offers an effective solution for localization when the GPS signal is degraded due to multipath reflections and/or unavailable, for example inside a tunnel or underwater. VO is also unaffected by wheel slip due to unfriendly road conditions. In chapter 5, we will show how the visual odometry parameters are used in the algorithm development for determining the complete state of the vehicle precisely.

2.4 Wheel Odometry

When traveling in a car, for instance, we can read the total distance traveled from a device located on the car's dashboard, known as the odometer. Wheel odometry is an instrument that determines the distance traveled by measuring the number of wheel rotations and multiplying it by the circumference of the wheel, thereby estimating the change in the position of the object over time [12]. With encoders we can measure the wheel rotations, and these rotations are then translated into a linear velocity, as shown in figure 13.


Figure 13: Wheel odometry with optical encoder [12]

In wheel odometry, a node provides the wheel-speed information used to estimate the vehicle's linear velocity; the vehicle's steering angle is estimated from the servo status information. Wheel odometry can therefore provide the relative position of the object over time with respect to a reference point. It is prone to drift due to wheel slippage, which causes the error to accumulate over time; however, it has a high sampling rate and provides good short-term accuracy.[12][33]
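A minimal dead-reckoning sketch shows how encoder ticks could be turned into a trajectory; the signal names and parameters here are hypothetical, and the yaw rate is assumed to come from a gyro or a steering model:

import numpy as np

def wheel_dead_reckoning(ticks, yaw_rates, ticks_per_rev, wheel_radius, dt):
    # Integrate wheel-encoder ticks and a yaw-rate signal into a 2-D path.
    # Like any dead reckoning, this drifts without absolute corrections.
    circumference = 2.0 * np.pi * wheel_radius
    x = y = yaw = 0.0
    path = [(x, y)]
    for n, wz in zip(ticks, yaw_rates):
        v = (n / ticks_per_rev) * circumference / dt   # linear speed [m/s]
        yaw += wz * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        path.append((x, y))
    return np.array(path)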


Chapter 3

Mathematical Background

In control theory, we can model a physical system in mathematical terms in order to analyze its characteristics over time. The first task in analyzing a system is to develop a mathematical description of it. Such a description can be obtained as a set of mathematical equations that represent the dynamic characteristics of the system. These system equations are derived by applying physical laws to describe the system.[34]

3.1 State Space

As mentioned above, a physical system can be represented by mathematical equations for analysis. These equations can be a set of differential equations, known as ordinary differential equations (ODEs), algebraic equations, or a combination of both, known as differential-algebraic equations (DAEs). The complexity of a system increases as the number of its inputs and outputs increases, and representing the system with ODEs or DAEs then becomes cumbersome. This kind of problem can be alleviated by using a method known as state-space representation.[34]

In the state-space method, a system with inputs, outputs, and state variables is represented as a vector in Euclidean space, with each state variable on one axis; for n state variables the state space is an n-dimensional Euclidean space [35]. We can also describe an nth-order system (having an nth-order differential equation) by n first-order differential equations in matrix form [34]. Note that the state-space representation of a physical system need not be unique.[34]

In general, a system with p inputs, q outputs, and n state variables can be represented in the following state-space forms [34]:


Table 1: State-space representation of a dynamic system [15]

Continuous time-invariant:
  $\dot{x}(t) = A\,x(t) + B\,u(t)$
  $y(t) = C\,x(t) + D\,u(t)$

Continuous time-variant:
  $\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t)$
  $y(t) = C(t)\,x(t) + D(t)\,u(t)$

Explicit discrete time-invariant:
  $x(k+1) = A\,x(k) + B\,u(k)$
  $y(k) = C\,x(k) + D\,u(k)$

Explicit discrete time-variant:
  $x(k+1) = A(k)\,x(k) + B(k)\,u(k)$
  $y(k) = C(k)\,x(k) + D(k)\,u(k)$

where,

$x$ is the "state vector", $x \in \mathbb{R}^n$
$y$ is the "output vector", $y \in \mathbb{R}^q$
$u$ is the "input vector", $u \in \mathbb{R}^p$
$A$ is the $n \times n$ "state matrix"
$B$ is the $n \times p$ "input matrix"
$C$ is the $q \times n$ "output matrix"
$D$ is the $q \times p$ "feedthrough matrix"
$\dot{x} := \frac{d}{dt}x$
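As a brief worked example (a standard textbook system, not taken from this thesis), a mass-spring-damper $m\ddot{z} + c\dot{z} + kz = u$ with a position measurement becomes, with the states $x_1 = z$ and $x_2 = \dot{z}$, the continuous time-invariant form of Table 1:

$\dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x$

Here the second-order differential equation has been rewritten as two first-order equations, exactly as described above; $D$ is zero because the input does not act directly on the output.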

3.2 Linear System

A system is said to be linear if it obeys the principle of superposition (also known as the superposition property), i.e. it satisfies both the homogeneity and the additivity property. The linearity of a system always permits a good analysis of its dynamic behavior.[36]

Additivity property: If input $x_1$ gives the response $y_1$ and input $x_2$ gives the response $y_2$, then the summed input $(x_1 + x_2)$ produces the summed response $(y_1 + y_2)$.[36]

Effect(sum of causes) = sum(effects(single causes))

$y_1 = F(x_1)$

$y_2 = F(x_2)$

$F(x_1 + x_2) = F(x_1) + F(x_2) = y_1 + y_2$   (3.1)


Homogeneity property: If input $x$ gives the response $y$, then input $\alpha x$ gives the response $\alpha y$.[36]

$F(\alpha x) = \alpha\,F(x)$   (3.2)

where, α is a scalar quantity

3.3 Non-Linear system and Linearisation

This section deals with non-linear systems and how to analyse their behaviour using a linearisation method. A system is said to be non-linear if it does not obey the principle of superposition. For a non-linear system, the response to the sum of two inputs therefore cannot be calculated by considering the response to each input individually and then adding the results. In practice, no physical system operates completely in the linear region at all times; external influences acting on the system can drive it into the non-linear region for some periods of time. We can therefore say that purely linear systems do not exist in the real world, and analyzing non-linear systems is not as easy and simple as analyzing linear ones.[34]

A non-linear continuous dynamic system can be represented in the following form:

$\dot{x} = f(x, u)$   (3.3)

$y = h(x)$   (3.4)

where,

f and h are non-linear system functions

x and u are state and input vectors

Linear system theory cannot be applied directly to a non-linear function to analyse its dynamic behavior. In order to use linear system theory on a non-linear function, we must linearize the non-linear system around an operating/equilibrium point, which is accomplished using a linearization technique. The non-linear system then exhibits approximately linear characteristics, i.e. stable behaviour, in the vicinity of the operating point.[34]

Let us consider the non-linear system dynamics given in equations 3.3 - 3.4 and approximate them around the operating point $(x_0, u_0)$. The non-linear system can be linearized using a Taylor series expansion about the operating point as follows [34]:

we define,

$x = x_0 + \Delta x$, $u = u_0 + \Delta u$


The Taylor series expansion can be written as:

$\dot{x}_0 + \Delta\dot{x} = f(x_0 + \Delta x,\, u_0 + \Delta u) = f(x_0, u_0) + \left(\frac{\partial f}{\partial x}\right)_{x_0,u_0}\Delta x + \left(\frac{\partial f}{\partial u}\right)_{x_0,u_0}\Delta u + \dots$   (3.5)

Neglecting the higher-order terms (i.e. the second-order and higher terms) in the above equation leads to:

$\dot{x}_0 + \Delta\dot{x} \approx f(x_0, u_0) + \left(\frac{\partial f}{\partial x}\right)_{x_0,u_0}\Delta x + \left(\frac{\partial f}{\partial u}\right)_{x_0,u_0}\Delta u$   (3.6)

If $(x_0, u_0)$ satisfies the differential equation:

$\dot{x}_0 = f(x_0, u_0)$

then the above equation 3.6 reduces to:

$\Delta\dot{x} = \left(\frac{\partial f}{\partial x}\right)_{x_0,u_0}\Delta x + \left(\frac{\partial f}{\partial u}\right)_{x_0,u_0}\Delta u$   (3.7)

By re-defining:

$\Delta x \triangleq x$, $\Delta u \triangleq u$

equation 3.7 leads to:

$\dot{x} = A\,x + B\,u$   (3.8)

where,

$A = \left(\frac{\partial f}{\partial x}\right)_{x_0,u_0}, \qquad B = \left(\frac{\partial f}{\partial u}\right)_{x_0,u_0}$   (3.9)

Equations 3.8 and 3.9 describe the linearized version of the non-linear system given by equations 3.3 - 3.4, where $A$ and $B$ are the Jacobian matrices.

The linear mathematical model of a non-linear system is thus obtained by ignoring the higher-order terms of the Taylor series expansion and retaining only the linear terms. In this way we obtain a good linear approximation; the resulting model accurately represents the dynamics of the actual system only if the variables have small variations around the operating condition.[34]
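In practice, the Jacobians of equation 3.9 can also be approximated numerically by finite differences. The following sketch (an illustration under our own assumptions, not the thesis implementation) linearizes a simple kinematic vehicle model around an operating point:

import numpy as np

def numeric_jacobians(f, x0, u0, eps=1e-6):
    # Finite-difference approximation of A = df/dx and B = df/du at (x0, u0),
    # i.e. the matrices of equation 3.9.
    n, p = len(x0), len(u0)
    f0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, p))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(p):
        du = np.zeros(p); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

# Kinematic "unicycle" vehicle: states (X, Y, yaw), inputs (speed, yaw rate).
def f(x, u):
    return np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])

A, B = numeric_jacobians(f, np.array([0.0, 0.0, 0.1]), np.array([10.0, 0.0]))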


3.4 State Estimation

In this section, we discuss the state observer, which plays a prominent role in providing an estimate of the internal state of a given system. Sometimes the internal states of the system are not available for direct observation, but from the inputs and outputs of the system we can estimate the unmeasurable variables. A device or dynamic system used to estimate the states of a system is called a state observer, or simply an observer.[34]

In the real world, not every state variable of a physical system can be measured at every instant. This is because the measurements are often corrupted by external disturbances, and/or modeling errors in the process lead to experimental error. Consequently, the number of measured variables is usually smaller than the number of state variables. Without considering these problems, it is not possible to obtain a good estimate of the dynamic behavior. In this section, we discuss state estimation techniques for determining the states that are not directly measurable.

If the observer estimates all state variables of the system, regardless of whether some of them are unmeasurable, it is called a full-order state observer. An observer that estimates fewer than n state variables, where n is the dimension of the state vector, is called a reduced-order state observer [34]. An observer can be designed for either a continuous-time or a discrete-time system.

A full-state observer has the following characteristics [34]:

• The purpose of the observer is to generate an estimate of the state x(t) based on measurements of the system output y(t) and the system input u(t).

• The observer uses the mathematical model of the system in state-space representation. The A, B, C matrices of the system are thus assumed to be known exactly.

• In designing the observer, we assume that no measurement noise or external disturbances act on the dynamic system.

Figure 14 shows the general block diagram of a full-order state observer.


Figure 14: Block diagram of an observer
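A minimal numerical sketch of such a full-order (Luenberger) observer is given below; the model is a simple double integrator with a position-only measurement, and the gain L is a hand-picked illustrative value rather than a systematically designed one:

import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: position, velocity
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # only the position is measured
L = np.array([[2.0], [1.0]])             # observer gain; A - L C is stable here

def observer_step(x_hat, u, y, dt):
    # One Euler step of x_hat' = A x_hat + B u + L (y - C x_hat):
    # the model prediction is corrected by the measured output error.
    innovation = y - C @ x_hat
    return x_hat + dt * (A @ x_hat + B @ u + L @ innovation)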

The observer design approach discussed here is for linear systems. Unfortunately, we cannot use the same design approach for estimating the unmeasurable state variables if the system is non-linear in nature. Instead, an estimator for a non-linear system can be obtained by linearizing the non-linear system.

If the observer is designed for the linearized model, then a locally observable system will yield a good estimate of the states only in the vicinity of the equilibrium point. As the system moves away from that equilibrium point, it starts behaving non-linearly, and the observer may provide poor or wrong state estimates. We must therefore keep in mind that, for non-linear systems, the estimation works well only in the neighborhood of the equilibrium points of a locally observable system. With the linearized model we can design an observer for estimating the state locally, but for global state estimation we need a unique solution to the observability map.[34]

The observer considered in this section (for both the linear and the non-linear case) is a deterministic system, i.e. no measurement noise or unmeasured disturbances act on the system, and this approach gives good estimates only under that assumption. Otherwise, we must use an alternative approach known as the Kalman filter to obtain good estimates in a noisy environment, because it uses the statistical properties of the system, i.e. knowledge of the noisy environment, in its design. We will discuss the Kalman filter and its design in chapter 4.


Chapter 4

Kalman Filter

The Kalman filter (KF) is a well-known mathematical approach for the stochastic estimation of the internal states of a system from noisy measurements. In this chapter, we present an overview of the Kalman filter, together with the fundamentals of linear systems and probability theory that are needed for its development. In 1960, Rudolph E. Kalman addressed the limitations of the Wiener filter and presented a new method, now known as the Kalman filter, which proved to be less abstruse mathematically and better suited to computation because of its online data-processing property. This method is still used for solving estimation problems in a wide range of application areas.[37]

Section 4.1 gives a brief introduction to the Kalman filter and the basic concepts involved. Sections 4.2 and 4.3 present the Kalman filter algorithm for continuous- and discrete-time linear systems, together with the assumptions made in deriving the filter. In section 4.4, we discuss the extension of the Kalman filter to non-linear systems. The performance of the Kalman filter can be improved by tuning its parameters, which is discussed in section 4.5. The material in this chapter accumulates knowledge gathered from [38], [39], [40], [41], and [42].

4.1 Overview of Kalman Filter

Filtering is based on estimating the state vector with the aid of past measurements. The term "filter" reflects the fact that it removes or eliminates unwanted signals or disturbances.[43]

The optimal state vector x is the one that incorporates all the necessary information about the system. In modelling the system, the state vector describes how the states evolve over time. Sometimes there are hidden states and unmeasured variables due to unknown properties of the physical system. When describing the system dynamics, these unknown dynamic properties are often not modeled in the state vector; the optimal dynamic characteristics of the system then cannot be achieved without incorporating these unknown dynamics. One possible way of handling the unmodeled dynamic properties is to treat them as noise/disturbance acting on the system.[43]


Figure 15: Basic diagram of Kalman filter representation

The next challenging task is how to incorporate the measurement noise in modeling the system. This can be done by describing the noise as a stochastic process, using random variables defined by a probability density function (pdf); for example, additive white Gaussian noise can describe the unknown dynamic properties of the system.[43]

The Kalman filter is a method for estimating unknown variables from a series of measurements over time rather than from a single measurement alone [37]. Estimating the unknown variables with this approach provides more accurate results because it incorporates both statistical noise and other uncertainties. The Kalman filter is also known as a recursive data-processing algorithm, because it uses only the present input measurements and the previously calculated state with its uncertainty matrix to compute the present estimate; it therefore needs no additional past measurements for its computation. With knowledge of the system dynamics and the measurement model, we can design the Kalman filter.[38]

4.1.1 Observability

A system is said to be observable if the state x(0) can be determined uniquely from knowledge of the outputs y and the manipulated variable u. We must therefore check the observability condition before implementing the Kalman filter. The observability criterion makes it possible to determine the coupling between measured and unmeasured states.[44]

In order to derive a mathematical expression for checking the observability condition, let us consider the dynamic system defined by the following equations:

$\dot{x}(t) = A\,x(t) + B\,u(t)$   (4.1)


$y(t) = C\,x(t) + D\,u(t)$   (4.2)

If knowledge of the input vector u(t) and the output vector y(t) is sufficient to determine the initial state of the system $x(t_0)$, then the above system is said to be observable [44]. The criterion for the system to be observable is derived by assuming u(t) = 0. This leads to equations of the following form:

$y(0) = C\,x(0)$   (4.3)

$y(1) = C\,x(1) = C\,A\,x(0)$   (4.4)

$\vdots$

$y(n-1) = C\,A^{n-1}\,x(0)$   (4.5)

which can be expressed in matrix form as,

$\begin{bmatrix} C \\ C\,A \\ \vdots \\ C\,A^{n-1} \end{bmatrix} x(0) = \begin{bmatrix} y(0) \\ y(1) \\ \vdots \\ y(n-1) \end{bmatrix}$   (4.6)

From the definition of observability, we can calculate x(0) from the output y(t).

Therefore, the observability matrix is given by:

$M_{obs} = \begin{bmatrix} C \\ C\,A \\ \vdots \\ C\,A^{n-1} \end{bmatrix}$   (4.7)

The dynamic system defined by equations 4.1 - 4.2 is observable if and only if the observability matrix has full rank equal to n, where n is the order of the system. If the rank of $M_{obs}$ is n, then equation 4.6 has a unique solution.[44]


In equation 4.6, if the dimension of y(t) is smaller than that of x(t), the equation alone has no unique solution. The observability criterion determines whether there is a coupling between measured and unmeasured states. If (A, C) is observable, then the speed of convergence can be set arbitrarily by the choice of the error feedback matrix. Otherwise, only part of the system dynamics can be simulated, and the error converges to zero along with the states only if the system is stable.

Calculating the determinant of $M_{obs}$ indicates the rank of the system, and we can thereby interpret whether the system is observable:

• if the determinant is non-zero, the rank is full, which implies that the system is observable

• if the determinant is zero, the rank is not full, which implies that the system is not observable

If the system is not observable, then we cannot compute the steady-state value of the Kalman gain (which will be discussed in the upcoming section). This Kalman gain is used in the "Correction" stage to update the estimates with measurements of the (real) system.
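The rank test of equation 4.7 is easy to carry out numerically; the following small sketch (with a made-up discrete-time example) stacks the observability matrix and checks its rank:

import numpy as np

def observability_matrix(A, C):
    # Stack C, C A, ..., C A^(n-1) as in equation 4.7.
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # position-velocity model, dt = 0.1 s
C = np.array([[1.0, 0.0]])               # only the position is measured
M = observability_matrix(A, C)
print(np.linalg.matrix_rank(M) == A.shape[0])   # True: (A, C) is observable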

4.2 Continuous-time Kalman filter

The Kalman filter is known as an optimal estimation algorithm because it minimises the mean square error of the estimated parameters, provided the noise is Gaussian. Because of its recursive nature, new measurements are processed as they arrive, and the previously predicted states are updated sequentially. The KF is therefore used in many application areas, for instance for tracking in data prediction tasks. The description of the Kalman filter algorithm given here uses the techniques outlined in the previous sections.[41][42]

The Kalman filter presented below uses the continuous-time system model in state-space form [41][42], analogous to equations 4.1 - 4.2:

$\dot{x}(t) = A\,x(t) + B\,u(t) + w(t)$   (4.8)

$y(t) = C\,x(t) + v(t)$   (4.9)

Uncertain dynamic behavior acting on the system can be modeled by the term w(t); similarly, noise/disturbances in the system measurements can be modeled by v(t).[42] Here,

• x(t) and u(t) are the state and input vectors


• A and B are system and control matrices

• w(t) is the n x 1 disturbance or process noise vector of the state variables. We assume the process noise to be random (white-noise) errors with zero mean; it can therefore be represented by the covariance matrix Q

• C is the output matrix for output vector y(t).

• v(t) is the q x 1 measurement noise vector of the measurement variables. As with the process noise, we assume the measurement noise to be zero-mean additive white Gaussian noise, given by the covariance matrix R. The information obtained from a sensor has limitations associated with the physical medium.

If the measurement and process noise are independent of each other, their cross-correlation is zero, and the normal probability distribution functions are given by [41][42]:

$p(w) \sim N(0, Q)$
$p(v) \sim N(0, R)$   (4.10)

where, N represents the Normal or Gaussian distribution

The covariances of the measurement and process models might change with each time step or measurement; however, here we assume they are stationary over time and are given by [41][42]:

$E[w_k\,w_j^T] = Q\,\delta(k-j)$   (4.11)
$E[w_k] = 0$   (4.12)

$E[v_k\,v_j^T] = R\,\delta(k-j)$   (4.13)
$E[v_k] = 0$   (4.14)

$E[w_k\,v_j^T] = 0 \quad \forall\, j, k = 1, 2, \dots, n$   (4.15)

where,

n is the total number of sampling instants

E is the expectation operator

δ is the Kronecker delta function, defined as


$\delta(k-j) = \begin{cases} 0 & \text{if } k \neq j \\ 1 & \text{if } k = j \end{cases}$   (4.16)

Let us begin by defining the predicted state vector from the state dynamic equation [41][42]:

$\hat{x}_{t|t-1} = A_t\,\hat{x}_{t-1|t-1} + B_t\,u_t$   (4.17)

where,

$\hat{x}_{t|t-1}$ is the predicted state vector

$\hat{x}_{t-1|t-1}$ is the previously estimated state vector

$u$ is the input vector

$A$ and $B$ are the matrices defining the system dynamics

Next, the state error covariance matrix involved in calculating the prediction of the unknown variable $x_t$ is given by [41][42]:

$E[e_k\,e_k^T] = P_k$   (4.18)

where,

• P is the nxn error covariance matrix. The main diagonal elements of P are thevariances, while the off-diagonal terms of P are the covariances associated to thecorresponding state vector terms.

• $e_t$ is the difference between the true and estimated value of x and is given by

$e_t = x_t - \hat{x}_t$  (4.19)

Equation 4.18 may be expanded to give:

$P_{t|t-1} = E[(x_t - \hat{x}_{t|t-1})(x_t - \hat{x}_{t|t-1})^T]$  (4.20)

Taking the difference between equations 4.17 and 4.8 gives

$x_t - \hat{x}_{t|t-1} = A\,(x_{t-1} - \hat{x}_{t-1|t-1}) + w_t$  (4.21)

Equation 4.20 can then be re-written as


$P_{t|t-1} = E[(A(x_{t-1} - \hat{x}_{t-1|t-1}) + w_t)\,(A(x_{t-1} - \hat{x}_{t-1|t-1}) + w_t)^T]$
$= A\,E[(x_{t-1} - \hat{x}_{t-1|t-1})(x_{t-1} - \hat{x}_{t-1|t-1})^T]\,A^T + A\,E[(x_{t-1} - \hat{x}_{t-1|t-1})\,w_t^T] + E[w_t\,(x_{t-1} - \hat{x}_{t-1|t-1})^T]\,A^T + E[w_t\,w_t^T]$  (4.22)

Noting that the state estimation errors and the process noise are uncorrelated leads to:

$E[(x_{t-1} - \hat{x}_{t-1|t-1})\,w_t^T] = E[w_t\,(x_{t-1} - \hat{x}_{t-1|t-1})^T] = 0$  (4.23)

$P_{t|t-1} = A\,E[(x_{t-1} - \hat{x}_{t-1|t-1})(x_{t-1} - \hat{x}_{t-1|t-1})^T]\,A^T + E[w_t\,w_t^T]$  (4.24)

$P_{t|t-1} = A\,P_{t-1|t-1}\,A^T + Q_t$  (4.25)

The Kalman filter algorithm consists of an initialization step, after which it alternates between two stages, Prediction and Correction, essentially a set of mathematical equations that estimate the states by minimizing the error. [42]

Initialization stage: The operations in this step are executed only once.

$\hat{x}_0 = E[x_0]$  (4.26)

$P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]$  (4.27)

where,

Initial state estimate: $\hat{x}_{0|0} = \hat{x}_0$,

Initial state covariance matrix: $P_{0|0} = P_0$

Prediction Stage: In this stage, the KF calculates a new a priori estimate $\hat{x}_{t|t-1}$ at every step. The prediction equations, also known as time update or propagation equations, predict the new state of the system from the current state and also project the error covariance to the next time step.

The standard prediction stage equations are given by

$\hat{x}_{t|t-1} = A_t\,\hat{x}_{t-1|t-1} + B_t\,u_t$  (4.28)

$P_{t|t-1} = A_t\,P_{t-1|t-1}\,A_t^T + Q_t$  (4.29)


From equations 4.28 and 4.29 we can determine the a priori state estimate and error covariance in the prediction stage. [41][42]

Correction Stage: The correction stage is also known as the measurement update, where the operations use the measurement data. In this stage, the predicted state and error covariance are updated with the measurement information.

The equations executed in the correction stage are given by:

$K_t = P_{t|t-1}\,C_t^T\,(C_t\,P_{t|t-1}\,C_t^T + R_t)^{-1}$  (4.30)

K is known as the Kalman gain (we are not concerned with its derivation here; rather, we wish to understand its role in the filter's performance)

$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t\,(y_t - C_t\,\hat{x}_{t|t-1})$  (4.31)

$P_{t|t} = P_{t|t-1} - K_t\,C_t\,P_{t|t-1}$  (4.32)

The a posteriori state is calculated from equation 4.31 using the Kalman gain and the system measurements; in other words, the predicted states are corrected with the new measurement information [14][41][42]. Similarly, the error covariance is updated with the new measurement information in equation 4.32.

The time update equations produce a priori estimates for the next time step, and the measurement update equations are responsible for improving the a posteriori estimate by incorporating the new measurement into the a priori estimate. The Kalman filter therefore behaves as a predictor-corrector algorithm: for each time update and every new measurement, it runs in a cycle (the previous estimates become the input for the current state), executing the prediction and correction stage equations. It is not necessary to run all these equations in every cycle; in some cycles only a few of them are executed. [14][41][42]

Figure 16 illustrates the Kalman filter operation.


Figure 16: Structure of Kalman filter

From the above figure we can see that the time update equations could be run continuously to project forward in time without measurement updates, but only if the process model were perfectly accurate. In practice the process model is never exact, so we must periodically incorporate measurements to keep the solution from diverging. The estimation is based not only on the prediction or on the measurement updates; via the Kalman gain K, it combines the prediction and the measurements according to their covariances. [14][41][42]

4.3 Discrete-time Kalman Filter

This section describes the discrete-time Kalman filter. A major difference is the use of the discrete time index k; consequently, the estimation of the state and the measurements also occurs at discrete points in time [45]. The discrete-time Kalman filter has operations and assumptions similar to those of the continuous-time algorithm. [41][42]

The system model represented in state space is now in discrete time:

$x_{k+1} = A\,x_k + B\,u_k + w_k$  (4.33)

$y_k = C\,x_k + v_k$  (4.34)

The standard equations for the time and measurement updates [41][42] are presented in tables 2, 3 and 4.


Table 2: Initialization

$\hat{x}_0 = E[x_0]$  (4.35)

$P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]$  (4.36)

where,

Initial state estimate: $\hat{x}_{0|0} = \hat{x}_0$,

Initial state covariance matrix: $P_{0|0} = P_0$

Table 3: Prediction stage

$\bar{x}_k = A\,\hat{x}_{k-1} + B\,u_k$  (4.37)

$\bar{P}_k = A\,P_{k-1}\,A^T + Q$  (4.38)

The time update equations shown in table 3 estimate the state and covariance by projecting forward from time step k−1 to step k. The initial conditions were discussed in the previous section.

Table 4: Correction stage

$K_k = \bar{P}_k\,C^T\,(C\,\bar{P}_k\,C^T + R)^{-1}$  (4.39)

$\hat{x}_k = \bar{x}_k + K_k\,(y_k - C\,\bar{x}_k)$  (4.40)

$P_k = (I - K_k\,C)\,\bar{P}_k$  (4.41)

Equations 4.39 - 4.41 in the table above carry out the operations performed when there is a measurement update.

Figure 17 gives a complete picture of the Kalman filter operation by combining the equations from tables 3 and 4.


Figure 17: Discrete-time Kalman filter operation

Both the continuous-time and the discrete-time Kalman filter algorithms have the same functionality, irrespective of whether time is discrete ('k') or continuous ('t').
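To make the predictor-corrector cycle of tables 2 - 4 concrete, the following is a minimal sketch in Python/NumPy. The constant-velocity toy model, its matrices and the noise values are illustrative assumptions for demonstration only, not the vehicle model developed later in this thesis.

import numpy as np

def kf_predict(x, P, A, B, u, Q):
    # Time update (table 3): project the state and covariance forward.
    x_bar = A @ x + B @ u                      # eq. 4.37
    P_bar = A @ P @ A.T + Q                    # eq. 4.38
    return x_bar, P_bar

def kf_correct(x_bar, P_bar, y, C, R):
    # Measurement update (table 4): blend the prediction with the measurement.
    S = C @ P_bar @ C.T + R                    # innovation covariance
    K = P_bar @ C.T @ np.linalg.inv(S)         # eq. 4.39, Kalman gain
    x = x_bar + K @ (y - C @ x_bar)            # eq. 4.40
    P = (np.eye(len(x_bar)) - K @ C) @ P_bar   # eq. 4.41
    return x, P

# Toy constant-velocity model: state = [position, velocity]; position measured.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[0.05]])

x, P = np.zeros(2), np.eye(2)                  # initialization (table 2)
rng = np.random.default_rng(1)
accel = 0.1                                    # constant acceleration input
for k in range(100):
    x, P = kf_predict(x, P, A, B, np.array([accel]), Q)
    true_pos = 0.5 * accel * (dt * (k + 1))**2 # analytic truth from rest
    y = np.array([true_pos + 0.2 * rng.standard_normal()])
    x, P = kf_correct(x, P, y, C, R)

After the loop, x holds the a posteriori estimate and P its error covariance; each pass through the loop is one predict-correct cycle of figure 17.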

The sections above present the design of the Kalman filter algorithm for linear systems. However, many real systems have nonlinear characteristics, i.e. the process model and/or the measurement model is nonlinear. The Kalman filter operation discussed above cannot be applied directly to such nonlinear systems; an additional linearization step is needed before designing the Kalman filter for a nonlinear system. The following section presents the Extended Kalman Filter (EKF) for estimating nonlinear systems.

4.4 Extended Kalman Filter

In the earlier sections we discussed the Kalman filter for estimating the state of a system under the assumption that both the system and measurement models are linear. In this section, we show how to relax this linearity assumption and estimate the states with an appropriate nonlinear system model. A Kalman filter that uses a version of the system linearized about the optimal/nominal point is referred to as an Extended Kalman filter (EKF). In the following, we generalize the Extended Kalman filter equations for nonlinear system and measurement models. [41][42][45]

Just as the behavior of linear systems is governed by linear system and measurement models, the behavior of nonlinear systems is governed by a nonlinear system model and a nonlinear measurement model. Thus, by using a linearization technique (as discussed in chapter 3.3), we can extend the ideas and techniques of the linear Kalman filter to estimate a nonlinear system [41]. A nonlinear system can be linearized using a Taylor series expansion around the operating point, with partial derivatives of the process and measurement functions (neglecting the higher-order terms).

To begin the formulation of the EKF, let us consider a nonlinear dynamic system consisting of a nonlinear process model and a nonlinear measurement model, as given below:

$x_{k+1} = f(x_k, u_k) + w_k$  (4.42)

$z_k = h(x_k) + v_k$  (4.43)

where,

f is the nonlinear state transition function vector and h is the nonlinear measurement or output function vector

the random variables $w_k$ and $v_k$ represent the process and measurement noise, respectively

The nonlinear functions above must be linearized around the operating point before implementing the filter. This is done by determining the Jacobian matrices of the nonlinear functions at every time step:

$A = \left(\dfrac{\partial f}{\partial x}\right)_{x=\hat{x}_{k-1|k-1}}$  (4.44)

$B = \left(\dfrac{\partial f}{\partial u}\right)_{u=u_{k-1}}$  (4.45)

$C = \left(\dfrac{\partial h}{\partial x}\right)_{x=\hat{x}_{k|k-1}}$  (4.46)

where,

A and B are the Jacobian matrices of the system dynamics with respect to the states and inputs, and C is the Jacobian matrix of the system measurements

We can summarize the EKF algorithm using the Jacobian matrices above. The complete set of EKF equations is shown in tables 5, 6 and 7.

Table 5: Initialization

$\hat{x}_0 = E[x_0]$  (4.47)

$P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]$  (4.48)


Table 6: Prediction stage

$\bar{x}_k = A\,\hat{x}_{k-1} + B\,u_k$  (4.49)

$\bar{P}_k = A\,P_{k-1}\,A^T + Q$  (4.50)

Table 7: Correction stage

$K_k = \bar{P}_k\,C^T\,(C\,\bar{P}_k\,C^T + R)^{-1}$  (4.51)

$\hat{x}_k = \bar{x}_k + K_k\,(z_k - C\,\bar{x}_k)$  (4.52)

$P_k = (I - K_k\,C)\,\bar{P}_k$  (4.53)

The Kalman filter and Extended Kalman filter equations are similar to each other, with a few exceptions. In the EKF, the system and measurement Jacobian matrices must be calculated at the beginning of each cycle. The critical part of the EKF algorithm, compared to the KF, is updating the covariance matrices based on the linearized state equations [45]. The basic operation of the EKF is otherwise similar to that of the KF. Figure 18 depicts the complete functional operation of the Extended Kalman filter.

Figure 18: High-level representation of Extended Kalman filter operation
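As a minimal sketch of one EKF cycle, consider the toy two-state model below; the functions f and h and their Jacobians are illustrative assumptions, not the thesis model. Note that, as is common in EKF implementations, the state is propagated through the nonlinear f and h themselves, while the Jacobians are used only for the covariance and the gain; tables 6 and 7 above write the linearized form.

import numpy as np

dt = 0.01

def f(x, u):
    # Nonlinear process model: state = [heading, yaw_rate] (toy example).
    return np.array([x[0] + dt * x[1], x[1] + dt * u[0]])

def h(x):
    # Nonlinear measurement model: sine/cosine of heading (e.g. a compass).
    return np.array([np.sin(x[0]), np.cos(x[0])])

def jacobian_A(x, u):
    # Eq. 4.44: partial derivatives of f with respect to the state.
    return np.array([[1.0, dt], [0.0, 1.0]])

def jacobian_C(x):
    # Eq. 4.46: partial derivatives of h with respect to the state.
    return np.array([[np.cos(x[0]), 0.0], [-np.sin(x[0]), 0.0]])

def ekf_step(x, P, u, z, Q, R):
    # Prediction (table 6): propagate through f; use A only for the covariance.
    A = jacobian_A(x, u)
    x_bar = f(x, u)
    P_bar = A @ P @ A.T + Q
    # Correction (table 7): innovation uses h; gain uses the Jacobian C.
    C = jacobian_C(x_bar)
    K = P_bar @ C.T @ np.linalg.inv(C @ P_bar @ C.T + R)
    x_new = x_bar + K @ (z - h(x_bar))
    P_new = (np.eye(2) - K @ C) @ P_bar
    return x_new, P_new

# One cycle with made-up numbers.
x, P = np.array([0.1, 0.0]), np.eye(2)
x, P = ekf_step(x, P, u=np.array([0.05]), z=np.array([0.12, 0.99]),
                Q=1e-5 * np.eye(2), R=1e-2 * np.eye(2))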


4.5 Parameter tuning

The performance of the filters discussed above (both the Kalman filter and the Extended Kalman filter) depends not only on the process and measurement models of the system but also on the parameters involved in the filter design. Even a relatively simple or poor process model can provide satisfactory performance if adequate uncertainty about the process is described in the covariance matrix Q. Similarly, the amount of uncertainty in the measurement can be included via the selection of the covariance matrix R. [46]

The accuracy of the Kalman filter thus depends on the selection of its parameters during the design. The selection of the Kalman filter parameters $x_0$, $P_0$, Q and R is known as filter tuning [47]. If these parameters are chosen poorly, the performance of the Kalman filter may be inferior to that obtained by simpler techniques. [46]

$P_0$ is one of the important tuning parameters in KF design. In principle its value can be set somewhat arbitrarily, but the extremes matter: if $P_0$ is set to zero, the filter does not believe the measurements, so there is no potential to learn from them. On the other hand, if $P_0$ is set extremely large, the filter believes the measurements more than the state model, which leads to large fluctuations in the state and parameter estimates. Therefore, for good estimation the initial covariance $P_0$ is set neither to zero nor to a very large value. [46][47]

Figure 19: Noise Covariance effect on Filter Operation [14]

Similarly, the matrices Q and R have an even more significant effect on the overall performance of the filter. Q and R act as weighting factors between the prediction equations and the measurement equations and play a prominent role during the Kalman filter operation. Choosing a larger Q means assuming larger uncertainty in the real state variables of the system, i.e. the system model is trusted less. Similarly, choosing a larger R is equivalent to assuming larger uncertainty in the measurement, i.e. the measurement is trusted less. The filter's correction operation has less influence when the measurement is less trusted; conversely, it is applied more strongly when the system model is less trusted. Figure 19 illustrates the effect of the covariances on the filter operation. The aim of tuning is to get as close as possible to an optimal solution. [14][46]
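The weighting behavior of Q and R can be demonstrated in a few lines; the scalar random-walk model and the two tuning settings below are illustrative assumptions chosen only to make the trade-off visible.

import numpy as np

def run_filter(meas, Q, R):
    # Scalar Kalman filter for a random-walk state measured directly (A = C = 1).
    x, P, out = 0.0, 1.0, []
    for z in meas:
        P += Q                    # predict: model uncertainty grows by Q
        K = P / (P + R)           # gain weighs the model against the measurement
        x += K * (z - x)          # correct
        P *= (1.0 - K)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.cumsum(0.01 * rng.standard_normal(200))
meas = truth + 0.1 * rng.standard_normal(200)

smooth = run_filter(meas, Q=1e-5, R=0.5)      # large R: measurements trusted less
reactive = run_filter(meas, Q=1e-2, R=1e-3)   # large Q: model trusted less

Plotting the two outputs against the measurements reproduces the behavior of figure 19: the first setting yields a smooth but slowly reacting estimate, the second follows the noisy measurements almost exactly.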

In the next chapter, we discuss the EKF implementation in accordance with the requirements of this thesis. We will also see how the system and measurement modules are used in the EKF implementation to estimate the necessary information.


Chapter 5

Design and Implementation of an Algorithm

In the previous chapter, we saw how system and measurement models are used in designing the filter. Tracking the motion information of a vehicle driven around its environment plays a key role in autonomous technology. It can be achieved by using vehicle dynamics and a measurement model in the algorithm. Figure 5 shows the types of sensor modules used in developing the estimation algorithm.

This chapter describes the design of a filter model used for determining the vehicle motion information. To accomplish this, we need the full dynamic and measurement models of the system, as illustrated in figure 14. In the upcoming sections, we describe how these models are developed to estimate the vehicle's state using relative measurements. It is also important to know how the measurement data from each sensor module are used in the filter for estimating the vehicle's dynamic state.

To reach the goal of this thesis, an algorithm is developed by designing an Extended Kalman filter based on the vehicle's dynamics model. The filter is designed using linearized functions of the nonlinear system model around the operating point. In this work, the proposed algorithm for estimating the vehicle information is designed, implemented and tested in the MATLAB/Simulink environment.

Before getting into the filter design process, let us briefly look at the structure of the algorithm: the kinds of parameters used and how those parameters are incorporated in the design. For this purpose, consider the high-level representation of the system shown in figure 20.


Figure 20: Overall system structure

From the figure above we can see that three individual segments are involved in the development of the algorithm: the input, the state observer, and the output of the algorithm. We begin with the input segment, in which we discuss the kinds of measurement data from the different sensor modules used in the algorithm development.

5.1 Inputs to the algorithm

In chapter 2, we discussed the sensors involved in the development of the algorithm. In this section, we describe the types of measurement data provided by the sensor modules and how these data are incorporated in the algorithm to reach the desired output.

In figure 21 we can see the variety of measurement data used in designing the filter. The proposed method uses a sensor fusion approach to combine all the sensor data and thus provides more accurate and reliable results than any individual sensor. In the following subsection we look at sensor fusion techniques and their advantages.

The figure below depicts the flow of measurement data from the different sensor modules into the algorithm.


Figure 21: Structure of Input segment

5.1.1 Sensor Fusion

In driverless navigation systems, tracking the pose of the vehicle is critical because of the limited capabilities of sensors under different weather conditions. The different types of information received from the sensors also differ in accuracy under various conditions; for instance, a GNSS system fails to provide accurate and reliable information in a GPS-denied environment [48]. As a result, good estimation results cannot be obtained from an individual sensor's data alone. By fusing all the available information from several sensors, the performance can be improved, and in some situations the information from some sensors compensates for the failure of another sensor module. The approach used to merge different sensor information is called sensor fusion. [49]

Combining measurement data from several sensors reduces the uncertainty in determining accurate pose information compared to using individual sensors alone. Algorithms for multidata fusion can be obtained using the Kalman filter or Bellman's dynamic programming equations [50]. With the sensor fusion approach, the system performance is improved by reducing the bias and the effect of sensor noise, and the robustness of the system is also improved. Figure 22 depicts an overview of the sensor fusion technique. [51]

Table 8 gives information about each sensor's measurement data and the coordinate systems in which they are determined.


Table 8: Sensor measurement data description

Sensors | Measurement description | Coordinate system
1) IMU | 3D acceleration (ax, ay, az); 3D angular velocity (wx, wy, wz) | Vehicle body system
2) GNSS | 3D position (lat, lon, height); 3D velocity (ve, vn, vu) | Position in WGS84, velocity in navigation system
3) Wheel odometry | Wheel speed; steering wheel angle | Vehicle body system
4) Camera | Change in position (dPosX, dPosY, dPosZ); change in attitude (droll, dpitch, dyaw) | Vehicle body system

Figure 22: Structure of Sensor Fusion Method

In this work, we use signals from several sensors, including a stereo camera, IMU, GNSS and wheel odometry, for estimating the position, velocity, and orientation information of the vehicle. With the sensor fusion technique we can analyze the environmental conditions more precisely, which results in better decisions for manipulating the car's functions than those made from an independent sensor.


Figure 23: Status of different sensor measurement data

The figure above depicts the operational status of the sensor modules used in the system. In the graph, the x- and y-axes represent the time in msec and the availability of measurement data, respectively. Each sensor measurement is represented by a pulse: a positive magnitude indicates that the particular sensor's information is available to the algorithm at the corresponding time instant, while zero magnitude indicates that it is not. The measurement data from the IMU sensor are available to the system every 10 msec. Similarly, the wheel odometry data and the camera information are obtained every 10 msec and 60 msec, respectively. An information update from the GNSS sensor arrives every 100 msec.

5.2 Vehicle State Observer

The purpose of using the observer is to estimate internal parameters of the system which may or may not be readable from measurement devices. In chapter 4, we introduced the observer and its functionality. Due to the nonlinear nature of the system dynamics and the measurement model, we must use a stochastic model in designing the observer for better state estimation. This is done by designing an Extended Kalman filter for estimating the state in a noisy environment.

The first step in designing the filter is to model the full nonlinear dynamic system in state-space form. For this, consider the full nonlinear system and measurement models given by the equations:


$x_{k+1} = f(x_k, u_k) + w_k$  (5.1)

$z_k = h(x_k) + v_k$  (5.2)

In the first step, we begin with modeling the system dynamics by considering all the necessary input and state variables for designing the filter. The state vector $x_k$ and input vector $u_k$ of the system are described in the following sections.

In this thesis work, the complete dynamics of the system is represented by 18 state variables, as follows:

$x_k = [x(1)\;\; x(2)\;\; \cdots\;\; x(18)]^T = [lat\;\; lon\;\; height\;\; v_e\;\; v_n\;\; v_u\;\; roll\;\; pitch\;\; yaw\;\; w_{x0}\;\; w_{y0}\;\; w_{z0}\;\; a_{x0}\;\; a_{y0}\;\; a_{z0}\;\; posX\;\; posY\;\; posZ]^T$  (5.3)

where,

x(1) to x(3) are the vehicle's position information

x(4) to x(6) are the vehicle's velocity information

x(7) to x(9) are the vehicle's orientation information

x(10) to x(12) are the angular velocity offset values

x(13) to x(15) are the acceleration offset values

x(16) to x(18) give the camera position information

Similarly, the inputs acting on the system model are collected in the control vector $u_k$, as follows:

$u_k = [u(1)\;\; u(2)\;\; u(3)\;\; u(4)\;\; u(5)\;\; u(6)]^T = [w_x\;\; w_y\;\; w_z\;\; a_x\;\; a_y\;\; a_z]^T$  (5.4)


where,

u(1) to u(3) are the angular velocity values

u(4) to u(6) are the acceleration values

The information from the IMU sensor is used as the control input to the system model.
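For bookkeeping, the layout of the state vector in equation 5.3 and the input vector in equation 5.4 can be pinned down with index constants, as in the following sketch (the Python names are illustrative, not taken from the thesis implementation):

# Index constants for the 18-state vector (eq. 5.3) and the 6-input vector (eq. 5.4).
LAT, LON, HEIGHT = 0, 1, 2            # x(1)..x(3): position (WGS84)
VE, VN, VU = 3, 4, 5                  # x(4)..x(6): velocity (navigation frame)
ROLL, PITCH, YAW = 6, 7, 8            # x(7)..x(9): orientation
WX0, WY0, WZ0 = 9, 10, 11             # x(10)..x(12): angular velocity offsets
AX0, AY0, AZ0 = 12, 13, 14            # x(13)..x(15): acceleration offsets
POSX, POSY, POSZ = 15, 16, 17         # x(16)..x(18): camera position
U_WX, U_WY, U_WZ, U_AX, U_AY, U_AZ = 0, 1, 2, 3, 4, 5  # u(1)..u(6): IMU inputs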

Moreover, due to the nonlinear behavior of the system, we cannot use the system information directly for designing the filter; we need to linearize the system about the operating point. After applying the linearization technique to the system (as seen in chapter 3.3), the system and input matrices are given as follows:

$A = \left(\dfrac{\partial f}{\partial x}\right)_{x=\hat{x}_{k-1|k-1}}$  (5.5)

$B = \left(\dfrac{\partial f}{\partial u}\right)_{u=u_{k-1}}$  (5.6)

where,

A and B are the Jacobian matrices of the system model with respect to the state and input vectors, respectively

Since there are 18 state variables in the system, the size of the system matrix A is:

size[A] = 18 × 18

Similarly, there are 6 control variables in the input vector, so the size of the input matrix B is:

size[B] = 18 × 6

Due to their large size, it is impractical to list all the entries of the system and control matrices in this document. For developing the algorithm, every entry of those matrices has been considered carefully.
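Although the entries themselves are not listed, the shapes can be checked mechanically. A common practical alternative to hand-derived partials is a finite-difference approximation of the Jacobians in equations 5.5 and 5.6; the sketch below assumes only that a hypothetical process function f(x, u) implementing equation 5.1 (without the noise term) is available, and is not the derivation actually used in this work.

import numpy as np

def numerical_jacobian_x(f, x, u, eps=1e-6):
    # Approximates A = df/dx at (x, u); here of shape (18, 18).
    fx = f(x, u)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        J[:, i] = (f(x + dx, u) - fx) / eps
    return J

def numerical_jacobian_u(f, x, u, eps=1e-6):
    # Approximates B = df/du at (x, u); here of shape (18, 6).
    fx = f(x, u)
    J = np.zeros((fx.size, u.size))
    for j in range(u.size):
        du = np.zeros(u.size)
        du[j] = eps
        J[:, j] = (f(x, u + du) - fx) / eps
    return J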

The next step in the algorithm development is to build the measurement model in state-space form. Since we use measurement information from three different sensors, we must build a separate measurement model for each sensor module.

The measurement models of the system are given below:

1) Measurement model for Wheel Odometry

The velocity measurement vector is given by

$z_k = [z(1)\;\; z(2)\;\; z(3)]^T = [v_x\;\; v_y\;\; v_z]^T$  (5.7)


where z(1) to z(3) are velocity measurements in the vehicle body system

Wheel odometry provides measurements in vehicle body coordinates, whereas our model works in navigation coordinates. Thus, to calculate the measurement vector $z_k$ we must convert from navigation coordinates to vehicle body coordinates. This is done with the following transformation matrix; a numerical sketch of the matrix follows the entries below.

$R_{BN} = \begin{bmatrix} BN_{11} & BN_{12} & BN_{13} \\ BN_{21} & BN_{22} & BN_{23} \\ BN_{31} & BN_{32} & BN_{33} \end{bmatrix}$

where,

$BN_{11} = \cos(pitch)\cos(yaw)$
$BN_{12} = \cos(pitch)\sin(yaw)$
$BN_{13} = -\sin(pitch)$
$BN_{21} = \cos(yaw)\sin(pitch)\sin(roll) - \cos(roll)\sin(yaw)$
$BN_{22} = \cos(roll)\cos(yaw) + \sin(pitch)\sin(roll)\sin(yaw)$
$BN_{23} = \cos(pitch)\sin(roll)$
$BN_{31} = \sin(roll)\sin(yaw) + \cos(roll)\cos(yaw)\sin(pitch)$
$BN_{32} = \cos(roll)\sin(pitch)\sin(yaw) - \cos(yaw)\sin(roll)$
$BN_{33} = \cos(pitch)\cos(roll)$
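The entries of $R_{BN}$ can be cross-checked numerically. The sketch below builds the matrix exactly as listed above (a ZYX rotation sequence, angles in radians) and applies it to a hypothetical navigation-frame velocity, as needed for the wheel-odometry measurement model of equation 5.7.

import numpy as np

def rotation_nav_to_body(roll, pitch, yaw):
    # Builds R_BN entry by entry, exactly as listed above (ZYX convention).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cp * cy,                cp * sy,                -sp],
        [cy * sp * sr - cr * sy, cr * cy + sp * sr * sy,  cp * sr],
        [sr * sy + cr * cy * sp, cr * sp * sy - cy * sr,  cp * cr],
    ])

R_bn = rotation_nav_to_body(0.05, 0.02, 0.3)
assert np.allclose(R_bn @ R_bn.T, np.eye(3))   # rotation matrices are orthonormal
v_body = R_bn @ np.array([1.0, 0.5, 0.0])      # navigation-frame velocity -> body frame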

2) Measurement model for GNSS

The velocity measurement vector is given by

$z_k = [z(1)\;\; z(2)\;\; z(3)]^T = [v_e\;\; v_n\;\; v_u]^T$  (5.8)

where z(1) to z(3) are velocity measurements in the navigation system

No coordinate transformation is needed to determine the velocity measurements, because both the GNSS and the system model work in the navigation coordinate system.

The position measurement vector is given by

$z_k = [z(1)\;\; z(2)\;\; z(3)]^T = [lat\;\; lon\;\; height]^T$  (5.9)

where,


z(1) to z(3) are position measurements in the WGS84 system

Similarly, no coordinate transformation is needed to determine the position measurements, since both the GNSS and the system model work in the WGS84 system.

3) Measurement model for Camera

The position measurement vector is given by

$z_k = [z(1)\;\; z(2)\;\; z(3)]^T = [dPosX\;\; dPosY\;\; dPosZ]^T$  (5.11)

where,

z(1) to z(3) are change-in-position measurements in the camera system

The attitude measurement vector is given by

$z_k = [z(1)\;\; z(2)\;\; z(3)]^T = [dRoll\;\; dPitch\;\; dYaw]^T$  (5.12)

where,

z(1) to z(3) are change-in-attitude measurements in the camera system

The camera provides measurements in camera body coordinates, whereas our model works in vehicle coordinates. Thus, to calculate the measurement vector $z_k$ we must convert from vehicle coordinates to the camera coordinate system. This is done with the following transformation matrix:

$R_{BC} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(roll) & \sin(roll) \\ 0 & -\sin(roll) & \cos(roll) \end{bmatrix}$

Equation 5.2 represents the measurement model of the system, in which the output matrix C is the Jacobian matrix of the measurement model, owing to the nonlinear system characteristics. The output matrix C is therefore calculated as follows:

$C = \left(\dfrac{\partial h}{\partial x}\right)_{x=\hat{x}_{k|k-1}}$  (5.13)

The uncertainties present in the system and measurement models are modeled using the Q (process noise covariance) and R (measurement noise covariance) matrices, respectively.


All of the matrices and vectors mentioned above together form the linearized version of the original nonlinear system 5.1 - 5.2 about the operating point. An important part of the filter design is to check the necessary observability condition, as discussed in chapter 4.1.1. Using the Extended Kalman filter (EKF) approach, we develop the filter module for estimating the motion information of the vehicle.

The two fundamental steps that describe the operating principle of the Extended Kalman filter are:

1 Prediction step, also known as “Time update” and

2 Correction step, also known as “Measurement updates”

The necessary operations involved in the prediction and correction stages can be seen in figure 18. With the relevant vector and matrix information, the prediction and correction blocks are constructed. Since we use three sensor modules for estimating the motion information, there are three individual correction blocks in the filter, one for each sensor module:

1 Odometry correction block: the predicted states and error covariance are updated with the wheel odometry measurement information

2 GNSS correction block: the predicted states and error covariance are updated with the GNSS measurement information

3 Camera correction block: the predicted states and error covariance are updated with the camera measurement information

Figure 24 illustrates the operation of the Extended Kalman filter with the prediction block and the three correction blocks together.

Figure 24: Structure of EKF functional blocks


Here, the three correction blocks are arranged one after the other in series, as seen in figure 24. The operation of each correction block is executed only when there is a measurement update from the corresponding sensor module; if there is no measurement update from a particular sensor, its correction block is not executed. The operations involved in each correction block are given in table 7. With the functional blocks mentioned above, we achieve the complete operation of the Extended Kalman filter.

Figure 25: Operational status of EKF functional blocks

Figure 25 depicts the operational status of each functional block used in the filter, i.e. at what time instants the operations of the prediction and correction blocks are executed.

In the figure above, the x- and y-axes represent the time in msec and the status of each functional block of the Extended Kalman filter, respectively. The execution status of a functional block is represented by a pulse: a positive magnitude implies that the algorithm runs the corresponding functional block equations based on the sensor measurement information, while zero magnitude indicates that the block is not executed at that time instant.

From figure 9, we can see that the IMU information is updated every 10 msec. Since the prediction block uses the IMU measurement information, the prediction block is executed every 10 msec. Similarly, from the figure above we can see that the three correction blocks are executed at different time instants. This is because a corresponding correction block has been built for each sensor module; consequently, the operation of each correction block is controlled by the respective sensor module.


We begin with the odometry correction block. From figure 23 we can see that the wheel odometry measurement is updated every 10 msec; therefore, the operations of the odometry correction block are executed every 10 msec. Second, the camera correction block runs every 60 msec, when the camera measurement update takes place. In similar fashion, the GNSS correction block runs every 100 msec, driven by the GNSS measurement updates. If there is more than one measurement update at the same instant, the corresponding correction blocks are executed in the same cycle, because the measurements are independent of each other. For instance, at 60 msec both odometry and camera information are available; in this case, the odometry and camera correction blocks are both executed.
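The multi-rate scheduling described above can be summarized in a small sketch; the periods are taken from figure 23, while the predict/correct callables and the stand-in values are illustrative assumptions, not the actual Simulink blocks.

PERIODS_MS = {"odometry": 10, "camera": 60, "gnss": 100}

def run_cycle(t_ms, x, P, imu_u, measurements, predict, corrections):
    # Prediction block: executed every cycle (every 10 ms, with IMU input).
    x, P = predict(x, P, imu_u)
    # Correction blocks run in series (odometry -> camera -> GNSS), each only
    # at its own measurement instants; at t = 60 ms, for example, both the
    # odometry and the camera block fire.
    for name in ("odometry", "camera", "gnss"):
        if t_ms % PERIODS_MS[name] == 0 and name in measurements:
            x, P = corrections[name](x, P, measurements[name])
    return x, P

# Hypothetical stand-ins so the sketch runs; real blocks implement tables 6 and 7.
identity = lambda x, P, *args: (x, P)
x, P = run_cycle(60, 0.0, 1.0, imu_u=None,
                 measurements={"odometry": 1.0, "camera": 2.0},
                 predict=identity, corrections={k: identity for k in PERIODS_MS})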

5.3 Outputs from the algorithm

The last part of the algorithm development is the output segment, which provides the vehicle information determined by the algorithm. With the complete system and measurement models, the algorithm estimates the following parameters:

• Three-dimensional Velocity information in Navigation coordinate system

• Three-dimensional Position information in WGS84 coordinate system

• Three-dimensional Attitude information in Vehicle Body Coordinate system

5.4 Proposed State of the art

In this thesis work, we present an approach for improving the motion information of the vehicle (especially in GNSS-denied environments) using visual odometry. The proposed algorithm uses information from different sensors to provide a better estimate of the motion information as the vehicle drives through its environment.

The control flow of the proposed state of the art is presented in the figure below:


Figure 26: Control flow diagram

The flow graph above presents the operational flow of the proposed algorithm. The vision-based localization technique is developed to provide more accurate, reliable and available motion information in real-world environments. The complete filter model is built in the MATLAB/Simulink environment. The flow begins when the sensors input the details of the vehicle motion, such as angular velocity, acceleration, steering angle and relative position. These inputs are processed synchronously with the help of appropriate Simulink blocks, converted into a suitable format, and passed on as an InputData file to the state estimator.

The processed InputData file is passed to the state estimator, which is designed using the Extended Kalman Filter technique. The control flow sequence in the EKF is as follows:

1 Initialization phase: the initial state estimate and the initial error covariance information are passed to the prediction stage at time t=0, and control flows to the prediction stage. This step is executed only once, at the beginning of the filter process


2 Prediction stage: once the initial values are passed, the prediction block is executed at time t=0 (and thereafter every 10 msec). Control then flows to the correction stage at the next time instant, depending on the arrival of new sensor measurements. If there is no new measurement update from any sensor module, control remains in this stage.

3 Correction stage: depending on the new sensor measurement updates, the relevant correction blocks run in order from left to right; that is, the odometry, camera and GNSS correction blocks run every 10 msec, 60 msec and 100 msec, respectively. When a new measurement arrives, control goes to the preprocessing stage to fetch the new information before executing the correction blocks. Once the correction blocks are executed, control flows back to the prediction stage. Control keeps alternating between the prediction and correction stages until the necessary termination condition is reached, after which control goes to the stop stage.

4 Stop stage: control remains in this stage once the necessary termination condition is satisfied. Usually, the termination condition is specified by the time period

Once the algorithm terminates after meeting the necessary condition, the results of the algorithm, i.e. the estimated values of the vehicle motion information, are obtained. In chapters 6 and 7, we present the results of the algorithm tested against simulation and real-world data.


Chapter 6

Results and Discussion

This chapter gives the results of the algorithm developed in the previous chapters for simulation data. By comparing the algorithm output with reference data we can analyze whether it has reached the objective of the thesis; the reference data are used to validate the results of the algorithm. MATLAB/Simulink, a mathematical computation tool, is used to simulate real-time scenarios of the dynamic system and to analyze its dynamic behavior. Simulink is an approach for understanding the behavior of a system by means of a graphical representation. [52]

With the Simulink tool we can create a virtual platform for testing the algorithm in different scenario modes, which allows automakers to examine performance in various simulation models. Every phase in the virtual platform gives an idea of the fundamental development needed in the dynamic system to improve its real-time performance, made possible through the associated changes in the corresponding development models. The discussions of these virtual models that are potentially useful for this work are also pointed out in this chapter. The goal of this chapter is to provide a simple computation of the algorithm in the Simulink environment and to validate the algorithm with the reference data. [52]

6.1 Simulation

In this section, we discuss the behavior of the proposed algorithm by interpreting the simulation results. The analysis of the algorithm in the simulation environment gives an idea of how good the estimated results are under optimal conditions. The measurement model for the camera system is developed using the Symbolic toolbox in Simulink. Vehicle information such as acceleration, angular rate, steering angle, and position is used as input, and the outputs from the blocks are represented graphically in XY plots. By tuning the system parameters we can improve the algorithm's behavior; the performance of the algorithm is thus optimized through parameter tuning.

The simulated results of the algorithm presented below give an idea of the variation of the estimated parameters in the surrounding environment. For simulating the algorithm under ideal conditions, we made some assumptions about the camera module: the camera data are synchronized with the algorithm data (the delay in the camera measurement updates is neglected) and the camera parameters are properly calibrated.

The developed algorithm is simulated for three different situations, as depicted in table 9. In each situation, the relevant sensor measurement data are used for the simulation.

Table 9: Simulation scenarios

Situation | Description
1) With GNSS, without camera | Only good GNSS signals are available for estimating the desired information
2) Without GNSS, without camera | The GNSS signal is weak or not fully available
3) Without GNSS, with camera | Camera data are used to obtain better estimates in a GNSS-denied environment

6.1.1 Behaviour of Position trajectory

Let us begin by discussing the position estimation for all the simulation scenarios of table 9. The following plots show the variation of the position over time as the ego vehicle moves along the known trajectory.

Figure 27 shows the algorithm's position estimate when GNSS information is used.

Figure 27: Dead reckoning [with GNSS, without camera]


The green line in figure 27 depicts the GNSS signals used for estimating the position along the trajectory path. The blue and pink lines indicate the reference and observer data along the trajectory. In figure 27 we can see that there is no deviation between the reference data and the algorithm's estimate, which means that the algorithm provides a good estimate of the position when GNSS signals are available.

Figure 28: Dead reckoning [without GNSS, without camera]

Figure 29: Dead reckoning [with camera, without GNSS]

From figure 28 we can see that in the absence of a GNSS signal the deviation between the estimated and reference data is larger; that is, the algorithm gives a wrong position estimate in a GPS-denied environment. With the stereo-camera information we can reduce this deviation in the position estimate, as can be seen in figure 29.

6.1.2 Position Error variation

The following plots show how the position error changes as time progresses (for better visualization, we show the estimated result from 1240 msec to 1245 msec, but the original estimation covers the complete time period). The blue and green lines in the graph represent the reference data and the observer data, respectively, and the x- and y-axes give the position error in m and the time in msec, respectively.

Figure 30 shows the plot of the position error of the algorithm using the GNSS signal.

Figure 30: Variation of position error vs time [with GNSS, without camera]

The position error is calculated as the deviation between the estimated and reference data (the errors are determined in all three directions of the navigation coordinate system). In figure 30 we can see the change in the position error over time. The error is very small in all three axis directions, which means that the observer provides precise estimates.


Figure 31: Variation of position error vs time [without GNSS, without camera]

From figure 31 we can see that the amount of deviation increases in the GNSS-denied region: the error in all three axis directions keeps growing over time, which implies that the observer alone is not good enough to provide a correct estimate in a GPS-denied region. These errors have a strong impact on the position estimate along the trajectory (as seen in figure 28).

Figure 32: Variation of position error vs time [with camera, without GNSS]

Employing the camera information in the algorithm significantly reduces the deviation in the position estimate. From figure 32 we can see the improvement in the position error in all axis directions due to the camera data. As a result, the deviation in the position estimate along the trajectory is reduced (as seen in figure 29).

6.1.3 Variation of Camera Position

In this section we discuss how the camera information reduces the error between the estimated and reference data. In the following graph, the first three subplots from the top show the change in position in the x, y, and z directions, while the fourth subplot shows the status of the camera data used in the algorithm. The blue and green lines represent the change in observer data and the camera reference data, respectively.

The figure below depicts the position estimation using stereo-camera data in the GNSS-denied region (for better visualization, we show the estimated result from 55 msec to 75 msec, but the original result is computed for the complete time period).

Figure 33: Behaviour of delta-position vs time [with camera, without GNSS]

In the third scenario of table 9, the camera information (the change in position along all three axis directions) is used in the algorithm to improve the position estimate. From figure 33 we can see that the observer estimates are very close to the reference data, i.e. the error between them is small. Consequently, the camera data have a strong effect in reducing the deviation error in the position trajectory, as seen in figure 29.


6.1.4 Variation of Attitude values

6.1.4.1 Yaw angle

The following plots give the orientation information of the vehicle. Figure 34 shows the behavior of the yaw angle as the vehicle travels along the trajectory (using the GNSS signal).

The figure below gives the behavior of the yaw angle with respect to time. The blue, red and green lines represent the observer, reference, and GNSS data, respectively.

Figure 34: Behaviour of yawangle vs time [with GNSS, without camera]

In figure 35 we can see that there is some deviation between the observer and reference data in the GNSS-denied region.


Figure 35: Behaviour of yawangle vs time [without GNSS, without camera]

The yaw angle plays a prominent role in determining the vehicle heading, i.e. the vehicle's pointing direction. A poor GNSS signal may contain noise in its measurements, which leads to a wrong estimate of the yaw angle. This yaw angle error in turn results in the deviation in the trajectory plot seen in figure 28. As a result, the yaw angle estimate is significantly affected in the GNSS-denied region.

6.1.4.2 Roll and Pitch angle

Figures 36 and 37 show how the roll and pitch angles vary with respect to time. In these graphs, the top subplot shows the behavior of the roll angle, while the bottom subplot shows the variation of the pitch angle. The red and blue lines are the reference and observer data, respectively.


Figure 36: Variation of roll and pitch angle [with GNSS, without camera]

Figure 37: Variation of roll and pitch angle [without GNSS, without camera]

From the above plots we can say that the GNSS signal has no large effect on the roll and pitch angle values. However, as mentioned in the previous section, the GNSS signal has a strong effect on the determination of the yaw angle.


6.1.5 Attitude Error

Plots 38 and 39 show how the attitude error changes as time progresses. The blue and green lines represent the reference data and the observer data, respectively, and the x- and y-axes give the attitude error and the time, respectively.

Figure 38: Variation of attitude-error vs time [with GNSS, without camera]

Figure 39: Variation of attitude-error vs time [without GNSS, without camera]

From the above plots we can see that the error in estimating the roll and pitch angles increases in the GPS-denied environment.


6.1.6 Camera attitude

This section shows how the camera attitude information reduces the error between the estimated and reference data. In figure 40, the first three subplots from the top show the change in roll, pitch, and yaw angle with respect to time, while the fourth subplot shows the status of the camera data used in the algorithm. The blue and green lines represent the change in observer data and the camera reference data, respectively (for better visualization, we show the estimated result from 56 msec to 72 msec, but the original result is computed for the complete time period).

Figure 40: Behaviour of delta-attitude vs time [with camera, without GNSS]

Figure 40 shows the estimation of the camera attitude over time. Figure 41 shows how the attitude error varies with respect to time when the camera information is used.


Figure 41: Variation of attitude-error vs time [with camera,without GNSS]

The above graphs clearly show the impact of the camera information in improving the estimation in the GNSS-denied environment. Figure 41 shows that the amount of deviation is significantly reduced by incorporating the camera data. Accordingly, we can see the improvement in the position estimate with camera information in the GPS-denied environment, as shown in figure 29.


Chapter 7

Application to real world data

In the previous chapter, we presented the results of the algorithm for simulated data, where some assumptions were made about the stereo-camera system, namely that the camera parameters were calibrated properly; under such good simulated conditions the algorithm estimates the vehicle's dynamics well. The purpose of this chapter is to analyze the behavior of the algorithm in real-world scenarios. We have also made some necessary extensions to the algorithm for synchronizing the camera data. In this chapter, we present the results of the algorithm for two different real-world test scenarios (Test track-1 and Test track-2).

A car was equipped with the necessary setup and driven in the surrounding environment to collect real-time data from the different sensor modules: IMU, wheel odometry, stereo camera and GNSS. The algorithm was then tested on the collected data to show the impact of the camera information on estimating the motion information. In the first case, the stereo-camera information is not used for position estimation, while in the second case it is. In this way, we can see how the camera parameter values play a prominent role in the position estimation.

Table 10 gives the mean error values for the two test scenarios in a GPS-denied environment. The first column of table 10 gives the camera parameters used in the position estimation, while the second column gives the mean error values calculated between the camera and reference data. Similarly, the mean error values between the dead-reckoning observer and the reference data, i.e. where the observer uses only wheel odometry data (without GNSS, without camera) for the dead reckoning, are shown in the third column.


Table 10: Mean error values for (a) Test track-1 and (b) Test track-2

From table 10 we can see that for test track-1 the camera data deviate less from the reference data than the dead-reckoning values do. As a result, the algorithm with this camera data provides a better position estimate, as can be seen in figure 43. For test track-2, on the other hand, the camera data deviate more from the reference data than the dead-reckoning values do; therefore, the performance of the algorithm cannot be improved with this camera data, as can be seen in figure 45.
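As the thesis does not spell out the exact error metric, the following sketch assumes a plain per-axis mean absolute deviation between the estimated and reference trajectories; it is meant only to illustrate how values like those in table 10 could be computed.

import numpy as np

def mean_error(estimate, reference):
    # estimate, reference: arrays of shape (N, 3) in the same coordinate frame.
    return np.mean(np.abs(estimate - reference), axis=0)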

The following plots give the results of the algorithm for the test track-1 scenario. Figures 42 and 43 show the behavior of the position estimate along the trajectory.


Figure 42: Dead reckoning [without camera information]

Figure 43: Dead reckoning [with camera information]

From the above graphs, we can say that the algorithm provides a better position estimate when the camera data are used, which is consistent with the very small mean error value of the camera data's parameters. Still, some deviation of the estimated data from the reference data remains, because of the non-zero mean error in the camera data.


On the other hand, higher mean error values in the camera data can give bad estimation results. Figures 44 and 45 show the behavior of the position along the trajectory for the test track-2 data.

Figure 44: Dead reckoning [without camera information]

Figure 45: Dead reckoning [with camera information]

From the above plots, we can see that here the algorithm without camera data (which has the smaller error value) gives the better estimation results, although some deviation in the estimation remains due to the non-zero mean error value.

7.1 Summary

Considering all the above aspects, we can say that the algorithm provides a better position estimate only if the camera parameters have very low mean error values; otherwise, the algorithm without camera information gives the better estimates. Several negative aspects of the camera system, such as poor calibration, light conditions (direct sunlight, shadows) and image conditions (image blur, image scale variance), can lead to wrong extraction of motion information from the camera images, and these errors accumulate from one frame to another over time. The proposed algorithm can therefore produce bad estimates from such camera measurements [12]. Hence, we conclude that well-calibrated camera parameters are required for better estimation results.


Chapter 8

Conclusion and Scope of Future Work

In this chapter, we summarize the contributions of this thesis work and discuss its future development.

8.1 Conclusion

Autonomous vehicle development has been growing promisingly in both the academic and the industrial sector. The most important part of the autonomous domain is maintaining accurate vehicle information in real time, but due to several factors, such as susceptibility to disturbances and poor calibration, it is very difficult to maintain a good level of accuracy in this information. In this thesis work, we have demonstrated the problems caused by the GNSS module due to its limitations in certain environmental regions; as a result, the performance of the vehicle's motion information estimate decays for autonomous navigation.

A favorable approach has been proposed to overcome the accuracy limitations of the vehicle information in autonomous driving. In the proposed method, an algorithm has been designed, developed, and tested against simulation and real-time data to give an experimental analysis of the dynamic behavior of the system using stereo-camera data. The objective of using a sequence of stereo images is to improve the motion information for vehicle localization in GPS-denied regions. Several negative aspects of the camera system, such as poor calibration, light conditions and image conditions, lead to wrong extraction of the vehicle pose information from the camera images; therefore, a system with bad camera data produces false estimates of the motion information, as seen in chapter 7. In real-time scenarios, one of the fundamental challenges is to have well-calibrated camera information in order to reach the desired output. This thesis work can also benefit associated applications such as driver assistance systems, Vehicle-to-Anything (V2X) communication, pedestrian safety systems, collision avoidance, electronic stability control, passive safety systems and so on.

8.2 Scope of Future Work

This thesis has investigated motion information approaches for autonomous driving applications. Much related work can be done in the future to improve the accuracy and efficiency of the proposed algorithm. Including additional sensing modules can reduce the estimation errors and thereby improve the system performance. Moreover, the stereo-camera trade-offs must be taken into consideration in upcoming development work.

Some future work for improving the algorithm's performance when the stereo-camera system fails to provide precise ego-vehicle information is summarized below:

• The algorithm's efficiency can be enhanced by incorporating other localization techniques, such as lidar or radar, into the algorithm. However, we must be aware of the high computational cost required by those techniques

• Mapping technology can be integrated into the algorithm, thereby improving its robustness


Bibliography

[1] S. Wang, Z. Deng, and G. Yin, "An accurate GPS-IMU/DR data fusion method for driverless car based on a set of predictive models and grid constraints," Sensors, vol. 16, no. 3, p. 280, 2016.

[2] N. Nourani-Vatani, J. Roberts, and M. V. Srinivasan, "IMU aided 3D visual odometry for car-like vehicles," in Australasian Conference on Robotics and Automation, 2008.

[3] L. Wei, C. Cappelle, Y. Ruichek, and F. Zann, "Intelligent vehicle localization in urban environments using EKF-based visual odometry and GPS fusion," IFAC Proceedings Volumes, vol. 44, no. 1, pp. 13776–13781, 2011.

[4] D. C. Salmon and D. M. Bevly, "An exploration of low-cost sensor and vehicle model solutions for ground vehicle navigation," in Position, Location and Navigation Symposium (PLANS 2014), 2014 IEEE/ION, pp. 462–471, IEEE, 2014.

[5] https://www.continental-corporation.com/en/press/press-releases/2017-12-18-cellular-v2x-116994.

[6] https://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/?imm_mid=0d6ffb.

[7] A. King, "Inertial navigation - forty years of evolution," GEC Review, vol. 13, no. 3, pp. 140–149, 1998.

[8] http://smokespark.blogspot.de/2015/10/69-sense-hat-using-inertial-measurement.html.

[9] http://dannyblair.uwinnipeg.ca/gps.htm.

[10] https://commons.wikimedia.org/wiki/File:WGS_84_reference_frame_(vector_graphic).svg.

[11] http://www.cvlibs.net/datasets/kitti/.

[12] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5084145/.

[13] W. Zhao and N. Nandhakumar, "Effects of camera alignment errors on stereoscopic depth estimates," Pattern Recognition, vol. 29, no. 12, pp. 2115–2126, 1996.

[14] M. B. Rhudy, R. A. Salguero, and K. Holappa, "A Kalman filtering tutorial for undergraduate students."

[15] https://en.wikipedia.org/wiki/State-space_representation.

[16] J. M. Anderson, K. Nidhi, K. D. Stanley, P. Sorensen, C. Samaras, and O. A. Oluwatola, Autonomous vehicle technology: A guide for policymakers. Rand Corporation, 2014.

69

Page 80: Improvement of Vehicle Motion Information by Visual

BIBLIOGRAPHY

[17] M. Rajasekhar and A. K. Jaswal, “Autonomous vehicles: the future of automo-biles,” in Transportation Electrification Conference (ITEC), 2015 IEEE Interna-tional, pp. 1–6, IEEE, 2015.

[18] https://csl.illinois.edu/news/researchers-developing-solution-gps-denied-environments.

[19] M. Shelley, “Monocular visual inertial odometry on a mobile device,” Master’s thesis,Institut fur Informatik, TU Munchen, Germany, 2014.

[20] G. Wetzstein, “Ee 267 virtual reality course notes: 3-dof orientation tracking withimus,”

[21] B. Tessendorf, F. Gravenhorst, B. Arnrich, and G. Troster, “An imu-based sen-sor network to continuously monitor rowing technique on the water,” in IntelligentSensors, Sensor Networks and Information Processing (ISSNIP), 2011 Seventh In-ternational Conference on, pp. 253–258, IEEE, 2011.

[22] https://www.designworldonline.com/6dof-sensors-improve-motion-sensing-applications/.

[23] W. Lechner and S. Baumann, “Global navigation satellite systems,” Computers andElectronics in Agriculture, vol. 25, no. 1-2, pp. 67–85, 2000.

[24] A. El-Rabbany, Introduction to GPS: the global positioning system. Artech house,2002.

[25] U. Nations, “Current and planned global and regional navigation satellite systemsand satellite-based augmentations systems,” in Proceedings of ICG, pp. 15–40, 2010.

[26] G. Cook, Mobile robots: navigation, control and remote sensing. John Wiley & Sons,2011.

[27] N. TR8350, “2: Department of defense world geodetic system 1984,” Its Definitionand Relationships with Local Geodetic Systems,, vol. 4, 1997.

[28] D. Scaramuzza and R. Siegwart, “Appearance-guided monocular omnidirectionalvisual odometry for outdoor ground vehicles,” IEEE transactions on robotics, vol. 24,no. 5, pp. 1015–1026, 2008.

[29] J. Borenstein, H. Everett, L. Feng, et al., “Where am i? sensors and methods formobile robot positioning,” University of Michigan, vol. 119, no. 120, p. 15, 1996.

[30] Y. Cheng, M. Maimone, and L. Matthies, “Visual odometry on the mars explorationrovers,” in Systems, Man and Cybernetics, 2005 IEEE International Conference on,vol. 1, pp. 903–910, IEEE, 2005.

[31] D. Fernandez and A. Price, “Visual odometry for an outdoor mobile robot,” inRobotics, Automation and Mechatronics, 2004 IEEE Conference on, vol. 2, pp. 816–821, IEEE, 2004.

[32] S. Guo and C. Meng, “Monocular visual odometry and obstacle detection sys-tem based on ground constraints,” in International Conference on Social Robotics,pp. 516–525, Springer, 2012.

70

Page 81: Improvement of Vehicle Motion Information by Visual

BIBLIOGRAPHY

[33] https://github.com/AutoRally/autorally/wiki/Wheel-Odometry.

[34] K. Ogata, “Modern control engineering,” Book Reviews, vol. 35, no. 1181, p. 1184,1999.

[35] D. Ludwig, “The radon transform on euclidean space,” Communications on Pureand Applied Mathematics, vol. 19, no. 1, pp. 49–81, 1966.

[36] https://www.electrical4u.com/different-types-non-linearities-in-control-system/.

[37] https://en.wikipedia.org/wiki/Kalman˙filter.

[38] R. G. Brown and P. Y. Hwang, “Introduction to random signals and applied kalmanfiltering: with matlab exercises and solutions,” Introduction to random signals andapplied Kalman filtering: with MATLAB exercises and solutions, by Brown, RobertGrover.; Hwang, Patrick YC New York: Wiley, c1997., 1997.

[39] A. Kelly, “A 3d state space formulation of a navigation kalman filter for autonomousvehicles,” tech. rep., CARNEGIE-MELLON UNIV PITTSBURGH PA ROBOTICSINST, 1994.

[40] D. Simon, Optimal state estimation: Kalman, H infinity, and nonlinear approaches.John Wiley & Sons, 2006.

[41] https://www.coursehero.com/file/15139298/Kalman-filter/.

[42] R. Faragher, “Understanding the basis of the kalman filter via a simple and intuitivederivation [lecture notes],” IEEE Signal processing magazine, vol. 29, no. 5, pp. 128–132, 2012.

[43] R. Jansberg, “Tracking of an airplane using ekf and spf,” Master’s thesis, 2010.

[44] https://www.yumpu.com/en/document/view/35411311/state-estimation-with-kalman-filter/21.

[45] G. Bishop, G. Welch, et al., “An introduction to the kalman filter,” Proc of SIG-GRAPH, Course, vol. 8, no. 27599-23175, p. 41, 2001.

[46] https://www.researchgate.net/post/how˙to˙initialize˙the˙error˙covariance˙matrix˙and˙process˙noise˙covariance˙matrix˙How˙are˙they˙different˙and˙in˙what˙way˙they˙impact˙the˙filter.

[47] M. Shyam Mohan, N. Naik, R. Gemson, and M. Ananthasayanam, “Introduction tothe kalman filter and tuning its statistics for near optimal estimates and cramer raobound,” arXiv preprint arXiv:1503.04313, 2015.

[48] W. Elmenreich, “An introduction to sensor fusion,” Vienna University of Technology,Austria, 2002.

[49] https://medium.com/@wilburdes/sensor-fusion-algorithms-for-autonomous-driving-part-1-the-kalman-filter-and-extended-kalman-a4eab8a833dd.

[50] Y. Vershinin, “A data fusion algorithm for multisensor systems,” in InformationFusion, 2002. Proceedings of the Fifth International Conference on, vol. 1, pp. 341–345, IEEE, 2002.

71

Page 82: Improvement of Vehicle Motion Information by Visual

BIBLIOGRAPHY

[51] W. Elmenreich, “Sensor fusion in time-triggered systems,” 2002.

[52] O. Beucher and M. Weeks, Introduction to MATLAB & SIMULINK (A ProjectApproach). Laxmi Publications, Ltd., 2008.

72