
DEGREE PROJECT IN COMPUTER SCIENCE, SECOND LEVEL

STOCKHOLM, SWEDEN 2015

Visual Map-based Localization applied to Autonomous Vehicles

JEAN-ALIX DAVID

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION (CSC)


Visual Map-based Localization applied to

Autonomous Vehicles

Jean-Alix David
[email protected]

Master's Thesis in Computer Science at the School of Computer Science and Communication

Supervisor: Patric JENSFELT
Examiner: Stefan CARLSSON
INRIA Supervisor: Amaury NEGRE

June 2015


Abstract

This thesis is carried out in the context of Advanced Driver Assistance Systems, and especially autonomous vehicles. Its aim is to propose a method to enhance the localization of vehicles on roads. It suggests using a camera to detect lane markings, and to match these to a map to extract the corrected position of the vehicle.

The thesis is divided into three parts dealing with the map, the line detector and the evaluation. The map is based on OpenStreetMap data. The line detector is based on ridge detection. The results are compared with an Iterative Closest Point algorithm.

The thesis also focuses on implementing the components under real-time constraints. Technologies such as ROS, for synchronization of the data, and CUDA, for parallelization, are used.


Contents

Acknowledgments

List of Figures

1 Introduction
  1.1 Problem statement

2 Background

3 Methods
  3.1 OpenStreetMap data
    3.1.1 Basic structure
    3.1.2 Lane markings generation
  3.2 Ridge detector
    3.2.1 Theory
  3.3 ICP algorithm
    3.3.1 Matching
    3.3.2 Minimization

4 Tests and results
  4.1 Platform and test environment
    4.1.1 Platform
    4.1.2 Environment
    4.1.3 Experimental protocol
  4.2 Map
    4.2.1 Data storage
    4.2.2 Discussion
  4.3 Ridge detector
    4.3.1 Implementation
    4.3.2 Results
    4.3.3 Discussion
  4.4 ICP
    4.4.1 Implementation
    4.4.2 Results
    4.4.3 Discussion

5 Conclusion
  5.1 Future works

Bibliography

Appendices

A OpenStreetMap
  A.1 Node
    A.1.1 Point features
    A.1.2 Nodes on Ways
    A.1.3 Structure
  A.2 Way
    A.2.1 Types of way
  A.3 Relation
    A.3.1 Usage
    A.3.2 Size
    A.3.3 Roles
    A.3.4 Types of relation
    A.3.5 Examples
  A.4 Tag
    A.4.1 Keys and values

B ROS
  B.1 Robot Operating System
  B.2 ROS Concepts

C Platform
  C.1 Car
  C.2 Sensors
    C.2.1 Stereo camera
    C.2.2 RGB camera


Acknowledgments

I would like to thank:

– Amaury NEGRE, my supervisor at INRIA, for his help and advice.

– Christian LAUGIER and INRIA, for accepting me and allowing me to do my master's thesis in their lab.

– Patric JENSFELT, my supervisor at KTH, for his help and guidance.

– Stefan CARLSSON, my examiner for this thesis.


List of Figures

1.1 Approach
3.1 Representation of OpenStreetMap data on a top-down view
3.2 Line generator graph
3.3 Representation of the lane markings generation
3.4 Corrected data for a crossroads
3.5 Laplacian
3.6 Ridges detection
4.1 Lexus
4.2 Tested route
4.3 Ridges detection on highway
4.4 Ridges detection on residential road
4.5 ICP correction on highway
B.1 ROS concepts


1 Introduction

Advanced Driver Assistance Systems (ADAS) have been around for a long time. Good examples of such systems are the well-known Anti-lock Braking System (ABS) and Electronic Stability Program (ESP). They already provide a great increase in car safety for the driver, passengers and other road users.

However it is still possible to improve safety. The next step is the fully autonomous vehicle, which makes it possible to overcome human errors entirely. It will be achieved by high-level perception and decision algorithms and performant control of the vehicle. Given the lack of precision of GPS for localization, it is necessary to implement new ways to improve localization for precise control.

Here we introduce a method that uses a geographic map and images from a camera to localize the vehicle, by comparing them with an Iterative Closest Point (ICP) algorithm.

1.1 Problem statement

The purpose of this thesis is to implement a method to improve vehicle localization using visual information and a map.

We also want to satisfy the following constraints:

• Real-time processing

• Cheap equipment adapted to vehicle use

• Embedded on the vehicle

The method is described in figure 1.1. It has been divided into three parts:

• The treatment of the OpenStreetMap map

• The lane markings detection

• The evaluation including a comparison with the ICP algorithm


The first part consists of analyzing and adapting the map. We chose OpenStreetMap because it is free, open source and highly adaptable; in particular we can store and query the database directly on the car. The second part corresponds to the implementation of a line detector to detect the lane markings. We implemented a ridge detector as it is an efficient method and only requires a cheap monocular camera. The last part is the implementation of the comparison algorithm. We chose to implement an ICP algorithm because it is precise.

[Figure 1.1 (diagram): OpenStreetMap data → Map generator → Map data; Camera image → Ridge detector → Detected ridges; both feed the ICP, which outputs the corrected position.]

Figure 1.1: The three parts of the approach, map generation, ridge detection and comparison with the ICP algorithm, are combined as shown by this figure.


2 Background

It is important for autonomous vehicles to ensure precise control, and thus to have a precise localization. Moreover autonomous vehicles require global positioning for path planning, but affordable sensors that give an absolute localization, such as GPS, are not precise enough for control. To stay in a lane a vehicle needs centimeter-level precision, whereas a simple GPS is only accurate to a few meters. The sensors allowing good absolute precision are too expensive. Thus a local localization is needed, as well as a map to deduce the global positioning. We can achieve local localization with cheap sensors, for example with a camera.

Labayrade [1] proposed to generate a map using visual information and use it to improve lane detection, but he confines himself to lane detection. Parra [2] proposed to use visual information as odometry, making it possible to maintain localization even in case of a GPS blackout. To obtain good results he used a stereo camera to obtain the visual information, which is still expensive. We deduce from these works that using visual information is a good way to do localization. Moreover it is possible to find cheap monocular cameras giving good images.

In 2014 Mercedes-Benz achieved a 103 km journey with an autonomous vehicle using lanelets [3] as the map representation. Lanelets are a map representation defining the parts of the road where the vehicle can drive, like virtual rails. They are efficient features for localization, but as they are a complex handmade map representation, it is not practical to adapt them to any situation; thus we want to use a simpler and more global map. OpenStreetMap [4, 5], a free, open-source, user-generated map which provides a lot of up-to-date information about roads, is an excellent solution.

Several features can be used to track the road and match it to the map, such as the lanelets for example. Some methods use 3D cameras to detect the shape of the road. That is what Danescu [6] and Nedevschi [7] proposed. They both use stereo cameras to detect the curvature of the road. They are precise, as they give direct information on the position of the vehicle relative to the road. The drawback of stereo cameras is that they are more expensive than monocular cameras, and they require a lot of information on the road to be stored in the map. The same inconvenience applies to the methods using geometrical models of the road [8, 9, 10], which try to match the road seen on the image to a geometrical model, such as a clothoid curve. Other solutions use monocular cameras and different methods to track the road. Xu [11] showed that it is possible to use the Hough transform to detect curves. Kuk [12] and Liu [13] applied it to lane detection. The Hough transform can detect lines and curves, but it is not adapted to be used in correlation with the map, as the map is not in Hough space. Several methods have been proposed to use lane markings as visual features. Gruyer [14] proposed a method using a map of lane markings and two lateral cameras to detect the lane markings. This makes it possible to correct the lateral drift and localize the vehicle in its lane. During the Mercedes-Benz journey, lane markings were also used in addition to lanelets [15]. In a road context lane markings appear to be good features for localization: as they bound the roads, they are the least prone to changes, and they are normally present on every road. A drawback is that they can be temporarily invisible, for example during winter when the road is covered by snow, or during roadworks. However this drawback generally occurs in difficult situations, which would still require the driver's attention, so it can be ignored.

To detect lane markings, ridgeness is the most used feature. Nègre [16] proposed an algorithm using ridgeness to detect elongated structures in a scene at different scales. The algorithm detects ridge points and provides elongation and orientation, which is interesting as orientation can be another feature to match the lines of the map. López [17, 18, 19] applied ridgeness to lane marking detection. Kang [20] extended it to multi-lane detection. Ridgeness needs neither a priori information on the road nor an expensive sensor, as only a monocular monochrome camera is necessary, and it can easily be parallelized to keep performance high.

Lane markings have often been used to locate a vehicle on the road, but generally relative to a locally generated map. Here we want a global localization, which will allow us to combine precise local control with global positioning, for path finding for example. To make the local map generated by the sensors and the global map coincide we need an algorithm that can compare them. Different algorithms can be used to match detected lines to the map and correct the localization, for example filters, such as the Kalman Filter and its variants or the Particle Filter, as done in [6]. They offer good and smooth results while tracking a position, but lack precision in complex scenarios. For example, in a multi-lane scenario a particle filter evaluates two particles on parallel lanes the same way and is not able to choose the right one, as parallel lanes are not distinguishable. Iterative algorithms can also be used, such as the Iterative Closest Point (ICP) algorithm [21]. The advantage of such an algorithm is that it provides a precise localization, but the results are less smooth than the ones obtained with filters.


3 Methods

This chapter describes the theoretical approach for each part of the thesis.

3.1 OpenStreetMap data

To improve localization we need a precise map. Moreover we want it free and open source, because of our cost constraint. Thus OpenStreetMap [4, 5] appears as a good solution, as it is free, open source and user-made, which means it is simple to use and to adapt to our needs.

3.1.1 Basic structure

The OpenStreetMap data are composed of three basic primitives:

• Nodes

• Ways

• Relations

Nodes define geographical points with their latitude and longitude. This can either be a real physical object, for example a bus stop, or an imaginary point defining the shape of a road. Ways define more complex features such as roads and boundaries; they consist of ordered lists of nodes. The list of nodes defines the shape of the feature. If the first and last nodes of the way are the same then it is a closed way, which can define an area. Relations describe other constraints between nodes, ways and/or relations. All of them can have several associated tags describing the meaning of a particular element. A tag is composed of a key and a value. The key describes the class of the feature, for example "highway" means the feature is a road, and the value details the specific feature, for example the value of "highway" can be "motorway" or "residential". An extract of the OpenStreetMap wiki can be found in Appendix A for more details and examples; an example of OpenStreetMap data is shown in figure 3.1.

[Figure 3.1: Representation of OpenStreetMap data on a top-down view: (a) without satellite view, (b) with satellite view.]
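For illustration, here is a minimal hand-written fragment of OSM XML showing how these primitives fit together (the ids, coordinates and street name are invented for the example):

    <osm version="0.6">
      <!-- Nodes: geographical points defined by latitude and longitude -->
      <node id="1001" lat="45.2176000" lon="5.8069000"/>
      <node id="1002" lat="45.2180000" lon="5.8074000"/>
      <!-- A way: an ordered list of nodes, tagged as a residential road -->
      <way id="2001">
        <nd ref="1001"/>
        <nd ref="1002"/>
        <tag k="highway" v="residential"/>
        <tag k="name" v="Example Street"/>
      </way>
    </osm>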

3.1.2 Lane markings generation

A problem with OpenStreetMap is that it lacks data on lane markings. Thus lane markings have to be added to the map. This has been done semi-automatically using the data on roads and lanes. We created a new OpenStreetMap tag to identify lane markings. The key of the tag is "marking" and its value is either "middle" or "border" depending on the place of the line.

Then for each road we created as many lines as there are lanes on the road plus one, the border lines having the "marking" tag value "border", and the others the value "middle". This part was done automatically by what we call the line generator, as shown in figure 3.2. The line generator iterates over each way and then over each node of the way, duplicating the node the desired number of times to create the new ways representing the lane markings. As there is no convention for the position of the lanes relative to the coordinates of the nodes, we decided to set the coordinates as the center of the road and split the lanes on either side of the road.
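A minimal sketch of this duplication step, assuming a simplified in-memory representation of ways, a fixed lane width and a perpendicular-offset helper (all names here are hypothetical, not the thesis code):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Node { double x, y; };                       // position in a local metric frame
    struct Way  { std::vector<Node> nodes; int lanes; };

    // Unit vector perpendicular to the local direction of the way at node i.
    static Node perpendicular(const Way& w, std::size_t i) {
        const Node& a = w.nodes[i == 0 ? 0 : i - 1];
        const Node& b = w.nodes[i + 1 < w.nodes.size() ? i + 1 : i];
        double dx = b.x - a.x, dy = b.y - a.y;
        double n = std::hypot(dx, dy);
        return {-dy / n, dx / n};
    }

    // For each way, generate lanes+1 marking lines. The node coordinates are
    // assumed to be the center of the road; the lines are spread on both sides.
    std::vector<std::vector<Node>> generateMarkings(const Way& w, double laneWidth) {
        std::vector<std::vector<Node>> lines(w.lanes + 1);
        for (std::size_t i = 0; i < w.nodes.size(); ++i) {
            Node p = perpendicular(w, i);
            for (int l = 0; l <= w.lanes; ++l) {
                double offset = (l - w.lanes / 2.0) * laneWidth;  // centered on the road
                lines[l].push_back({w.nodes[i].x + p.x * offset,
                                    w.nodes[i].y + p.y * offset});
            }
        }
        return lines;  // lines 0 and lanes get marking=border, the others marking=middle
    }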


[Figure 3.2: Line generator graph. The line generator takes the raw data from the database, generates the lines and puts them back into the database. This is done offline, before manual correction.]

[Figure 3.3: Representation of the lane markings generation: (a) raw OpenStreetMap data, where a line represents a road, so the lanes cannot be seen; (b) modified OpenStreetMap data, where a line represents a lane marking, so the lanes can be seen.]


We finally manually corrected some places, essentially crossroads. This is done using a dedicated piece of software, where the user can independently move each node and way, using a satellite view as reference. This part was done offline, and the database was then ready to be used onboard by the other algorithms. It is a long and fastidious process, which shows another benefit of OpenStreetMap for a full-scale implementation, as it can be done by all contributors if we publish the new tags.

Figure 3.3 shows the result of the conversion of the data, and figure 3.4 shows a detailed view of a crossroads with a satellite image.

[Figure 3.4: Data were corrected for complicated areas such as crossroads; here, a detailed view of a corrected crossroads with satellite view.]

Finally the data stored are lists of points that represent lane markings. Each line can be seen as a list of segments, where the end of each segment is the beginning of the next one.

3.2 Ridge detector

This part of the report describes the method used to detect lane markings. It is based on the method proposed by López [18], and uses ridges as the feature to detect lines.

3.2.1 Theory

This method uses the Laplacian values of images to detect ridges, as proposed by Tran and Lux [22]. For each image the algorithm follows these steps:

1. Projection of the image onto the horizontal plane, using the camera position relative to the vehicle and the roll and pitch angles of the vehicle relative to the ground.

2. Computation of the Laplacian.

3. Elimination of pixels where the Laplacian is lower than a threshold.

4. Computation of the gradient and of the ratio between the Laplacian and the gradient.

5. Elimination of pixels where the ratio is lower than another threshold.

6. Computation of the Hessian matrix, its eigenvalues and their eigenvectors.

7. Elimination of pixels where the eigenvalues are almost equal.

The first step allows us to work in the same plane as the one of the map, and uses the camera calibration information and the roll and pitch angles of the vehicle to do the projection. Using the roll and pitch angles allows us to correct the projection; they are given by an Inertial Measurement Unit (IMU).

Then by keeping pixels with a high Laplacian value we keep only bright objects surrounded by darker zones, which correspond to the ridges. The Laplacian is defined as follows:

L(f(x, y)) = \frac{\partial^2 f(x, y)}{\partial x^2} + \frac{\partial^2 f(x, y)}{\partial y^2}

9

Page 17: Visual Map-based Localization applied to Autonomous Vehicles859759/FULLTEXT01.pdf · is a fully autonomous vehicle, which allows to totally overcome human er-rors. It will be achieve

CHAPTER 3. METHODS

Each derivative is calculated using a Sobel operator with a 5 × 5 kernel. Thus:

\frac{\partial}{\partial x} =
\begin{pmatrix}
1 & 2 & 0 & -2 & -1 \\
4 & 8 & 0 & -8 & -4 \\
6 & 12 & 0 & -12 & -6 \\
4 & 8 & 0 & -8 & -4 \\
1 & 2 & 0 & -2 & -1
\end{pmatrix}
\qquad \text{and} \qquad
\frac{\partial}{\partial y} =
\begin{pmatrix}
-1 & -4 & -6 & -4 & -1 \\
-2 & -8 & -12 & -8 & -2 \\
0 & 0 & 0 & 0 & 0 \\
2 & 8 & 12 & 8 & 2 \\
1 & 4 & 6 & 4 & 1
\end{pmatrix}

In figure 3.5 we can see the Laplacian of an image. Further steps are needed to extract the lane markings.

[Figure 3.5: Results of the Laplacian computation: (a) camera image, (b) projected image, (c) Laplacian of the image.]

The ratio between the Laplacian and the norm of the gradient allows us to remove the edges of these objects, keeping only the center part of each ridge. Indeed, in the middle of a bright object the gradient is almost zero, and on the edges it is very high. Thus by dividing by the norm of the gradient we remove pixels that correspond to the borders of bright objects.


Moreover this allows us, by choosing the threshold, to choose the size of the objects we want to keep, i.e. the width of the lane markings we detect. The norm of the gradient is defined by:

\|\mathrm{grad}(f)\| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}

The Hessian matrix gives us the direction of the ridges, so we can keep only the elongated ridges corresponding to lines, by keeping ridges where one eigenvalue of the Hessian matrix is greater than the other. The Hessian matrix is defined as follows:

H(f) =
\begin{pmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\
\frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2}
\end{pmatrix}

The second order derivatives are defined by composing the previous 5 × 5 Sobel kernels, which gives three 9 × 9 kernels for the second order.

The greatest eigenvalue of the Hessian matrix gives us the direction of the line in question. Then we search for the maximum value of the ratio defined earlier along the orthogonal direction. We only keep this maximum and discard the other values, which allows us to keep only one-pixel-wide lines. Having only one-pixel-wide lines minimizes the quantity of data sent as input to the ICP, which is important as it means less computation and thus better performance.

In figure 3.6 we can see the results of the algorithm. Image 3.6a is the one from the camera, image 3.6b is the projection of this image onto a top-down view and image 3.6c shows the detected lines. These results will be detailed and discussed later.

[Figure 3.6: Ridges detection: (a) input image, (b) projected image, (c) detected ridges.]
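To make the pipeline concrete, here is a minimal CPU sketch of steps 2 to 7 using OpenCV, assuming a single-channel top-down image; the threshold values are illustrative placeholders, not the ones used in the thesis, and the final non-maximum suppression along the ridge direction is left out:

    #include <algorithm>
    #include <cmath>
    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Keep only ridge pixels: strong Laplacian, high Laplacian/gradient ratio,
    // and a strongly anisotropic Hessian (one eigenvalue much larger than the other).
    cv::Mat detectRidges(const cv::Mat& topDown,
                         float laplacianThresh = 40.f,   // illustrative values
                         float ratioThresh     = 2.f,
                         float anisotropy      = 4.f) {
        cv::Mat img;
        topDown.convertTo(img, CV_32F);

        // First and second derivatives with 5x5 Sobel kernels (section 3.2.1).
        cv::Mat dx, dy, dxx, dyy, dxy;
        cv::Sobel(img, dx,  CV_32F, 1, 0, 5);
        cv::Sobel(img, dy,  CV_32F, 0, 1, 5);
        cv::Sobel(img, dxx, CV_32F, 2, 0, 5);
        cv::Sobel(img, dyy, CV_32F, 0, 2, 5);
        cv::Sobel(img, dxy, CV_32F, 1, 1, 5);

        cv::Mat mask(img.size(), CV_8U, cv::Scalar(0));
        for (int r = 0; r < img.rows; ++r) {
            for (int c = 0; c < img.cols; ++c) {
                // Bright line on a dark road: the Laplacian is strongly negative.
                float lap = -(dxx.at<float>(r, c) + dyy.at<float>(r, c));
                if (lap < laplacianThresh) continue;
                // Ratio test removes the edges of bright objects (high gradient).
                float g = std::hypot(dx.at<float>(r, c), dy.at<float>(r, c));
                if (lap / (g + 1e-6f) < ratioThresh) continue;
                // Eigenvalues of the 2x2 Hessian [dxx dxy; dxy dyy].
                float a = dxx.at<float>(r, c), b = dxy.at<float>(r, c),
                      d = dyy.at<float>(r, c);
                float h = std::sqrt(std::max(0.f, (a - d) * (a - d) / 4 + b * b));
                float e1 = std::fabs((a + d) / 2 - h), e2 = std::fabs((a + d) / 2 + h);
                // Keep elongated structures only: the eigenvalues must differ clearly.
                if (std::max(e1, e2) < anisotropy * std::min(e1, e2)) continue;
                mask.at<uchar>(r, c) = 255;
            }
        }
        return mask;
    }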

3.3 ICP algorithm

In this part we describe the ICP algorithm [23] used to register the map and the detected lines, and then correct the localization of the vehicle. The algorithm consists of matching the detected lines to the ones stored in the map, finding the transformation that corrects the position of the vehicle in the map, and iterating after applying the transformation.

3.3.1 Matching

The inputs of the matching are the point cloud of pixels considered as lane markings and a list of local segments extracted from the map around the position given by the GPS. For each point in the point cloud we search for the closest line to the point within a predefined range, whose direction coincides with the principal component of the Hessian matrix; this allows a better match when there are lines in different directions. The range is defined so as not to match points to lines that are too far away, no more than one lane at first, and it is increased if no pixels match. This matching is then used to find the corrected position.
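As an illustration, the closest-segment search can be sketched as follows; the segment representation, the 0.3 rad orientation gate and the helper names are simplifying assumptions, not the thesis code:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Point2  { double x, y; };
    struct Segment { Point2 a, b; };

    // Closest point to p on the segment s.
    static Point2 closestOnSegment(const Point2& p, const Segment& s) {
        double vx = s.b.x - s.a.x, vy = s.b.y - s.a.y;
        double len2 = vx * vx + vy * vy;
        double t = len2 > 0 ? ((p.x - s.a.x) * vx + (p.y - s.a.y) * vy) / len2 : 0.0;
        t = std::min(1.0, std::max(0.0, t));
        return {s.a.x + t * vx, s.a.y + t * vy};
    }

    // For one ridge point, find the closest map segment within `range` whose
    // direction agrees with the ridge orientation given by the Hessian eigenvector.
    bool matchPoint(const Point2& p, double ridgeAngle, double range,
                    const std::vector<Segment>& mapSegments, Point2& match) {
        const double kPi = 3.14159265358979323846;
        double best = range * range;
        bool found = false;
        for (const Segment& s : mapSegments) {
            double segAngle = std::atan2(s.b.y - s.a.y, s.b.x - s.a.x);
            // Undirected orientation difference, folded into [0, pi/2].
            double d = std::fabs(std::remainder(ridgeAngle - segAngle, kPi));
            if (d > 0.3) continue;               // orientation gate, illustrative
            Point2 q = closestOnSegment(p, s);
            double d2 = (q.x - p.x) * (q.x - p.x) + (q.y - p.y) * (q.y - p.y);
            if (d2 < best) { best = d2; match = q; found = true; }
        }
        return found;  // the caller widens `range` and retries if nothing matched
    }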

3.3.2 Minimization

The corrected position is given by the transformation minimizing the following error:

E = \sum_{i=0}^{N-1} \left\| q^{(i)} - T_\alpha \, p^{(i)} \right\|^2, \qquad \alpha = (t_x, t_y, \theta)^t

where p^{(i)} is a point of the point cloud, q^{(i)} is the closest point on the matching segment, N is the number of points in the point cloud and T_\alpha is the 2D transformation we are looking for, with parameters t_x, t_y and \theta. T_\alpha can be written as a matrix in homogeneous coordinates:

T_\alpha =
\begin{pmatrix}
\cos(\theta) & -\sin(\theta) & t_x \\
\sin(\theta) & \cos(\theta) & t_y \\
0 & 0 & 1
\end{pmatrix}

The sum corresponds to the sum of the errors between the point cloud and the map, which is what we want to minimize. The minimization is done using the Levenberg-Marquardt algorithm, because simpler algorithms such as the Gauss-Newton algorithm may not converge: we often only have information in one direction, because locally the lane markings are parallel lines.

For a point p^{(i)} the Jacobian matrix of T_\alpha is the 2 × 3 matrix:

J^{(i)} =
\begin{pmatrix}
1 & 0 & -p_x^{(i)} \sin(\theta) - p_y^{(i)} \cos(\theta) \\
0 & 1 & p_x^{(i)} \cos(\theta) - p_y^{(i)} \sin(\theta)
\end{pmatrix}

And the global Jacobian matrix is the 2N × 3 matrix:

J =
\begin{pmatrix}
1 & 0 & -p_x^{(0)} \sin(\theta) - p_y^{(0)} \cos(\theta) \\
0 & 1 & p_x^{(0)} \cos(\theta) - p_y^{(0)} \sin(\theta) \\
1 & 0 & -p_x^{(1)} \sin(\theta) - p_y^{(1)} \cos(\theta) \\
0 & 1 & p_x^{(1)} \cos(\theta) - p_y^{(1)} \sin(\theta) \\
\vdots & \vdots & \vdots \\
1 & 0 & -p_x^{(N-1)} \sin(\theta) - p_y^{(N-1)} \cos(\theta) \\
0 & 1 & p_x^{(N-1)} \cos(\theta) - p_y^{(N-1)} \sin(\theta)
\end{pmatrix}


The correction of the transformation is:

t = \left[ J^t J + \lambda \, \mathrm{diag}(J^t J) \right]^{-1} J^t r

with r the vector of matching errors:

r =
\begin{pmatrix}
q^{(0)} - T_\alpha \, p^{(0)} \\
q^{(1)} - T_\alpha \, p^{(1)} \\
\vdots \\
q^{(N-1)} - T_\alpha \, p^{(N-1)}
\end{pmatrix}

where λ is the damping factor, adapted depending on the eigenvalues of J^t J.

Finally the updated transformation is T_{α+t}. We then iterate these two steps with the updated position. The number of iterations is a parameter allowing either a fast algorithm, if it is small, or a better convergence, if it is larger.

To ensure better convergence we add to the error the term ‖p_gps − T_α p_pos‖², where p_pos is the current position of the vehicle and p_gps is the position given by the GPS. Thus when there is no match between pixels and lines the algorithm converges toward the GPS position instead of diverging because of the increasing matching range.
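A compact sketch of one Levenberg-Marquardt step for α = (tx, ty, θ), written directly from the formulas above using Eigen (the Match structure and the fixed damping handling are assumptions of this sketch):

    #include <Eigen/Dense>
    #include <cmath>
    #include <vector>

    struct Match { Eigen::Vector2d p, q; };  // detected point p and its map match q

    // One damped update of alpha = (tx, ty, theta), minimizing E = sum ||q_i - T_alpha p_i||^2.
    Eigen::Vector3d lmStep(const std::vector<Match>& matches,
                           const Eigen::Vector3d& alpha, double lambda) {
        const double c = std::cos(alpha.z()), s = std::sin(alpha.z());
        Eigen::Matrix3d JtJ = Eigen::Matrix3d::Zero();
        Eigen::Vector3d Jtr = Eigen::Vector3d::Zero();
        for (const Match& m : matches) {
            // Residual r_i = q_i - T_alpha p_i and the 2x3 Jacobian J_i from above.
            Eigen::Vector2d Tp(c * m.p.x() - s * m.p.y() + alpha.x(),
                               s * m.p.x() + c * m.p.y() + alpha.y());
            Eigen::Vector2d r = m.q - Tp;
            Eigen::Matrix<double, 2, 3> J;
            J << 1, 0, -m.p.x() * s - m.p.y() * c,
                 0, 1,  m.p.x() * c - m.p.y() * s;
            JtJ += J.transpose() * J;   // accumulate the normal equations
            Jtr += J.transpose() * r;
        }
        // t = [J^t J + lambda diag(J^t J)]^-1 J^t r, then T_alpha <- T_{alpha+t}.
        Eigen::Matrix3d damped = JtJ;
        damped.diagonal() += lambda * JtJ.diagonal();
        return alpha + damped.ldlt().solve(Jtr);
    }

Iterating the matching and this update gives the ICP; the GPS prior of the previous paragraph can be added as one extra residual, with q = p_gps and p = p_pos, contributing two more Jacobian rows.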


4 Tests and results

This chapter presents the experimental protocol for the tests, the results of the tests and a discussion of these results.

4.1 Platform and test environment

4.1.1 Platform

The tests have been done with a Lexus LS600h (as this thesis is part of a project in partnership with Toyota), shown in figure 4.1, equipped with the following sensors:

• GPS

• IMU

• Monocular RGB camera

• Stereovision camera

• Lidars

• CAN bus

In our experiments we only use the GPS, the IMU and the monocular camera. More technical details on the equipment can be found in Appendix C.

4.1.2 Environment

The tested environment is composed of different kinds of roads, with crossroads and roundabouts, but mostly highway. The predefined route can be seen in figure 4.2 and is 11.3 km long. It is composed of highway and residential roads with crossroads and roundabouts. The tests principally take place in daylight with good weather conditions.


[Figure 4.1: Lexus. Annotated: IMU + GPS (localization), computer (online computation and data acquisition), user interface, lidar sensors and cameras (perception).]

4.1.3 Experimental protocol

The tests have been realized offboard, with a configuration equivalent to the onboard one. We first recorded the sensor data for a trip on the predefined route. Then we tested the algorithms on the recorded data.

This was done using ROS [24, 25] (Robot Operating System); more details on ROS can be found in Appendix B. ROS allows us to record the data while driving the car on the chosen route, and then to replay them on another computer to test the algorithms in the same conditions.

4.2 Map

4.2.1 Data storage

The data have to be stored on a local computer, as they need to be used in the vehicle while driving. To do so we use a piece of software released by OpenStreetMap and named OSM3S [26]. It is an API which acts as a database to which the user can send queries and get the data back as an XML file. The database can be populated with different datasets; here we populated it with the dataset of the French region Rhône-Alpes, together with our lane markings dataset for a small route around INRIA.

[Figure 4.2: Tested route]
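For illustration, a request to such a server could look as follows: a minimal Overpass XML query fetching all ways tagged as roads, plus their nodes, in a bounding box (the coordinates are placeholders):

    <osm-script>
      <!-- All ways carrying a "highway" tag inside the bounding box -->
      <query type="way">
        <has-kv k="highway"/>
        <bbox-query s="45.19" w="5.68" n="45.22" e="5.72"/>
      </query>
      <!-- Add the nodes referenced by those ways, then print the result as XML -->
      <union>
        <item/>
        <recurse type="way-node"/>
      </union>
      <print/>
    </osm-script>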

4.2.2 Discussion

The map is globally correct, but there are sometimes differences with the real road which can lead to errors in the results of the algorithm. These errors are due to the fact that the OpenStreetMap data are not always correct, as this is a user-made map. There is also an unknown during the creation of the lines: we generally do not know what the coordinates of the points constituting a road refer to. We supposed that they refer to the center of the road, but sometimes this is false, as they can refer to the center of a particular lane, depending on how the creator of the road did it.

Also some lines that exist on the road may not appear in the map, as they are not marked as roads, for example cycle ways or pedestrian paths. Another problem is the fact that the data are not always up to date; to correct this we could use an internet connection to update the map even when on the road.

4.3 Ridge detector

4.3.1 Implementation

The implementation was done in C++ using the OpenCV [27] library for image processing, and using ROS to manage the interaction between all the sensors and parts of the platform, especially the synchronization between images and inertial data. Thus the inputs are the images from the camera and the roll and pitch angles of the vehicle, and the output is a point cloud of pixels considered as lane markings.

To improve performance, the Laplacian, gradient and Hessian matrix computations were parallelized using CUDA and a GPU. This yields a real-time algorithm, as the image processing is much faster.
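As an indication of what the GPU version looks like, the same 5 × 5 Sobel filtering can be expressed with OpenCV's CUDA module (a sketch of one possible implementation, not necessarily the thesis code, which may use hand-written CUDA kernels):

    #include <opencv2/core/cuda.hpp>
    #include <opencv2/cudaarithm.hpp>
    #include <opencv2/cudafilters.hpp>

    // Compute the Laplacian of the projected image on the GPU.
    void gpuLaplacian(const cv::Mat& projected, cv::Mat& laplacianOut) {
        cv::cuda::GpuMat img, dxx, dyy, lap;
        img.upload(projected);                         // host -> device copy

        // 5x5 Sobel filters for the two second derivatives, as in section 3.2.1.
        cv::Ptr<cv::cuda::Filter> fxx =
            cv::cuda::createSobelFilter(img.type(), CV_32F, 2, 0, 5);
        cv::Ptr<cv::cuda::Filter> fyy =
            cv::cuda::createSobelFilter(img.type(), CV_32F, 0, 2, 5);
        fxx->apply(img, dxx);
        fyy->apply(img, dyy);

        cv::cuda::add(dxx, dyy, lap);                  // L = d²f/dx² + d²f/dy²
        lap.download(laplacianOut);                    // device -> host copy
    }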

4.3.2 Results

In figures 3.6 and 4.3 we can see the results of the line detection on the highway. The results are good: all lines are detected, and there are no detections where there are no lines. However lines that are too thin are not always detected, or only partially, but this is not really a problem, as having more pixels for a line does not significantly improve the results of the ICP. We can also see some aliasing on long lines; indeed they are not always straight and aligned. This is due to the fact that we only keep a one-pixel-wide line.

On residential roads the detection also works for the lines, but there are also a lot of detections that are not lane markings. For example pavements, poles or trees are often detected as lines, due to their elongated shape and their color, which is brighter than the background. We can see in figure 4.4 that lines are detected, but also objects in the background such as safety rails or trees. The results are even worse for roundabouts, because the camera does not see much of the road; indeed while entering a roundabout the road goes outside the field of view of the camera.


[Figure 4.3: Ridges detection on highway: (a) camera image, (b) projected image, (c) detected ridges.]

[Figure 4.4: Ridges detection on residential road: (a) camera image, (b) projected image, (c) detected ridges.]


4.3.3 Discussion

The ridge detector has good results on highway scenarios, but more mixed results on other types of roads.

These results are mainly due to the fact that on the highway a large part of the image is covered by the road, whereas in residential areas the camera sees more background and thus the image includes more useless information.

A way to improve these results could be to adapt the detector to detect different sizes of lines, as opposed to the current detector, which is calibrated for an average line size. Moreover a better orientation of the camera could also improve the results, because for this application we only need to see the road, and it would induce fewer errors while projecting the image.

4.4 ICP

4.4.1 Implementation

This part was also implemented in C++, using ROS to handle the point clouds and the transformations between the different frames.

4.4.2 Results

The results of the ICP to improve localization are highly dependent on the results of the ridge detector. Indeed when the ridge detector returns good results we can expect good results from the ICP, but with bad detections the ICP is most likely to diverge. Thus the results are good for highway scenarios, meaning the localization is well corrected, as the car is detected in the right lane, and less good for roads with a lot of crossroads and roundabouts.

In figure 4.5 we can see the results of the ICP in a highway scenario. It takes place on a two-lane road before it merges with another two-lane road. In the lower right corner we can see the view of the camera, and thus that we are in the rightmost lane. The green lines correspond to the lines of the map. The red dots correspond to the detected lines. The red arrow corresponds to the position given by the GPS, which puts the car on the left of the leftmost lane. The white rectangle represents the car in its corrected position; it is in the middle of the rightmost lane, which is the correct position.

On roads with a high number of lanes, like highways, the matching can be wrong by a couple of lanes: the position relative to the lane is almost always well corrected, but the offset in number of lanes can sometimes be wrong. It depends on the initial position, which is set to the GPS position at the beginning. The correction is done well when the initial position is not too far away from the real position, or when there are not too many lanes, as on an entrance road to the highway.

[Figure 4.5: ICP correction on highway]

On residential roads the results are equivalent when the ridge detector worked well. But the quality of the results drops when arriving at a crossroads or a roundabout, where the ICP is most likely to diverge due to bad line detections. It is only corrected when the number of matches is low and the algorithm converges to the GPS position, which is equivalent to a reset of the position.

4.4.3 Discussion

The ICP works well, but a few improvements can still be made. There are sometimes jumps in position, due to the fact that it is not filtered. Thus filtering could be added to smooth the variation of the corrected position and avoid some discontinuities. Another way to improve this algorithm could be to weight the errors for each point, depending on its response to the ridge detector. Another problem to correct is the longitudinal correction, the one in the direction of the road. Indeed on a straight road the lateral position is well corrected, but changing the longitudinal position of the vehicle does not impact the matching, as we match points to lines, and thus it does not impact the results either. A way to correct this could be to take into account the velocity of the car and use it to correct the longitudinal position.

We also briefly tested a different approach than the ICP to improve the results of the algorithm on residential roads. We implemented a particle filter, where each particle was evaluated using the results of the ridge detector and the matching. It improved the results on residential roads, as the errors due to bad line detections were filtered out, but it also deteriorated the results on highway. The overall results were a bit better, but a lack of precision appeared. Thus it was not kept as a viable solution, but the two approaches could be combined to improve the global method.


5 Conclusion

In this report we have presented a method to localize a vehicle on roads using visual information and an open-source map. The approach was divided into three parts corresponding to the different modules of the developed software. The first part treated the map: it analyzed the existing data from OpenStreetMap and extended them with lane marking data. The second part corresponded to the lane marking detection using the camera and a ridge detection method. The third and last part implemented an ICP algorithm to compare the detected lines with the ones stored in the map, and then return the updated localization of the vehicle.

Our results show that this method is viable. Indeed we had good results on highway, and more mixed results on other kinds of roads. This is mainly due to the quality of the road and thus the quantity of useful data seen by the camera.

5.1 Future works

The proposed algorithm can be improved and extended in many ways. We already proposed several improvements for each part of the algorithm. However other upgrades can be made to improve every part; for example we could parallelize the parts of the code which are not yet parallelized, which would result in a great gain in computation time, especially for the ICP part, where the manipulation of point clouds could easily be parallelized when applying transformations to them. Another way to enhance the overall algorithm would be to improve the map, especially the semi-automatic construction of the lines. Indeed a better map means a better localization; here we only corrected the map for crossroads and roundabouts, so there were still errors on the rest of the map.

Finally it would be interesting to develop an application using this algorithm and a controllable vehicle to make a line follower.


Bibliography

[1] Raphael Labayrade. How autonomous mapping can help a road lane detection system? In Control, Automation, Robotics and Vision, 2006. ICARCV'06. 9th International Conference on, pages 1–6. IEEE, 2006.

[2] Ignacio Parra Alonso, David Fernández Llorca, Miguel Gavilán, Sergio Álvarez Pardo, Miguel Ángel García-Garrido, Ljubo Vlacic, and Miguel Ángel Sotelo. Accurate global localization using visual odometry and digital maps on urban environments. Intelligent Transportation Systems, IEEE Transactions on, 13(4):1535–1545, 2012.

[3] Philipp Bender, Julius Ziegler, and Christoph Stiller. Lanelets: Efficient map representation for autonomous driving. In Intelligent Vehicles Symposium Proceedings, 2014 IEEE, pages 420–425. IEEE, 2014.

[4] Mordechai Haklay and Patrick Weber. OpenStreetMap: User-generated street maps. Pervasive Computing, IEEE, 7(4):12–18, 2008.

[5] OpenStreetMap wiki. http://wiki.openstreetmap.org/wiki/Main_Page. [Online; accessed 17-March-2015].

[6] Radu Danescu and Sergiu Nedevschi. Probabilistic lane tracking in difficult road scenarios using stereovision. Intelligent Transportation Systems, IEEE Transactions on, 10(2):272–282, 2009.

[7] Sergiu Nedevschi, Rolf Schmidt, Thorsten Graf, Radu Danescu, Dan Frentiu, Tiberiu Marita, Florin Oniga, and Ciprian Pocol. 3D lane detection system based on stereovision. In Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on, pages 161–166. IEEE, 2004.

[8] Jens Goldbeck and Bernd Huertgen. Lane detection and tracking by video sensors. In Intelligent Transportation Systems, 1999. Proceedings. 1999 IEEE/IEEJ/JSAI International Conference on, pages 74–79. IEEE, 1999.


[9] Yue Wang, Dinggang Shen, and Eam Khwang Teoh. Lane detection using spline model. Pattern Recognition Letters, 21(8):677–689, 2000.

[10] ZuWhan Kim. Robust lane detection and tracking in challenging scenarios. Intelligent Transportation Systems, IEEE Transactions on, 9(1):16–26, 2008.

[11] Lei Xu, Erkki Oja, and Pekka Kultanen. A new curve detection method: randomized Hough transform (RHT). Pattern Recognition Letters, 11(5):331–338, 1990.

[12] Jung Gap Kuk, Jae Hyun An, Hoyong Ki, and Nam Ik Cho. Fast lane detection & tracking based on Hough transform with reduced memory requirement. In Intelligent Transportation Systems (ITSC), 2010 13th International IEEE Conference on, pages 1344–1349. IEEE, 2010.

[13] Guoliang Liu, F Worgotter, and Irene Markelic. Combining statistical Hough transform and particle filter for robust lane detection and tracking. In Intelligent Vehicles Symposium (IV), 2010 IEEE, pages 993–997. IEEE, 2010.

[14] Dominique Gruyer, Rachid Belaroussi, and Marc Revilloud. Map-aided localization with lateral perception. In Intelligent Vehicles Symposium Proceedings, 2014 IEEE, pages 674–680. IEEE, 2014.

[15] Julius Ziegler, Henning Lategahn, Markus Schreiber, Christoph G Keller, Carsten Knoppel, Jochen Hipp, Martin Haueis, and Christoph Stiller. Video based localization for Bertha. In Intelligent Vehicles Symposium Proceedings, 2014 IEEE, pages 1231–1238. IEEE, 2014.

[16] Amaury Nègre, James L Crowley, and Christian Laugier. Scale invariant detection and tracking of elongated structures. In Experimental Robotics, pages 525–533. Springer Berlin Heidelberg, 2009.

[17] A López, J Serrat, J Saludes, C Cañero, F Lumbreras, and T Graf. Ridgeness for detecting lane markings. In Proceedings of the 2nd International Workshop on Intelligent Transportation Systems (WIT'05), 2005.

[18] A López, C Cañero, J Serrat, J Saludes, F Lumbreras, and T Graf. Detection of lane markings based on ridgeness and RANSAC. In Intelligent Transportation Systems, 2005. Proceedings. 2005 IEEE, pages 254–259. IEEE, 2005.


[19] A López, J Serrat, C Cañero, F Lumbreras, and T Graf. Robust lane markings detection and road geometry computation. International Journal of Automotive Technology, 11(3):395–407, 2010.

[20] Seung-Nam Kang, Soomok Lee, Junhwa Hur, and Seung-Woo Seo. Multi-lane detection based on accurate geometric lane estimation in highway scenarios. In Intelligent Vehicles Symposium Proceedings, 2014 IEEE, pages 221–226. IEEE, 2014.

[21] Szymon Rusinkiewicz and Marc Levoy. Efficient variants of the ICP algorithm. In 3-D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on, pages 145–152. IEEE, 2001.

[22] Thanh Hai Tran Thi and Augustin Lux. A method for ridge extraction. In 6th Asian Conference on Computer Vision 2004 (ACCV'04), volume 2, 2004.

[23] Zhengyou Zhang. Iterative point matching for registration of free-form curves. 1992.

[24] ROS website. http://www.ros.org/. [Online; accessed 17-March-2015].

[25] ROS wiki page. http://wiki.ros.org/fr. [Online; accessed 17-March-2015].

[26] OSM3S wiki page. http://wiki.openstreetmap.org/wiki/Overpass_API. [Online; accessed 17-March-2015].

[27] OpenCV website. http://www.opencv.org/. [Online; accessed 17-March-2015].


A OpenStreetMap

This appendix is an extract of the OpenStreetMap wiki, and aims to explain in more detail the structure of OpenStreetMap data.

A.1 Node

A node is one of the core elements in the OpenStreetMap data model. It consists of a single point in space defined by its latitude, longitude and node id. A third, optional dimension (altitude) can also be included: key:ele. A node can also be defined as part of a particular layer=* or level=*, where distinct features pass over or under one another, say, at a bridge. Nodes can be used to define standalone point features, but are more often used to define the shape or 'path' of a way. Over 2,000,000,000 nodes exist in the global OSM data set (as of 2013).

A.1.1 Point features

Nodes can be used on their own to define point features. When used in this way, a node will normally have at least one tag to define its purpose. Nodes may have multiple tags and/or be part of a relation. For example, a telephone box may be tagged simply with amenity=telephone, or could also be tagged with operator=*.

A.1.2 Nodes on Ways

See also: Way.

Many nodes form part of one or more ways, defining the shape or 'path' of the way. Where ways intersect at the same altitude, the two ways must share a node (for example, a road junction). If highways or railways cross at different heights without connecting, they should not share a node (e.g. a highway intersection with a bridge=*). Where ways cross at different heights they should be tagged with different layer=* or level=* values, or be tagged with location=* 'overground' or 'underground'. There are some exceptions to this rule: roads across dams are by current definition required to share a node with the waterway crossing the dam. Some nodes along a way may have tags. For example:

• highway=crossing + crossing=* to define a pedestrian crossing along a highway=*

• natural=tree to identify a lone tree on a barrier=hedge

• building=entrance to identify a doorway into a building=*

A.1.3 Structure

Name    Value
id      integer number ≥ 1
lat     decimal number ≥ −90.0000000 and ≤ 90.0000000, with 7 decimal places
lon     decimal number ≥ −180.0000000 and ≤ 180.0000000, with 7 decimal places
tags    a set of key/value pairs, with unique keys

A.2 Way

A way is an ordered list of nodes which normally also has at least one tag or is included within a relation. A way can have between 2 and 2,000 nodes, although it is possible that faulty ways with zero or a single node exist. A way can be open or closed. A closed way is one whose last node on the way is also the first on that way. A closed way may be interpreted either as a closed polyline, or as an area, or both.

A.2.1 Types of way

Open way

An open way is a way describing a linear feature which does not share a first and last node. Many roads, streams and railway lines are open ways.


Closed way

A closed way is a way where the last node of the way is shared with the first node. A closed way that also has an area=yes tag should be interpreted as an area (but the tag is not required most of the time; see the section below). The following closed ways would be interpreted as closed polylines:

• highway=* Closed ways are used to define roundabouts and circular walks.

• barrier=* Closed ways are used to define barriers, such as hedges and walls, that go completely round a property.

Area

An area (also polygon) is an enclosed filled area of territory defined as a closed way. Most closed ways are considered to be areas even without an area=yes tag (see above for some exceptions). Examples of areas defined as closed ways include:

• leisure=park to define the perimeter of a park

• amenity=school to define the outline of a school

For tags which can be used to define closed polylines, it is necessary to also add an area=yes tag if an area is desired. Examples include:

• highway=pedestrian + area=yes to define a pedestrian square or plaza.

Areas can also be described using one or more ways which are associated with a multipolygon relation.

Combined closed-polyline and area

It is possible for a closed way to be tagged such that it should be interpreted both as a closed polyline and as an area.

For example, a closed way defining a roundabout surrounding a grassy area might be tagged simultaneously as:

highway=primary + junction=roundabout, both being interpreted as a polyline along the closed way, and landuse=grass, interpreted on the area enclosed by the way.


A.3 Relation

A relation is one of the core data elements. It consists of one or more tags and an ordered list of one or more nodes, ways and/or relations as members, and is used to define logical or geographic relationships between other elements. A member of a relation can optionally have a role which describes the part that a particular feature plays within the relation.

A.3.1 Usage

Relations are used to model logical (and usually local) or geographic relationships between objects. They are not designed to hold loosely associated but widely spread items. It would be inappropriate, for instance, to use a relation to group 'all footpaths in East Anglia'.

A.3.2 Size

It is recommended to use no more than about 300 members per relation. If you have to handle more members than that, create several relations and combine them with a super-relation. Reason: the more members are stuffed into a single relation, the harder it is to handle, the more easily it breaks, the more easily conflicts can show up and the more resources it consumes at the database and server.

Note: 'super-relations' are a good concept on paper, but none of the many OSM software applications works with them.

A.3.3 Roles

A role is an optional textual field describing the function of a member of the relation. For example, in North America, role:east indicates that a way would be posted as East on the directional plate of a route numbering shield. In a multipolygon relation, role:inner and role:outer are used to specify whether a way forms the inner or outer part of that polygon.

A.3.4 Types of relation

There are many types of relation including:

• Relation:route is used to describe routes of many types, including major numbered roads like E26, A1, M6, I 80, US 53; or hiking routes, cycle routes and bus routes.


• Relation:multipolygon, used for defining larger areas such as river banks and administrative boundaries.

• Relation:boundary to exclusively define administrative boundaries

• Relation:restriction to describe restrictions such as 'no left turn', 'no U-turn', etc.

A.3.5 Examples

Multipolygon

In the multipolygon relation, the role:inner and role:outer roles are used to specify whether a member way forms the inner or outer part of the polygon enclosing an area. For example, an inner way could define an island in a lake (which is mapped as a relation).

Bus route

A bus route might have a relation with type=route, route=bus and ref=* and operator=* tags. The ways over which the bus travels would be members, along with the bus stop nodes. The ways would have role:forward or role:backward roles, depending on whether the buses operate in the direction of the way or the opposite way (or the role might be left blank, meaning the bus route uses the way in both directions).
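As a sketch, such a bus route could be encoded in OSM XML as follows (the ids and tag values are invented for the example):

    <relation id="3001">
      <member type="way"  ref="2001" role="forward"/>
      <member type="node" ref="1001" role=""/>      <!-- a bus stop -->
      <tag k="type" v="route"/>
      <tag k="route" v="bus"/>
      <tag k="ref" v="21"/>
      <tag k="operator" v="Example Transit"/>
    </relation>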

A.4 Tag

A tag consists of a 'Key' and a 'Value'. Each tag describes a specific feature of a data element (nodes, ways and relations) or changeset. Both the key and the value are free-format text fields. In practice, however, there are agreed conventions on how tags are used for most common purposes.

A key can be modified with a prefix, infix or suffix namespace to further qualify it. Common namespaces are a language specification and a date namespace specification for name keys.

A.4.1 Keys and values

Each tag has only a key and a value. Tags are written in OSM documentation as key=value.


The key describes a broad class of features (for example, highway or name). The value details the specific feature that was generally classified by the key (e.g. highway=motorway). If multiple values are needed for one key, the semi-colon value separator may be used in some situations.

Here are a few examples of how keys and values are used in practice:

• highway=residential: a tag with a key of 'highway' and a value of 'residential', which should be used on a way to indicate a road along which people live.

• name=*: a tag whose value field is used to convey the name of the particular street.

• maxspeed=*: a tag whose value is a numeric speed in km/h (or in miles per hour if the suffix 'mph' is provided). Metric units are the default (and do not need to be mentioned explicitly). Other units, such as miles per hour, knots, yards or pounds, must be stated after the value. Where a regulation is specified in a particular unit, that unit should be used within the value field.

• maxspeed:winter=*: a key that includes a namespace for 'maxspeed'; it identifies a different value for maxspeed that applies only in winter.

• name:de:19531990="Ernst-Thälmann-Straße": a name key with suffixed namespaces to specify the German name of a street which was valid from 1953 to 1990.


B ROS

This appendix describes what ROS is and how part of it works.

B.1 Robot Operating System

ROS [24] means Robot Operating System. The official website explains what it consists of: "ROS is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms." The ROS wiki page [25] proposes a more technical definition: "ROS is an open-source, meta-operating system for your robot. It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple computers."

B.2 ROS Concepts

If a robot has to accomplish a global task, this global task can be split into elementary tasks like image processing, sound processing, environment analysis, moving, etc. These tasks require computation from a computer. We call a ROS node 'a process that performs computation'. In order not to let these processes live alone, the computer needs something to inventory and link them. That is the role of the Master, a managing superstructure. Nodes can communicate with their peers. The communication works as follows: a node can publish a message (i.e. variables) on what is called a topic. Another node is free to subscribe to this topic or not; it launches an operation just after data is published on the topic. Here the publisher has the initiative. But nodes can also interact directly by using a service, that is to say a client/server structure: one client asks one server to do something and waits for its response. This is less rigid than using a topic, because the client has the initiative. With a service, a client node asks a server node directly for something and waits for the response, contrary to the first case, where the node having the information controls the reaction of the subscribers. Figure B.1 shows the relations between nodes using services or topics to communicate.

[Figure B.1: ROS concepts]
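A minimal roscpp publisher illustrates these concepts; the node and topic names are chosen for the example:

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    int main(int argc, char** argv) {
        ros::init(argc, argv, "talker");      // register this node with the Master
        ros::NodeHandle n;
        ros::Publisher pub = n.advertise<std_msgs::String>("chatter", 10);

        ros::Rate rate(1);                    // publish once per second
        while (ros::ok()) {
            std_msgs::String msg;
            msg.data = "hello";
            pub.publish(msg);                 // every subscriber to "chatter" receives it
            rate.sleep();
        }
        return 0;
    }

The corresponding subscriber registers a callback with n.subscribe("chatter", 10, callback) and then hands control to ros::spin().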


C Platform

This appendix technically describes the equipment of the platform.

C.1 Car

The experimental platform is built on a Lexus LS600h car, seen in figure 4.1 and equipped with:

• 2 IBEO Lux Lidars.

• 1 TYZX Stereo camera.

• 1 Monocular RGB camera.

• 1 GPS Xsens MTi-G Inertial sensor.

• DELL computer with GPU and SSD memory.

• CAN bus.

C.2 Sensors

C.2.1 Stereo camera

The stereo camera is a TYZX Aptina MT9V022 CMOS, and its characteristics are:

• 22 cm baseline.

• 62° HFOV.

• Depth of 1.8–23 m.

• Resolution 512 × 320 pixels.

• PCI board for the disparity calculation in real time, and Linux drivers.

35

Page 43: Visual Map-based Localization applied to Autonomous Vehicles859759/FULLTEXT01.pdf · is a fully autonomous vehicle, which allows to totally overcome human er-rors. It will be achieve

APPENDIX C. PLATFORM

C.2.2 RGB camera

The RGB camera is an IDS UI-5240CP-C color camera, and its characteristics are:

• Resolution 1280 × 1024 pixels.

• Gigabit Ethernet interface GigE.

• 50 fps max rate in Freerun mode.


www.kth.se