
Detection of Parked Vehicles from a Radar Based Occupancy Grid

Renaud Dubé1, Markus Hahn2, Markus Schütz2, Jürgen Dickmann2 and Denis Gingras1

Abstract— For autonomous parking applications to become possible, knowledge about the parking environment is required. Therefore, a real-time algorithm for detecting parked vehicles from radar data is presented. These data are first accumulated in an occupancy grid, from which objects are detected by applying techniques borrowed from the computer vision field. Two random forest classifiers are trained to recognize two categories of objects: parallel-parked vehicles and cross-parked vehicles. The performance of the classifiers is evaluated, as well as the capacity of the complete system to detect parked vehicles in real world scenarios.

I. INTRODUCTION

Object detection and scene understanding are key components in Advanced Driver Assistance Systems. In order to enable autonomous driving in semi-structured environments such as parking lots, low level mapping must be further processed to achieve a high level of awareness [1], [2]. According to Thrun [3], object maps offer several advantages over their metric and topological counterparts. Among others, they better represent situations where static objects can become dynamic, and they are more closely related to the human perception of the environment. Object maps adapt particularly well to situations where many instances of objects of the same type are present in the environment [4], which supports the application of detecting parked vehicles in a parking lot.

A parked vehicle detection algorithm could be particularly useful for applications such as autonomous valet parking. As an example, a landmark-based localization algorithm could reduce its dependence on a particular landmark, knowing that it represents a vehicle which may not always be static. Similarly, this information could be useful for collision mitigation, knowing that a vehicle, unlike a tree, can potentially move. The recognition of parked vehicles can also lead to the detection of free parking spots, as demonstrated by [2].

Previous work has been done on detecting parked vehicles from laser range data. Keat et al. [5] proposed an algorithm to extract vehicles from a SICK laser scan, assuming that vehicles are represented by an "L" shape. This assumption does not hold in dense cross-parking areas, where only the vehicles' front and rear bumpers are visible in laser data. Also, this strategy cannot distinguish between a parked vehicle and the corner of a wall. Zhou et al. [2] demonstrated that it is possible to extract vehicle bumpers from a laser scan. This solution solves the problem

1Electrical and Computer Engineering Department, Université de Sherbrooke, Sherbrooke, Canada. {renaud.dube, denis.gingras}@usherbrooke.ca

2Group Research and Advanced Engineering, Daimler AG, Ulm, Germany. {markus.hahn, markus.m.schuetz, juergen.dickmann}@daimler.com

of dense cross-parking but is not applicable in parallel-parking situations.

Sun et al. [6] give an overview of the state of the art in camera-based vehicle detection. Scharwächter et al. [7] introduce a camera-based bag-of-features approach for scene segmentation of cluttered urban traffic scenarios.

In this paper we present a real-time algorithm capable of detecting both parallel and cross-parked vehicles from radar data. A radar-based occupancy grid is built according to Elfes [8]. Parked vehicle candidates are then extracted from this grid. These candidates are described and classified in order to assert the presence of vehicles. The proposed method addresses the challenge of detecting both parallel and cross-parked vehicles. A distinction is made between cross-parked vehicles, which stand perpendicular to the lane direction, and parallel-parked vehicles, which stand parallel to it. Finally, radar sensors are considered in our work because this technology is already present in the high-end automotive market. Radar technology is also popular due to the reliable performance it offers in different weather conditions and its relatively low price compared to laser scanners.

The paper is structured as follows: Section II introduces the proposed algorithm for detecting parked vehicles from radar data. The experiment described in Section III measures the system's performance. Finally, Section IV concludes the discussion and gives a brief outlook on future work.

II. ALGORITHM DESCRIPTION

The proposed parked vehicle detection algorithm consists of four main steps: occupancy grid generation, candidate selection, feature extraction and classification.

A. Occupancy grid generation

The goal of this first step is to compute an occupancy grid, as introduced by Elfes [8], in order to accumulate the radar data over time. Our occupancy grid map Mk = {m1, m2, ..., mN} consists of N grid cells mi, which represent the environment as a 2D space with equally sized cells. For the current parked vehicle detection application, the grid cell size is fixed to 0.1 m x 0.1 m. Each cell is a probabilistic variable describing the occupancy of this cell. Assuming that Mk is the grid map at time k and that the grid cells are independent of one another, the occupancy grid map can be modeled as a posterior probability:

$$P(M_k \mid Z_{1:k}, X_{1:k}) = \prod_i P(m_i \mid Z_{1:k}, X_{1:k}) \qquad (1)$$

where P(mi | Z1:k, X1:k) is the inverse sensor model, which describes the probability of occupancy of the ith cell, given

2014 IEEE Intelligent Vehicles Symposium (IV), June 8-11, 2014. Dearborn, Michigan, USA

978-1-4799-3637-3/14/$31.00 ©2014 IEEE


the measurements Z1:k and the dynamic object states X1:k. Each measurement consists of n radar detections Zj = {z{1,j}, z{2,j}, ..., z{n,j}}.

The occupancy value of each cell is calculated by a binary Bayesian filter. In practice, the log posterior is used to integrate new measurements efficiently. Instead of performing multiplications, the use of the log odds ratio reduces the calculation to additions and avoids the instability of calculating probabilities near zero or one. The log odds occupancy grid map is formalized as:

$$L_k(m_i) = \log \frac{P(m_i \mid Z_{1:k}, X_{1:k})}{1 - P(m_i \mid Z_{1:k}, X_{1:k})} \qquad (2)$$

The recursive formulation of the map update in log odds ratio form is given by [9]:

$$L_k(m_i) = L_{k-1}(m_i) + \log \frac{P(m_i \mid Z_k, X_k)}{1 - P(m_i \mid Z_k, X_k)} - L_0(m_i) \qquad (3)$$

where Lk-1(mi) and L0(mi) are the previous and prior log odds values of grid cell i. Assuming that no prior knowledge is available, the prior probability of unknown cells is set to P0(mi) = 0.5, and the above equation produces the prior log odds ratio L0 = 0. The log odds formulation of equation (2) can be inverted to obtain the corresponding probability of map Mk. A radar-based occupancy map is displayed in Fig. 1. The map is projected into the image of our documentation camera so that one can appreciate how the radars perceive the environment.
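The per-cell log-odds update can be sketched in a few lines of Python. This is a minimal illustration of the filtering scheme of equations (2) and (3), not the authors' implementation; the 0.7 sensor value is purely illustrative.

```python
import numpy as np

def log_odds(p):
    """Probability to log odds, as in equation (2)."""
    return np.log(p / (1.0 - p))

def update_cell(L_prev, p_meas, p_prior=0.5):
    """Recursive log-odds update of one grid cell, as in equation (3).

    L_prev  -- previous log-odds value of the cell
    p_meas  -- inverse sensor model probability for the current measurement
    p_prior -- prior occupancy probability (0.5 when nothing is known)
    """
    return L_prev + log_odds(p_meas) - log_odds(p_prior)

def to_probability(L):
    """Invert the log-odds formulation back to a probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(L))

# Two consistent "occupied" measurements push the cell's probability
# above that of a single measurement, using only additions.
L = 0.0                       # prior log odds L_0 = 0, i.e. P_0(m_i) = 0.5
for _ in range(2):
    L = update_cell(L, 0.7)   # sensor reports 70% occupancy
print(round(to_probability(L), 3))   # → 0.845
```

Working in log-odds space also makes the update numerically safe for cells whose probability approaches zero or one.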

B. Object detection and candidate selection

This second step selects parked vehicle candidates which will be described and classified in later steps. Two lists of candidates are built: one for cross-parked vehicles and one for parallel-parked vehicles. This step can be further divided into three substeps: the detection of objects of interest, the estimation of these objects' directions and the final choice of candidates.

1) Object detection: In order to detect objects of interest, the map M is converted to a binary grid B using a threshold α as described in equations (4) and (5).

$$b_i = 1 \text{ if } m_i \geq \alpha \qquad (4)$$

$$b_i = 0 \text{ if } m_i < \alpha \qquad (5)$$

This grid B is clustered by an 8-connected component analysis, providing an initial list of objects of interest. Objects containing too few cells are removed from this list, since they can hardly represent a parked vehicle, leading to the list O = {O1, O2, ..., ON}.
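As a sketch of this binarization and clustering step, scipy's connected-component labeling with a full 3 x 3 structuring element implements the 8-connectivity described above; the default values of `alpha` and `min_cells` are our assumptions, not parameters reported in the paper.

```python
import numpy as np
from scipy import ndimage

def extract_objects(grid, alpha=0.65, min_cells=20):
    """Binarize an occupancy grid (equations (4)-(5)) and cluster it.

    grid      -- 2-D array of occupancy probabilities
    alpha     -- binarization threshold (illustrative value)
    min_cells -- minimum cluster size; smaller blobs can hardly be a vehicle
    """
    binary = grid >= alpha                   # b_i = 1 iff m_i >= alpha
    eight_connected = np.ones((3, 3), int)   # 8-connected neighborhood
    labels, n = ndimage.label(binary, structure=eight_connected)
    # Keep each sufficiently large cluster as an object O_i,
    # represented by the (row, col) indices of its cells.
    objects = []
    for k in range(1, n + 1):
        cells = np.argwhere(labels == k)
        if len(cells) >= min_cells:
            objects.append(cells)
    return objects
```

With the 0.1 m cell size, `min_cells=20` corresponds to roughly 0.2 m² of occupied area.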

2) Direction estimation: From the analysis of different parking scenarios, it is clear that the direction of parked vehicles is often correlated with the direction of the trajectory or with the main orientation of the lane structure. The problem of determining the vehicle direction is thus simplified by making the hypothesis that vehicles are either perpendicular or parallel to the trajectory direction. Therefore, a direction ψi which is perpendicular to the trajectory and pointing toward the object Oi is associated with every object.
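This assignment of ψi can be sketched as follows, assuming the trajectory is summarized by its closest point and heading; the interface and function name are ours, chosen for illustration.

```python
import numpy as np

def object_direction(traj_point, traj_heading, obj_centroid):
    """Direction psi_i: perpendicular to the trajectory, toward object O_i.

    traj_point   -- (x, y) of the closest trajectory point
    traj_heading -- trajectory heading at that point, in radians
    obj_centroid -- (x, y) centroid of the object's cells
    """
    left = traj_heading + np.pi / 2      # the two trajectory normals
    right = traj_heading - np.pi / 2
    to_obj = np.subtract(obj_centroid, traj_point)

    def toward(angle):
        # Projection of the object offset onto this normal direction.
        return np.dot([np.cos(angle), np.sin(angle)], to_obj)

    return left if toward(left) > toward(right) else right

# Driving along +x: an object on the left is assigned psi = pi/2.
print(object_direction((0, 0), 0.0, (5, 3)))
```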

Fig. 1. Radar occupancy grid (top) and its projection into the documentation camera image (bottom). The map colors represent the probabilities of occupancy. The driven path is plotted as a dashed white line.

Fig. 2. Cutout map ci of a cross-parked vehicle. The zoomed map is obtained by rotating an object Oi according to its direction ψi, which is perpendicular to the vehicle trajectory, represented by a green line.

3) Candidate selection: The candidate selection procedure differs for cross-parked and parallel-parked vehicles. The description is divided accordingly.

Cross-parked vehicles: In radar-based occupancy grids representing dense parking lots, most often only one part of a cross-parked vehicle is observable. Knowing the direction ψi of an object Oi, a rotation is made toward a normalized direction, yielding a cutout map ci as displayed in Fig. 2.
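A minimal sketch of this rotation step, assuming the object is given as cell indices into the grid; the padding margin is an illustrative choice, not a value from the paper.

```python
import numpy as np
from scipy import ndimage

def cutout_map(grid, cells, psi, margin=5):
    """Cut an object out of the grid and rotate it to a normalized direction.

    grid   -- full occupancy grid (2-D array of probabilities)
    cells  -- (row, col) indices of the cells of object O_i
    psi    -- estimated direction psi_i, in radians
    margin -- padding (in cells) kept around the object's bounding box
    """
    r0, c0 = np.maximum(cells.min(axis=0) - margin, 0)
    r1, c1 = cells.max(axis=0) + margin + 1
    patch = grid[r0:r1, c0:c1]
    # Rotate so that every candidate is presented to the classifiers
    # in the same orientation.
    return ndimage.rotate(patch, np.degrees(psi), reshape=True, order=1)
```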


Note that the cutout map of each candidate is oriented in this way in order to provide the classifiers with consistent inputs. In order to determine if an object is a candidate for a cross-parked vehicle, binary rules are applied to the dimensions of the cutout map ci. This decision yields a list C of Q cross-parked vehicle candidates:

$$C = \{C_1, C_2, \ldots, C_Q\} \text{ where } C_u = \{O_i, c_i\},\; i \in 1 \ldots N \qquad (6)$$

Parallel-parked vehicles: Extracting candidates for parallel-parked vehicles is more challenging since two different situations need to be considered. In the simplest form, the shape of a parallel-parked vehicle can be completely covered by a single object in the occupancy grid. Applying a similar dimension decision on the cutout map of each object gives an initial list of parallel-parked vehicle candidates:

$$P = \{P_1, P_2, \ldots, P_R\} \text{ where } P_u = \{O_i, c_i\},\; i \in 1 \ldots N \qquad (7)$$

A parallel-parked vehicle can also be represented by two distinct L-shaped objects in the grid: one for the front of the vehicle and one for the back. This situation occurs at higher driving speeds or when driving very close to the vehicle, which reduces the number of radar detections of the target. In this case, every pair of close objects is considered. The same set of dimension rules determines if the cutout maps resulting from these object pairs represent parallel-parked vehicle candidates:

$$P_u,\; u = R+1, R+2, \ldots, S \qquad (8)$$

$$\text{where } P_u = \{O_i, O_j, c_u\},\; i \neq j,\; i \in 1 \ldots N,\; j \in 1 \ldots N \qquad (9)$$

At this point, it is relevant to illustrate in Fig. 3 some cutout map samples extracted from the radar data. Note in Fig. 3 (d) that two cross-parked vehicles make a parallel-parked vehicle candidate. It is important to understand that the classifier for parallel-parked vehicles will also see instances of cross-parked vehicles and will need to classify those as "false parallel-parked vehicles". Similarly, the top left sample of Fig. 3 (b) represents a part of a parallel-parked vehicle, and the cross-parked vehicle classifier will have to assign the "false cross-parked vehicle" label to it.

C. Feature extraction

In order to determine if a candidate is effectively a parked vehicle, its cutout map first needs to be processed. The purpose of feature extraction is to compress the raw data and build candidate signatures suitable for classification. Given the input cutout map c associated with each candidate, four descriptors are computed, resulting in the feature vector f = [f1 f2 f3 f4]. Both types of candidates are described using the same set of features.

f1 Candidate size: The height h and the width w of the cutout map (in meters).

$$f_1 = \begin{bmatrix} h & w \end{bmatrix} \qquad (10)$$

Fig. 3. Cutout map samples of true cross-parked vehicles (a), false cross-parked vehicles (b), true parallel-parked vehicles (c) and false parallel-parked vehicles (d). Colors in cutout maps represent occupancy probabilities.

Fig. 4. Representation of a cross-parked vehicle cutout map c (top) and its corresponding border projection (bottom).

f2 Candidate response strength: The average value of every cell of the n x m cutout map c.

$$f_2 = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} \qquad (11)$$

f3 Candidate shape: This feature, along with f4, is used to represent the frequent "U" or "L" shape of parked vehicles in a radar-based occupancy grid. The sum of every cell of the cutout map along its horizontal dimension is computed:

$$b = \begin{pmatrix} b_1 & b_2 & \ldots & b_n \end{pmatrix} \text{ where } b_u = \sum_{i=1}^{m} c_{iu} \qquad (12)$$

The resulting vector b is sub-sampled to a vector f3 of fixed length (1 x 10) and normalized. The sub-sampling to a fixed-length vector ensures the homogeneity between features extracted from cutout maps of different sizes. Fig. 4 illustrates this feature for a cross-parked vehicle.

Fig. 5. Local orientation map of a cross-parked vehicle (top) and its corresponding candidate shape (bottom).

f4 Candidate local orientation: This feature is a measure of the orientation of the object within the cutout map. By considering the cutout map as a textured image, one can compute the local orientation at each point of the map according to Hong et al. [10].

The matrices Gx(i, j) and Gy(i, j) are first computed. They represent the gradients at every cell (i, j) of map c in the x and y axes respectively. Two 5 x 5 Gaussian gradient filters with σ = 1 are used for this purpose. Differently from [10], the local orientation is calculated at full resolution, i.e. at every cell (i, j):

$$\theta(i, j) = \tan^{-1}\left(\frac{G_y(i, j)}{G_x(i, j)}\right) \qquad (13)$$

The resulting local orientation map θ needs to be filtered due to the uncertainty in the measurements. An averaging filter cannot be directly applied to the local orientation map θ because of the nonlinearity between 0° and 180°, which both represent a horizontal orientation. The orientation map can be filtered by vector averaging [11]:

$$\theta'(i, j) = \frac{1}{2} \tan^{-1}\left(\frac{W(i, j) * \sin(2\theta(i, j))}{W(i, j) * \cos(2\theta(i, j))}\right) \qquad (14)$$

where W(i, j) is a two-dimensional low-pass filter. A 7 x 7 Gaussian filter with σ = 3 is used. Similarly to f3, the local orientation grid θ' is averaged along its horizontal axis:

$$d = \begin{pmatrix} d_1 & d_2 & \ldots & d_n \end{pmatrix} \text{ where } d_u = \frac{1}{m} \sum_{i=1}^{m} \theta'_{iu} \qquad (15)$$

Following the principle from f3, d is sub-sampled to the 1 by 10 vector f4.

An example of this feature is shown in Fig. 5. Note the nonlinearity between 0 and π observable at bins 5 and 6 of Fig. 5. The classifier presented in the following section will deal with this nonlinearity.
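The four descriptors can be sketched in Python as follows. This is an approximation of the features above, not a reimplementation: scipy's `gaussian_filter` is parameterized by σ alone rather than explicit 5 x 5 and 7 x 7 kernels, and `arctan2` stands in for the arctangent of equation (13).

```python
import numpy as np
from scipy import ndimage

CELL = 0.1    # grid resolution in meters (0.1 m x 0.1 m cells)
N_BINS = 10   # fixed length of the f3 and f4 vectors

def subsample(v, n_bins=N_BINS):
    """Average a 1-D vector into a fixed number of bins."""
    edges = np.linspace(0, len(v), n_bins + 1).astype(int)
    return np.array([v[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

def features(c):
    """Feature vector f = [f1 f2 f3 f4] of an m x n cutout map c."""
    m, n = c.shape
    f1 = np.array([m * CELL, n * CELL])   # candidate size (h, w) in meters
    f2 = c.mean()                         # candidate response strength (11)

    # f3, candidate shape: column sums, sub-sampled and normalized (12).
    b = c.sum(axis=0)
    f3 = subsample(b)
    if f3.sum() > 0:
        f3 = f3 / f3.sum()

    # f4, candidate local orientation: gradient-based orientation (13),
    # smoothed by vector averaging to handle the 0/180 degree
    # wrap-around (14), then averaged column-wise and sub-sampled (15).
    gy = ndimage.gaussian_filter(c, sigma=1, order=(1, 0))
    gx = ndimage.gaussian_filter(c, sigma=1, order=(0, 1))
    theta = np.arctan2(gy, gx)
    s = ndimage.gaussian_filter(np.sin(2 * theta), sigma=3)
    co = ndimage.gaussian_filter(np.cos(2 * theta), sigma=3)
    theta_filtered = 0.5 * np.arctan2(s, co)
    f4 = subsample(theta_filtered.mean(axis=0))

    return np.concatenate([f1, [f2], f3, f4])
```

The resulting vector has 2 + 1 + 10 + 10 = 23 components regardless of the cutout map size, which is what makes candidates of different dimensions comparable to a classifier.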

D. Classification

The candidates are classified using two trained random forest classifiers, as introduced by Breiman [12]. Two classifiers are used: one for categorizing the candidates from list C and one for categorizing the candidates from list P. The classifiers are first presented, followed by the supervised training process and the means of combining the classification results.

1) Random forest classifier: The idea behind this classifier is to construct a multitude of different decision trees and to have them vote for the winning class. Randomness should be introduced in the training of each tree in order to lessen the correlation between them. Aside from this correlation level, the classification performance of the forest is also defined by the individual strength of each tree.

According to [12], random forest offers classification performance similar to the AdaBoost algorithm. Also, it is less sensitive to noise in the output labels (such as a mislabeled candidate) since it does not concentrate its efforts on misclassified candidates. Finally, random forest classifiers can be used to evaluate the importance of particular features. In this manner, a longer list of considered features was reduced to the four features described in the previous section.

2) Training: In order to balance the correlation level and the individual strength of each tree, Breiman [12] proposes two solutions for inserting randomness during the supervised training process. First, each tree should be trained using a bootstrapped subset of the training data set. Second, every tree node should be trained using a random subset of the features.

The actual implementation of the random forest classifier is based on work from [13]. Some parameters are adjusted, including the number of trees in the forest (50) and the number of random features for training each node (2). Also, during bootstrapping of the training data, a weighted sampling is performed in order to balance the two classes. That is, candidates from a class with a lower a priori probability have a higher chance of being selected during training.

Given a set of features f, the random forest classifiers assign to each candidate of lists C and P a classification score w of being a parked vehicle.
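The training setup can be sketched with scikit-learn as a stand-in for the Matlab toolbox [13] used by the authors; the synthetic features and labels below are purely illustrative, and `class_weight` only approximates the weighted bootstrap sampling described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in training data: one 23-dimensional feature row per candidate
# (f1: 2 values, f2: 1, f3: 10, f4: 10), label 1 for a true parked vehicle.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 23))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)   # imbalanced toy labels

# 50 trees and 2 random features per split, matching the parameters
# reported in the text.
forest = RandomForestClassifier(
    n_estimators=50,
    max_features=2,
    class_weight="balanced_subsample",
    random_state=0,
).fit(X, y)

# The classification score w is the fraction of trees voting "vehicle".
w = forest.predict_proba(X)[:, 1]
```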

3) Classification results combination: In order to combine the results of the two classifiers, it is verified whether candidates with a classification score w above 50% overlap with each other. In the case where two positively classified candidates overlap, the candidate with the highest classification score is confirmed to be the parked vehicle at this position. In future work, we are interested in exploring the application of Dempster-Shafer theory to resolve this kind of ambiguity in the decision process.
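This overlap resolution can be sketched as follows, assuming each candidate is reduced to a classification score and an axis-aligned box on the grid (a representation we chose for illustration; the paper does not specify one).

```python
def combine(candidates):
    """Keep, among overlapping candidates scored above 0.5, only the best.

    candidates -- list of (score, box) pairs, where box = (x0, y0, x1, y1)
    is an axis-aligned region of the occupancy grid.
    """
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    # Consider positively classified candidates, best score first.
    positives = sorted((c for c in candidates if c[0] > 0.5), reverse=True)
    confirmed = []
    for score, box in positives:
        if not any(overlaps(box, kept) for _, kept in confirmed):
            confirmed.append((score, box))
    return confirmed

# A cross-parked and a parallel-parked hypothesis over the same cells:
# only the higher-scoring one is confirmed as the parked vehicle.
print(combine([(0.9, (0, 0, 2, 2)), (0.7, (1, 1, 3, 3))]))
# → [(0.9, (0, 0, 2, 2))]
```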


Fig. 6. Mercedes-Benz S 500 INTELLIGENT DRIVE research vehicle (top) and its radar configuration (bottom).

III. EXPERIMENT

A. Vehicle presentation

The Mercedes-Benz S 500 INTELLIGENT DRIVE research vehicle is considered for this experiment. It is equipped with four short range radar sensors at the vehicle corners, providing 360° environment perception. The sensors operate at 76 GHz and have a coverage of up to 40 m with an accuracy below 0.15 m. Each sensor's field of view is 140° with an accuracy below 1°. The research vehicle and its sensor configuration are illustrated in Fig. 6.

B. Classification performances

Radar data have been recorded during six different sequences, ranging from a drive in a parking lot crowded with cross-parked vehicles to a drive on an urban street with dispersed parallel-parked vehicles. From these data, lists of candidates have been pre-processed and carefully labeled. As a result, two data sets were created. The cross-parked vehicle set contains 1119 samples (268 true vehicles and 851 false vehicles) while the parallel-parked vehicle set

TABLE I
CROSS-PARKED VEHICLES CLASSIFICATION RESULTS

                   Classified as vehicle   Classified as non-vehicle
True vehicle       95.9% ± 2.0%            4.1% ± 2.0%
True non-vehicle   1.5% ± 0.7%             98.5% ± 0.7%

TABLE II
PARALLEL-PARKED VEHICLES CLASSIFICATION RESULTS

                   Classified as vehicle   Classified as non-vehicle
True vehicle       95.1% ± 2.0%            4.9% ± 2.0%
True non-vehicle   2.5% ± 0.9%             97.5% ± 0.9%

Fig. 7. Average ROC curves for cross-parked and parallel-parked vehicle classification.

contains 1342 samples (300 true vehicles and 1042 false vehicles). Several samples are illustrated in Fig. 3.

In order to predict the performance of the classifiers, a repeated random sub-sampling validation was performed. Each data set was randomly split into a training set (2/3) and a validation set (1/3). Classifier performance was evaluated using this division, and the process was repeated for a total of 100 rounds. This type of cross-validation offers the possibility to select the training/validation ratio. However, some samples may never be considered in the validation phase.
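The validation procedure can be sketched as follows; the synthetic features and labels stand in for the labeled candidate sets, which are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled candidate set (features + labels).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 23))
y = (X[:, 0] > 0.5).astype(int)

accuracies = []
for round_idx in range(100):   # 100 validation rounds
    # Fresh random 2/3 train, 1/3 validation split each round.
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=1 / 3, random_state=round_idx)
    clf = RandomForestClassifier(n_estimators=50, random_state=round_idx)
    accuracies.append(clf.fit(X_tr, y_tr).score(X_va, y_va))

mean_acc, std_acc = np.mean(accuracies), np.std(accuracies)
```

Unlike k-fold cross-validation, the splits are drawn independently, so the train/validation ratio is free but some samples may never land in a validation set, as noted above.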

Over these 100 rounds, the classification results were averaged and are summarized in Table I and Table II. Also, the respective averaged ROC curves were computed, as shown in Fig. 7. Finally, the average accuracy of the classifiers is 97.9% ± 0.7% for cross-parked vehicles and 96.9% ± 0.7% for parallel-parked vehicles.

C. Experiment scenarios

Fig. 8 depicts a parking scenario where both cross-parked vehicles and parallel-parked vehicles surround the trajectory. The green line represents the moving vehicle's trajectory while the red and green boxes identify parked vehicles which are detected by the system. It is interesting to note that the classifiers correctly distinguished the two cross-parked SMART vehicles from the larger parallel-parked vehicle at the top right.


Fig. 8. Scenario 1: Parked vehicle detection in a parking lot. The vehicle trajectory is represented by a green line. Red and green boxes respectively represent parallel-parked and cross-parked vehicles which are detected by the system.

Fig. 9. Scenario 2: Parked vehicle detection in an urban area. The vehicle trajectory is represented by a green line. Red and green boxes respectively represent parallel-parked and cross-parked vehicles which are detected by the system.

Fig. 9 illustrates a situation on an urban road with several street-parked vehicles. The higher vehicle speed in this scenario results in a different appearance for parallel-parked vehicles, which have a lower probability of occupancy on the side of the vehicle.

IV. CONCLUSION

This paper presented an algorithm for detecting both parallel and cross-parked vehicles from radar data. From these data an occupancy grid is built, from which candidates are extracted. Two random forest classifiers are used to assert the presence of parked vehicles in two different modes: cross-parked and parallel-parked.

During the algorithm development, two hypotheses were made. First, the parked vehicles' orientation was assumed to be correlated with the direction of the trajectory. Instead of using the trajectory as an input, one could determine the structure of the parking lot as presented by [1] and infer the vehicle orientation according to the lane orientation. This will not cover every case, but important a priori knowledge can be obtained this way. Future work will explore L-shape fitting and orientation estimation by classification in order to improve vehicle detection performance.

Second, only the parked vehicles behind the current moving vehicle were detected, in order to make use of the radars installed at the back of the vehicle. The parked vehicles were labeled after a passage of the moving vehicle so that the classifiers were trained with prolonged integrated data. This issue could be addressed in a next step by including time-dependent features as well as information concerning the positions of the objects of interest relative to the moving vehicle position.

Finally, this vehicle detection approach could be adapted to detect other common objects found in parking lots such as curbstones, posts and charging stations for HEVs. This could lead to the implementation of a cognitive map such as presented by [14]. A cognitive map could enable the vehicle to better interpret and understand the parking lot environment. As introduced, knowledge about the presence of parked vehicles could be particularly useful for several tasks of autonomous parking such as collision mitigation, localization and parking spot detection.

ACKNOWLEDGMENT

We are thankful to Daimler AG for supporting this work.

REFERENCES

[1] D. Dolgov and S. Thrun, "Autonomous driving in semi-structured environments: Mapping and planning," in Robotics and Automation, 2009. ICRA'09. IEEE International Conference on. IEEE, 2009, pp. 3407-3414.

[2] J. Zhou, L. E. Navarro-Serment, and M. Hebert, "Detection of parking spots using 2D range data," in Intelligent Transportation Systems (ITSC), 2012 15th International IEEE Conference on. IEEE, 2012, pp. 1280-1287.

[3] S. Thrun, "Robotic mapping: A survey," Exploring Artificial Intelligence in the New Millennium, pp. 1-35, 2002.

[4] D. Anguelov, R. Biswas, D. Koller, B. Limketkai, and S. Thrun, "Learning hierarchical object maps of non-stationary environments with mobile robots," in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 2002, pp. 10-17.

[5] C. Keat, C. Pradalier, and C. Laugier, "Vehicle detection and car park mapping using laser scanner," in Intelligent Robots and Systems, 2005 (IROS 2005). 2005 IEEE/RSJ International Conference on. IEEE, 2005, pp. 2054-2060.

[6] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 5, pp. 694-711, 2006.

[7] T. Scharwächter, M. Enzweiler, U. Franke, and S. Roth, "Efficient multi-cue scene segmentation," in Pattern Recognition. Springer, 2013, pp. 435-445.

[8] A. Elfes, "Using occupancy grids for mobile robot perception and navigation," Computer, vol. 22, no. 6, pp. 46-57, 1989.

[9] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, Cambridge, 2005, vol. 1.

[10] L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, no. 8, pp. 777-789, 1998.

[11] S. Chikkerur, A. N. Cartwright, and V. Govindaraju, "Fingerprint enhancement using STFT analysis," Pattern Recognition, vol. 40, no. 1, pp. 198-211, 2007.

[12] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.

[13] P. Dollár, "Piotr's Image and Video Matlab Toolbox (PMT)," http://vision.ucsd.edu/∼pdollar/toolbox/doc/index.html.

[14] S. Vasudevan, S. Gächter, V. Nguyen, and R. Siegwart, "Cognitive maps for mobile robots: An object based approach," Robotics and Autonomous Systems, vol. 55, no. 5, pp. 359-371, 2007.
