
REAL-TIME LANE DETECTION FOR AUTONOMOUS VEHICLE

Seung Gweon Jeong*, Chang Sup Kim*, Dong Youp Lee*, Sung Ki Ha*, Dong Hwal Lee*, Man Hyung Lee**, and Hideki Hashimoto***

* Dept. of Mechanical & Intelligent Systems Engineering, Pusan National University

** School of Mechanical Engineering, Pusan National University

*** University of Tokyo

Pusan National University, 30, Jangjeon-Dong, Kumjung-Ku, 609-735, Korea
Tel: 82-51-510-1456, Fax: 82-51-9835

[email protected]

[email protected]

ABSTRACT

Lane detection based on a road model or on features requires accurate acquisition of lane information from an image. It is inefficient to run a lane detection algorithm over the full range of an image on a real road in real time because of the computation time. This paper defines two search ranges for detecting the lane on a road. The first is search mode, which looks for the lane without any prior information about the road. The second is recognition mode, which reduces the size and shifts the position of the search range by predicting the lane position from the information acquired in the previous frame. This allows the edge candidate points of a lane to be extracted accurately and efficiently, without any unnecessary searching. By means of an inverse perspective transform that removes the perspective effect from the edge candidate points, we transform the edge candidate information in the Image Coordinate System (ICS) into a plan-view image in the World Coordinate System (WCS). We define a linear approximation filter and use it to remove faulty edge candidate points. This paper aims to approximate the lane of an actual road more accurately by applying the least-mean-square method to the fault-free edge information for curve fitting.

Index Terms: lane detection, inverse perspective transform, autonomous navigation

1. INTRODUCTION

As people spend ever longer hours in their cars, drivers grow tired under the current passive driving system, in which the driver issues and executes every command; as a result, many accidents occur, and traffic congestion causes further enormous damage. For these reasons, ITS (Intelligent Transportation Systems), a complex of electrical, computer, information, communication, and control technologies, has been introduced to improve the efficiency and safety of the traffic system. ITS has been studied in laboratories and universities all over the world, covering areas from traffic control and traveler information to public and express transportation. It comprises many fields, such as AHS (Advanced Highway Systems), CNS (Car Navigation Systems), and AVCS (Advanced Vehicle Control Systems). AVCS provides active safety devices and automatic driving devices; through it the driver drives in more comfortable conditions and car accidents decrease. Active safety and automatic driving devices have been studied as combinations of many sensors (vision, radar, laser, ultra-high-frequency, and infrared sensors), motors, actuators, computers, and advanced control methods. In this approach, road information is obtained with a CCD camera. From this information we detect the edges of the road, lanes, and objects, and process it for the desired purposes: lane and edge detection for an unmanned vehicle driving system, and object recognition for collision avoidance. Machine vision, covering automatic driving control and danger warning, is one of the most important techniques for intelligent vehicles.

0-7803-7090-2/01/$10.00 © 2001 IEEE

Since research in this area began, lane-departure alarm systems and unmanned driving equipment using CCD cameras have demanded strict reliability. However, unexpected changes in lighting and road conditions (shadows, tire marks, road wear, lane occlusion, etc.) have made the reliable extraction of the necessary features from image information considerably difficult [1]-[4].

Therefore, this paper aims to realize a robust and flexible lane detection algorithm. Its goal is the robust, flexible detection of lanes using a CCD camera, one of the most important elements of autonomous driving and lane-departure alarm systems, which is also used for estimating the relative velocity of vehicles. Edge points are defined from lane information, and the search range is then divided between the search mode and the recognition mode, so that the relative velocity between vehicles is easily obtained. An inverse perspective transform converting the edge information of the 2D image into a plan-view in world coordinates is used. An approximation filter is defined and applied to remove the noise elements from

ISIE 2001, Pusan, KOREA


the edge information of the road lane or boundary obtained through the inverse perspective transform. The road lane, modeled as a circular arc, is then curve-approximated from the more uniform road information with the least-squares method.

2. BACKGROUND

A. Road model

The lane model plays the role of a guide for predicting and tracking the lane edge points through the continuous input image information. Several assumptions are made to set up the road model:

1) The road surroundings are predictable.
2) The lane and boundary have continuity in time and space.
3) The road boundary is continuous; if it is interrupted, we consider that a new situation has appeared.
4) The road curvature has continuity in time; it does not change suddenly but continuously.
5) The lane and boundaries are parallel: the road lane and the boundary are parallel in the world coordinate system.

Two arcs that have a common center of lane are defined from the road characteristics above.

The position information of the previous frame makes it possible to estimate the position of the edge candidate points in the present frame from assumptions 2) and 3). If the lane cannot be found in recognition mode, the system applies the information of the previous frame to the present frame, converts to search mode, and then finds the lane again. Also, if the road curvature is large, the lane on one side can leave the image; in this case, using the lane width of the previous frame and the side whose lane candidate points can be detected, we estimate the other side, whose candidate points cannot be detected.

[Fig. 1 flowchart: Preprocessing (reduce the size of the image data, histogram equalization, Sobel operator, binarization) -> Extract the lane candidate points -> Lane fitting -> Verification -> Left/right lane position]

Fig. 1 Overview of the proposed lane recognition algorithm

B. Preprocessing

The Sobel operator is used to detect edge points from the characteristics of the road lane. In general a differential operator tends to make noise conspicuous, but the Sobel operator has a smoothing effect as well as making differences in image brightness conspicuous. In addition, histogram equalization is applied to improve the quality of the input image; its ultimate object is to give the histogram a uniform distribution. As a result the image maintains a proper brightness value: an excessively bright image becomes slightly darker and a dark image becomes brighter, and the smoothing effect of the equalized histogram works effectively on the low-contrast parts. The total contrast balance of the image is improved by amending the distribution of brightness values in the image [5].
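As a concrete illustration, the preprocessing chain described above (histogram equalization followed by the Sobel operator) can be sketched as follows. This is a minimal NumPy sketch for grayscale images, not the authors' Visual C++ implementation; the function names are ours.

```python
import numpy as np

def equalize_histogram(img):
    # Map gray levels through the normalized cumulative histogram
    # so the output histogram is approximately uniform.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return (cdf[img] * 255).astype(np.uint8)

def sobel_magnitude(img):
    # 3x3 Sobel kernels: differentiate in one direction while
    # smoothing in the other, which suppresses noise.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    f = img.astype(np.float64)
    pad = np.pad(f, 1, mode="edge")
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + f.shape[0], j:j + f.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)  # gradient magnitude, thresholded later
```

A binarization step (thresholding the gradient magnitude) would then yield the edge candidate map used by the search modes.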

C. Definition of a search range

The vision system for lane detection extracts, from the input image, the parameters needed to grasp the relative position of the vehicle and to drive autonomously. Because the quantity of image information is too large to process the whole input image in real time, it is advantageous to define a part of the image and perform the image processing on it; here we call this part the Region of Interest (ROI). In this paper, we define the search region for finding the lane edge candidate points in two ways. First, in search mode, the whole ROI is searched. Second, in recognition mode, the information of the previous frame is applied as the information for searching the edge candidate points in the present image.

[Fig. 2 sketch: search window of 320 x 300 pixels]

Fig. 2 Setup of search window in search mode
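The two-mode search range can be sketched as a simple window rule per scan row. The function name, the 20-pixel margin, and the 320-pixel window width (taken from Fig. 2) are illustrative assumptions, not values stated by the authors.

```python
def search_window(prev_lane_x, margin=20, img_width=320):
    """Return the (lo, hi) column range to scan for lane edge candidates.

    prev_lane_x: lane x position predicted from the previous frame,
                 or None when no prior information exists.
    """
    if prev_lane_x is None:
        # Search mode: no prior information, scan the whole ROI width.
        return (0, img_width)
    # Recognition mode: shrink the range around the predicted position.
    lo = max(0, int(prev_lane_x) - margin)
    hi = min(img_width, int(prev_lane_x) + margin)
    return (lo, hi)
```

When the lane is lost in recognition mode, the caller would pass `None` again, which reproduces the fallback to search mode described above.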

D. Inverse perspective transform

The perspective effect of the input image can be removed by using the inverse perspective transformation, and the position information of the image plane can be transformed into the world coordinate system. We can then easily apply road assumptions 1) and 2) to the lane detection algorithm. Moreover, lane position information represented in the world coordinate system has the advantage that the relative position, defined by the tangential distance from the lane center to the origin of the vehicle, and the direction of the vehicle can be expressed simply. The equations of the perspective transformation are as follows [6].


$$X=\frac{A(-x\cos\theta\sin\alpha+f\sin\alpha)-B\cos\theta(-y\sin\alpha+f\cos\alpha)}{f(y\sin\alpha-f\cos\alpha)}\qquad(1)$$

$$Y=\frac{B\sin\theta(y\sin\alpha-f\cos\alpha)-A(x\sin\theta\sin\alpha+f\cos\alpha)}{f(y\sin\alpha-f\cos\alpha)}\qquad(2)$$

where A and B are as follows:

$$A=yZ_0\cos\alpha+yf+fZ_0\sin\alpha,\qquad B=xZ_0\cos\alpha+xf\qquad(3)$$

Fig. 3 shows an outline of the inverse perspective transformed data compared with the original data.

(a) Input image (b) Acquired edge points (c) Transformed edge points (d) Plan-view

Fig. 3 The effect of inverse perspective transform

Fig. 4 shows the inverse perspective transformation of a straight lane.
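To make the flat-road mapping concrete, the following sketch maps an image point to ground-plane world coordinates using a standard pinhole/flat-road derivation. It is an illustrative substitute, not a verbatim transcription of the paper's Eqs. (1)-(3); the parameter names are ours.

```python
import math

def inverse_perspective(x, y, f, h, tilt):
    """Map an image point (x, y) to ground-plane coordinates (X, Y),
    assuming a flat road and zero pan.

    x, y : pixel offsets from the principal point (y positive downward)
    f    : focal length in pixels
    h    : camera height above the road
    tilt : downward tilt of the optical axis, in radians
    """
    # Rotate the pixel ray (x, y, f) by the tilt and intersect it
    # with the ground plane, a height h below the camera.
    denom = y * math.cos(tilt) + f * math.sin(tilt)
    if denom <= 0:
        raise ValueError("point is at or above the horizon")
    s = h / denom                                      # ray scale at the ground
    Y = s * (f * math.cos(tilt) - y * math.sin(tilt))  # forward distance
    X = s * x                                          # lateral offset
    return X, Y
```

For the principal point (`y = 0`) this reduces to the familiar look-ahead distance `h / tan(tilt)`, and points lower in the image map to closer ground points, as the plan-views in Figs. 3 and 4 show.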

E. Application of a linear approximation filter

Because the edge points extracted in the initial search mode are found by searching both sides of the defined boundary without previous road information, they are influenced by noise elements such as shadows, back light, etc. Because the lane and boundary positions are predicted in recognition mode, the noise elements can be reduced by shrinking the search boundary, but in this case some false information, such as road surface damage or lane occlusion, is still included. Because non-uniform edge information causes large errors in the lane estimation, these points need to be removed. In this paper, we obtain more uniform edge information by defining the following linear approximation filter

to eliminate these noise elements. We define the vertical slope $g_i$, the first difference $\Delta x_i = x_i - x_{i-1}$, and the second difference $\Delta^2 x_i = \Delta x_i - \Delta x_{i-1}$; with these we remove the edge points that are influenced by noise or are otherwise unwanted. Finally we define equation (5), the approximation function of the real lane and boundary edges ($w$ is a constant with $0 < w < 1$):

$$\psi_i = \tan^{-1}(g_i)\qquad(4)$$

$$x_i = (x_i - \Delta x_i) + w\,\Delta x_i\qquad(5)$$

Applying a critical value to the gradient angle defined by equation (4), we remove the unwanted edge points according to assumption 2), and by equation (5) we substitute approximated values for the original edge points.

(a) Acquired edge points (b) The value of gradient angle (c) Detection of fault edges (d) Processed edge points

Fig. 5 shows the application result of the linear approximation filter.
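The filtering step above can be rendered as a simple one-dimensional pass over the edge x positions (one per scan row). This is an illustrative simplification under our own assumptions: the angle threshold, the blending weight `w`, and the unit row spacing are all hypothetical choices, not values from the paper.

```python
import math

def linear_approximation_filter(xs, w=0.5, angle_thresh_deg=20.0):
    """Reject outlier edge points via the gradient angle (Eq. (4) idea),
    then blend accepted points toward the previous point (Eq. (5) idea).

    xs: lane-edge x positions, one per scan row (row spacing assumed 1).
    """
    out = [xs[0]]
    for i in range(1, len(xs)):
        dx = xs[i] - xs[i - 1]                 # first difference, Delta x_i
        psi = math.degrees(math.atan(dx))      # gradient angle
        if abs(psi) > angle_thresh_deg:
            out.append(out[-1])                # faulty point: hold the estimate
        else:
            out.append((xs[i] - dx) + w * dx)  # damped update: x_{i-1} + w*dx
    return out
```

On a straight run of points with one outlier, the outlier is rejected and the remaining points are smoothed, which mirrors the progression from panel (a) to panel (d) of Fig. 5.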

F. Curve approximation

The Curve approximation guaranteed the flexibility and the course

of loolung for m ulti-order equation that passed most near to the

points.

The representative curve approximation method is the least mean

square method.

x =a, + a 2 Y + a , Y 2
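The least-squares fit of Eq. (6) to the noise-filtered edge points can be sketched directly with NumPy; the function name is ours, and `Y`/`X` are world-coordinate arrays such as those produced by the inverse perspective transform.

```python
import numpy as np

def fit_lane_curve(Y, X):
    """Least-squares fit of x = a1 + a2*Y + a3*Y**2 (Eq. (6)).

    Y, X : world-coordinate arrays of the filtered edge points.
    Returns the coefficient vector [a1, a2, a3].
    """
    # Design matrix with columns [1, Y, Y^2]; the normal-equation
    # solution minimizes the sum of squared residuals.
    M = np.column_stack([np.ones_like(Y), Y, Y**2])
    coeffs, *_ = np.linalg.lstsq(M, X, rcond=None)
    return coeffs
```

Because the model is linear in the coefficients, this is a small, fast computation per frame, consistent with the paper's emphasis on real-time processing.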

3. EXPERIMENTAL RESULT

A. Organization of system and calibration



This method is a simple algorithm and has the merit of fast computation speed. Using the previously recorded road model, the edge points from which the noise components have been removed are approximated by a second-order curve for the road lane and boundary. If the circle equation of the road model is expressed as a second-order equation, Eq. (6) results, and the curve is approximated accordingly. To photograph the road image, a CCD camera is mounted at the center of the experimental vehicle and connected to a video recorder that stores the captured images. The specifications of the CCD camera and lens are given in Table 1. The input image has a size of 320 x 240, and the algorithm and software are implemented in Visual C++ on a 586 PC. The total organization of the system is shown in Fig. 6. To know the relative position of the vehicle on the road correctly, calibration should be carried out. In the course of calibration, the intrinsic parameters of the camera and the extrinsic parameters of the coordinate transform should be well defined.

As the errors due to the intrinsic parameters are small, the manufacturer's parameter values are used and those errors are ignored. The camera intrinsic parameters are given in Table 1. In this work, only the extrinsic parameters of the coordinate transform are considered, and the calibration is carried out accordingly; the extrinsic parameters are given in Table 2. Errors arise from several factors in the course of transforming 2D image coordinates into 3D world coordinates by use of the intrinsic and extrinsic parameters.

Table 1 Specifications of the CCD camera and lens (aperture, focal length, unit cell size, chip size/resolution): chip size 768(H) x 494(V), unit cell size 8.4 um(H) x 9.8 um(V); the remaining entries are illegible in the source.

To compensate for the errors, the real length and width of the road are measured and compared with the values calculated by the proposed algorithm, and calibration is then carried out. Table 3 and Table 4 compare the errors before calibration with those after calibration, using the extrinsic parameters. Fig. 7 describes the simple calibration method.

Lookahead distance | Before calibration | Error | Error rate (%)
11.58 | 20.769 | 9.189 | 79.35
21.40 | 37.338 | 15.398 | 74.47
28.36 | 48.107 | 19.747 | 69.63

Fig. 6 Schematic diagram of the system setup

[Table 3 fragments recovered from the source: after calibration 3.1555, 3.15, 3.144; before calibration 2.685 (error -0.47, -14.9 %), 2.71 (error -0.44, -13.96 %), 2.742 (error -0.42, -12.78 %)]

World Coordinate System
Fig. 7 Processing of the calibration and the extrinsic parameters of a camera in the vehicle-relative coordinate system

Table 2 Extrinsic parameters
Height (H) | Pan angle (θ) | Tilt angle (α)
1.68 m | 180 degrees | 91 degrees

(a) Straight road (b) Plan-view (c) Curved road (d) Plan-view

Fig. 8 Lane detection in the daytime

Table 3 The results of the calibration
(a) 91 degrees


[Table 3 fragments: after calibration 3.212 (error 0.062, 1.96 %), 3.195 (error 0.055, 1.74 %)]

B. Performance on the road

(b) The road with bad condition on the road surface

In order to generalize the lane detection, we captured images while driving on an expressway or a national road at 80-100 km/h without imposing any constraints on lighting, shadows, moving vehicles, etc. The experiments covered various roads and road conditions as a reliability test for the lane detection.

The following figures show the results of recognizing a road lane or boundary with the proposed algorithm. Fig. 8(a) shows the recognition result when one side line of an expressway is broken periodically. Fig. 8(b) shows the result of the inverse perspective transform of the recognized lane. Fig. 8(c) and Fig. 8(d) show the lane detection on a curved road and the effect of the inverse perspective transform, respectively. Fig. 9(a) shows the result of lane recognition for a road with large curvature, and Fig. 9(b) shows the result under bad road-surface conditions. Fig. 9(c), lane recognition in a tunnel, and Fig. 9(d), lane recognition with characters painted on the road surface at night, also show good results. We also obtain correct lane recognition under the shadows of trees and vehicles, or with a piecewise lane, at a processing speed of over 20 frames/sec.

(a) The road with large curvature (c) Lane detection in a tunnel (d) Lane detection in the nighttime

Fig. 9 Results of lane detection

4. CONCLUSION

This paper makes hypotheses about the characteristics of a road and, considering these hypotheses, recognizes a lane or road boundary more robustly by setting a changeable search range. We showed satisfactory performance under general conditions such as a curved road, the shadows of trees and vehicles, a piecewise lane, or characters painted on the road surface. By removing the perspective effect with the inverse perspective transformation, we can easily check the relative position of the vehicle on the road. However, if one side lane leaves the image frame when the vehicle turns on a sharply curved road, the algorithm cannot recognize that lane correctly, although it recognizes the one side lane again quickly. Hereby, we


can see that the proposed algorithm satisfies the requirement of real-time processing, proves the robustness of the recognition algorithm, and sufficiently shows the possibility of application to unmanned driving or a lane-deviation alarm system.

5. ACKNOWLEDGMENT

This work was supported in part by ERC/Net Shape and Manufacturing and in part by the Brain Korea 21 Project.

6. REFERENCES

[1] Chen, M., Jochem, T., and Pomerleau, D., "AURORA: a vision-based roadway departure warning system," Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, pp. 243-248, 1995.

[2] Pomerleau, D., "RALPH: rapidly adapting lateral position handler," Proceedings of the 1995 Intelligent Vehicles Symposium, Detroit, USA, pp. 506-511, 1995.

[3] Broggi, A., "A Massively Parallel Approach to Real-Time Vision-Based Road Marking Detection," Proceedings of the 1995 Intelligent Vehicles Symposium, Detroit, USA, pp. 84-89, 1995.

[4] Dickmanns, E. D., and Mysliwetz, B. D., "Recursive 3-D Road and Relative Ego-State Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 199-213, 1992.

[5] Gonzalez, R. C., and Woods, R. E., Digital Image Processing, Addison-Wesley, 1992.
