
Science, Engineering & Education, 3, (1), 2018, 23-28

Computer vision for self-driving vehicles

Phuong Thao Cao*, Hau Nguyen Thi

Faculty of Civil Engineering, University of Transport and Communications, Hanoi, Vietnam

ABSTRACT

Identifying lanes and obstacles is one of the most difficult problems in self-driving vehicles. Any change in road conditions, and even in lighting, affects lane and obstacle identification. This paper presents a computer-vision technique for detecting lanes and obstacles in self-driving vehicles. The obstacles and lanes are determined by edge detection and the Hough algorithm. The experimental results show that the system accurately identifies lanes and road obstacles in real time.

Keywords: self-driving vehicles, computer vision, Hough transform, lane detection.

Received 17 April 2018, Accepted 21 May 2018

*Correspondence to: Phuong Thao Cao, Faculty of Civil Engineering, University of Transport and Communications, No 3 Cau Giay str., Lang Thuong Ward, Dong Da District, Hanoi, Vietnam, E-mail: [email protected]

INTRODUCTION

Automatic recognition technology, including self-driving vehicles, is one of the latest technology trends and has been receiving the attention of many scientists and technology companies around the world. Google has introduced a self-driving car using Lidar technology, which uses lasers to map the surrounding terrain in 3D. This map informs the traffic information system about lights, lanes and obstacles. Other big technology companies, such as Apple, Toyota and GM, have also developed self-driving vehicles that rely on radar to recognize obstacles. VisLab has developed several vehicle systems, including ARGO, TerraMax and BRAIVE [1, 2]. Their systems can detect obstacles, lane markings, ditches and berms, and identify the presence and position of a preceding vehicle [3]. Another self-driving project, PROUD, can run on urban roads using a map with maneuver information such as pedestrian crossings and traffic lights [4]. The V-Charge project introduces an electric automated car outfitted with close-to-market sensors; a fully operational system is proposed, including vision-only localization, mapping, navigation and control [5].

Computer vision is a field of computer science that operates by modeling input objects, enabling a system to automatically recognize objects in an image or a sequence of images in the same way that human eyesight


can do. In self-driving vehicles, given the image sequence obtained from the acquisition equipment on the vehicle, a system using computer vision algorithms performs analysis and determines the necessary objects, such as lanes and obstacles, to create the sets of instructions that lead the controller to operate the vehicle. Fig. 1 shows the model and processing scheme of the recognition system in self-driving vehicles.

In computer vision systems, object detection is the most important task for object recognition applications. Once an object has been identified, many different tasks can be performed, such as object recognition, object tracking and object feature extraction in different contexts. Most object detection systems rely on the same basic principle of sliding windows to find objects that appear in the image at any location or scale [6, 7]. Researchers have also developed a variety of object-detection methods, which can be divided into three main groups: the first is based on bag-of-words [8] and typically checks for the existence of an object or repeatedly searches the image domain that contains it; the second searches the regions most likely to contain an object [9]; the third finds key points and matches them to the object being searched for [10]. In this paper, we present techniques for detecting objects such as lanes and obstacles based on the Hough and edge detection algorithms.

MAIN CONTENTS

The most important parts of a self-driving vehicle are identification and route control. The system consists of cameras attached to the vehicle, which capture images that are analyzed to recognize paths and obstacles. The information about pathways and obstacles is passed to the driving controller so the vehicle keeps to the correct lane and avoids obstacles. This paper presents two main methods for lane and obstacle detection, based on edge detection with the Hough transform and on background subtraction. The general algorithm is illustrated in Fig. 2.

Edge detection

To ensure real-time processing and increased accuracy, the image sequences captured by the camera on the vehicle are first converted to gray-scale. From this image, the system performs edge detection to identify all objects along the way, including lanes and obstacles. A boundary in an image can generally be defined as the contour separating adjacent image regions with relatively distinct characteristics; one such characteristic is a sudden change in gray level. One of the most efficient image-processing algorithms commonly used for this is the Canny algorithm [11].

Fig. 1. Structure and diagram of the recognition system in self-driving vehicles.


The Canny operator uses a two-dimensional Gaussian distribution function for the image smoothing filter:

$$G(x, y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

where $\sigma$ is the standard deviation of the Gaussian function, which controls the degree of smoothing. Convolving the filter template with the original image produces the smoothed image:

$$I(x, y) = G(x, y) * f(x, y)$$
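The smoothing step can be sketched in Python with NumPy; the 5x5 kernel size and sigma = 1.4 below are illustrative choices, not values prescribed by the paper:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sampled, normalized 2-D Gaussian G(x, y)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()  # normalize so overall brightness is preserved

def smooth(image, size=5, sigma=1.4):
    """I(x, y) = G(x, y) * f(x, y): convolve the image with the kernel."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Because the kernel is symmetric, this correlation loop is identical to convolution; a production system would use a separable or library-provided filter instead of the explicit double loop.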

Next, the gradient is calculated by partial derivatives. The Canny operator adopts finite differences of the first-order partial derivatives to calculate the magnitude $M(i, j)$ and direction $\theta(i, j)$ of the gradient:

$$M(i, j) = \sqrt{f_x(i, j)^2 + f_y(i, j)^2}$$

and

$$\theta(i, j) = \arctan\frac{f_y(i, j)}{f_x(i, j)}$$

where $f_x(i, j)$ and $f_y(i, j)$ are the partial derivatives in the x and y directions, respectively. $M(i, j)$ reflects the edge strength of

Fig. 2. Lane and object detection algorithm.

Fig. 3. Calculation model of Canny detection.


the image and $\theta(i, j)$ reflects the direction of the edge. Each pixel is compared with the two adjacent pixels along the gradient direction: if the gradient magnitude of the pixel is larger than that of both neighbors, the pixel is marked as a possible edge point; otherwise its gradient magnitude is set to 0. Finally, a dual-threshold method is used to detect and connect edges. The magnitude and direction defined by the formulas above are illustrated in Fig. 3.
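A minimal sketch of the finite-difference gradient computation (magnitude and direction only; non-maximum suppression and the dual threshold are omitted for brevity):

```python
import numpy as np

def gradients(image):
    """First-order finite differences, then M(i, j) and theta(i, j)."""
    f = image.astype(float)
    fx = np.zeros_like(f)
    fy = np.zeros_like(f)
    fx[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal difference f_x
    fy[:-1, :] = f[1:, :] - f[:-1, :]   # vertical difference f_y
    magnitude = np.sqrt(fx ** 2 + fy ** 2)   # edge strength M(i, j)
    direction = np.arctan2(fy, fx)           # edge direction theta(i, j)
    return magnitude, direction
```

`arctan2` is used instead of a plain `arctan` quotient so the direction is well defined even where $f_x(i, j) = 0$.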

Lane detection

The edge detection algorithm finds edges based on large changes in brightness in the image. However, in the detected edge image the lanes contain many missing segments, as in Fig. 4, which leads to errors in calculating the lane center. The missing points between the two lane lines therefore need to be connected. In this paper, we use the Hough algorithm to redraw the lanes [12]. The Hough transform is a technique that can

be used to isolate features of a particular shape in a binary image. In the lane detection problem, the Hough transform is used to determine straight lines in the binary lane image produced by the Canny method. For the Hough transform, a line is represented in a polar coordinate system; the line equation can be written as:

$$y = -\left(\frac{\cos\theta}{\sin\theta}\right)x + \frac{r}{\sin\theta}$$

with

$$r = x\cos\theta + y\sin\theta$$

Thus, for each point $(x_0, y_0)$ we can define the family of lines passing through that point as $r = x_0\cos\theta + y_0\sin\theta$, which traces a sinusoid in the $(r, \theta)$ plane, as shown in Fig. 5b. Using the Hough transform, the lines are redrawn as

Fig. 4. Edge detection (a) and the missing points in the lane (b).


Fig. 5. Line representation in terms of r and θ.


in Fig. 6.
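The voting scheme described above can be sketched as a NumPy accumulator over a discretized $(r, \theta)$ grid; the one-degree angular resolution is an illustrative choice:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Each edge pixel (x, y) votes for every line
    r = x*cos(theta) + y*sin(theta) that passes through it."""
    h, w = binary.shape
    r_max = int(np.ceil(np.hypot(h, w)))          # largest possible |r|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    accumulator = np.zeros((2 * r_max + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)                    # edge pixel coordinates
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            r = x * np.cos(theta) + y * np.sin(theta)
            accumulator[int(round(r)) + r_max, t_idx] += 1  # offset so r >= -r_max fits
    return accumulator, thetas, r_max
```

A peak in the accumulator corresponds to a line supported by many edge pixels. In practice, an optimized library routine such as OpenCV's `cv2.HoughLines` or `cv2.HoughLinesP` would be used rather than this explicit double loop.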

Obstacle detection

Obstacles on the road are determined by the background subtraction method. The current frame is compared with the same scene excluding any objects; subtracting the two images yields their difference, which highlights the areas with significant change and hence identifies the object regions.

This paper uses a background subtraction method based on the Gaussian Mixture Model (GMM) [13]. In the GMM, every pixel in a frame is modeled by a mixture of Gaussian distributions. First, every pixel is described by its intensity in RGB color space. The probability that a pixel belongs to the foreground or the background is then calculated as:

$$P(X_t) = \sum_{i=1}^{K} w_{i,t} \cdot \eta(X_t, \mu_{i,t}, \Sigma_{i,t})$$

where $X_t$ is the current pixel in frame $t$, $K$ is the number of distributions, $w_{i,t}$ is the weight of the $i$-th distribution at time $t$, and $\mu_{i,t}$ and $\Sigma_{i,t}$ are the mean and covariance of the $i$-th distribution in frame $t$. $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the probability density function:

$$\eta(X, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(X - \mu)^T \Sigma^{-1}(X - \mu)\right)$$
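For a single gray-scale intensity the mixture probability reduces to a sum of 1-D Gaussians; the sketch below uses that scalar case for brevity (the paper models RGB intensities, where the components would be multivariate):

```python
import numpy as np

def gmm_pixel_probability(x, weights, means, variances):
    """P(X_t) = sum_i w_i * eta(X_t; mu_i, var_i) for a scalar intensity."""
    p = 0.0
    for w, mu, var in zip(weights, means, variances):
        # 1-D Gaussian density eta(x; mu, var)
        eta = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        p += w * eta
    return p
```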

Every Gaussian whose weight is bigger than the predefined threshold is classified as background; any distribution not included in this category is classified as

Fig. 6. The lane drawing results using the Hough algorithm.

Fig. 7. The four parts used to detect objects (a) and the object detection result (b).


foreground:

$$B = \arg\min_b\left(\sum_{i=1}^{b} w_{i,t} > T\right)$$

In the self-driving vehicle, the camera is

moving and a sequence of frames is captured. Each frame is divided into four parts, as shown in Fig. 7a; the top part is used to identify objects. The background subtraction algorithm is applied to the frame, yielding a binary image in which the object appears as white pixels. If the sum of the object's pixels is greater than a predefined threshold, it is considered an object. If the distance from the object to the two lanes is greater than the vehicle's width, the center point between the object and the lane is recalculated. The result of object detection is illustrated in Fig. 7b.
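The background-selection rule and the top-part obstacle check can be sketched as follows; the horizontal-quarter split and the threshold values are assumptions for illustration, since the paper does not specify the exact geometry:

```python
import numpy as np

def background_components(weights, T):
    """B = argmin_b (sum of the b largest weights > T): indices of the
    fewest highest-weight distributions whose total weight exceeds T."""
    order = np.argsort(weights)[::-1]      # heaviest components first
    total, chosen = 0.0, []
    for idx in order:
        chosen.append(int(idx))
        total += weights[idx]
        if total > T:
            break
    return chosen

def obstacle_in_top_part(foreground_mask, pixel_threshold):
    """Report an obstacle when the count of white (foreground) pixels in
    the top quarter of the binary mask exceeds a predefined threshold."""
    top = foreground_mask[: foreground_mask.shape[0] // 4, :]
    return int(top.sum()) > pixel_threshold
```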

CONCLUSIONS

This paper presents an overview of self-driving vehicles built around two main algorithms: lane detection and obstacle detection. In lane detection, the Hough algorithm reconstructs the straight lines from the broken lines produced by edge detection. By dividing the frame into four parts, obstacles are detected early enough to leave time to steer the vehicle around them. The results demonstrate the accuracy and real-time performance of the algorithms.

REFERENCES

1. D. Braid, A. Broggi, G. Schmiedel, The TerraMax autonomous vehicle, Journal of Field Robotics (JFR), 2006.

2. P. Grisleri, I. Fedriga, The BRAIVE platform, In IFAC, 2010.

3. M. Bertozzi, L. Bombini, A. Broggi, M. Buzzoni, E. Cardarelli, S. Cattani, P. Cerri, A. Coati, S. Debattisti, A. Falzoni, R. I. Fedriga, M. Felisa, L. Gatti, A. Giacomazzo, P. Grisleri, M. C. Laghi, L. Mazzei, P. Medici, M. Panciroli, P. P. Porta, P. Zani, P. Versari, VIAC: an out of ordinary experiment, in Proc. IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 175-180.

4. A. Broggi, P. Cerri, S. Debattisti, M.C. Laghi, P. Medici, D. Molinari, M. Panciroli, A. Prioletti, PROUD - public road urban driverless-car test, IEEE Trans. on Intelligent Transportation Systems (TITS), 16, 2015, 3508-3519.

5. P.T. Furgale, U. Schwesinger, M. Rufli, W. Derendarz, H. Grimmett, P. Muhlfellner, S. Wonneberger, J. Timpner, S. Rottmann, B. Li, B. Schmidt, T. Nguyen, E. Cardarelli, S. Cattani, S. Bruning, S. Horstmann, M. Stellmacher, H. Mielenz, K. Koser, M. Beermann, C. Hane, L. Heng, G.H. Lee, F. Fraundorfer, R. Iser, R. Triebel, I. Posner, P. Newman, L.C. Wolf, M. Pollefeys, S. Brosig, J. Effertz, C. Pradalier, R. Siegwart, Toward automated driving in cities using close-to-market sensors: An overview of the V-Charge project, in Proc. IEEE Intelligent Vehicles Symposium (IV), 2013.

6. N.I. Glumov, E.I. Kolomiyetz, V.V. Sergeyev, Detection of objects on the image using a sliding window mode, Optics & Laser Technology, 27, (4), 1995, 241-249.

7. J. Lee, J. Bang, S.-I. Yang, Object detection with sliding window in images including multiple similar objects, Information and Communication Technology Convergence (ICTC), 2017.

8. M. Aly, M. Munich, P. Perona, Bag of Words for Large Scale Object Recognition - Properties and Benchmark, 2011, 299-306.

9. T. Deselaers, B. Alexe, V. Ferrari, Localizing objects while learning their appearance, Proc. Eur. Conf. Comput. Vis. (ECCV), 2010, 452-466.

10. W.M.D.B. Wan Zaki, A. Hussain, M. Hedayati, J. Image Video Proc., 2011, https://doi.org/10.1186/1687-5281-2011-13.

11. J. Canny, A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8, (6), 1986, 679-698.

12. D. Ballard, C. Brown, Computer Vision, Prentice-Hall, 1982.

13. Thierry Bouwmans, Fida El Baf, Bertrand Vachon, Background Modeling using Mixture of Gaussians for Foreground Detection - A Survey, Recent Patents on Computer Science, Bentham Science Publishers, 1, (3), 2008, 219-237.