
Scientia Iranica B (2015) 22(6), 2126-2133

Sharif University of Technology
Scientia Iranica

Transactions B: Mechanical Engineering
www.scientiairanica.com

Lane departure warning system using front-view and two mirror-view cameras

J.-S. Sheu*, C.-H. Chang, T.-L. Huang and H.-J. Lin

Department of Computer Science, National Taipei University of Education, Taipei 10671, Taiwan, ROC.

Received 22 May 2014; received in revised form 15 April 2015; accepted 16 June 2015

KEYWORDS: Lane departure warning system; Advanced driver assistance system; Mirror-view cameras; Image processing.

Abstract. This study proposes a vehicle Lane Departure Warning System (LDWS) that sends a warning to drivers using the road marking recognition system of two cameras installed on the left and right sides of the vehicle. The LDWS aims to solve various problems that may arise when driving on the road, chiefly unexpected conditions. The system generates warnings by capturing images continuously with the cameras, identifying the movement direction of the vehicle from the left and right road markings, and predicting its driving direction. Two cameras are installed on the right and left side mirrors of the vehicle to promote Advanced Driver Assistance Systems (ADAS). The left mirror-view camera detects movement in the right lane, whereas the right mirror-view camera detects movement in the other. These two cameras detect the movement of a vehicle using algorithms and send warning signals to the driver independently. Our algorithm combines these signals to analyze environmental conditions. The algorithms used include brightness adjustment, binarization, dilation, erosion, and edge detection image processing techniques. We tested the system on a LUXGEN M7 to verify the feasibility of our proposed design.
© 2015 Sharif University of Technology. All rights reserved.

1. Introduction

With an increasing number of individuals showing a growing dependence on transport, it clearly serves an important function in our daily lives. However, such increasing dependence has also increased traffic accident occurrence and fatality rates. The Fatality Analysis Reporting System data of the National Highway Traffic Safety Administration (NHTSA) show that traffic accidents caused by driver negligence and unintentional lane deviations account for 41% of all traffic accidents in the United States, as shown in Figure 1 [1]. To prevent these unintentional lane

*. Corresponding author. Tel.: +886 2-27321104; Fax: +886 2-27375457
E-mail addresses: [email protected] (J.-S. Sheu); [email protected] (C.-H. Chang); [email protected] (T.-L. Huang); [email protected] (H.-J. Lin)

deviations, as well as to reduce traffic accidents and casualty rates, this paper proposes a Lane Departure Warning System (LDWS) that alerts and guides the driver in navigating his or her vehicle when facing such a situation.

Driver assistance systems reduce vehicle crashes by helping to avoid such accidents preemptively. A Lane Departure Detection System (LDS) is an alarm system that helps drivers avoid switching lanes unintentionally. Many vehicle roadway departure crashes occur during light traffic and favorable weather conditions. A survey by Mercedes-Benz suggests that 90% of road collisions could be avoided if the driver were effectively warned one second before the accident [2]. These accidents can be effectively prevented by installing an active safety system in the vehicle to warn the driver of upcoming danger. Many automobile manufacturers have installed the LDWS in their vehicles to reduce the frequency of accidents caused by the inattention


Figure 1. NHTSA data.

of the driver. The main function of LDWS is to warn and allow the driver to take responsive measures in advance when his or her vehicle is about to depart from the correct lane, hence preventing vehicular accidents caused by unintentional lane departures. In February 2007, ISO issued ISO 17361 [3], the first version of the LDWS standard [4]. This standard regulates the relative location of the vehicle and the road lane and the rate of lane departure, in order to prevent the system from warning the driver too early or too late. In 2005, the U.S. Federal Motor Carrier Safety Administration also set reference standards regarding the LDWS operating requirements [5] with reference to ISO 17361.

This paper is organized as follows: Section 1 introduces the importance of LDWS. Section 2 describes the system model, LDWS, Hough transform, and road marking recognition system. Section 3 discusses the experimental procedures, and provides and verifies the results. Section 4 offers several conclusions.

2. Advanced driver assistance system

The Advanced Driver Assistance System (ADAS) is actively developed by major automobile manufacturers worldwide for intelligent vehicle technology. ADAS is considered the first step in the future development of unmanned intelligent vehicles. Currently, ADAS cannot control a vehicle independently from the driver. The system provides several working and environmental situations around a vehicle, which can be analyzed and determined by a microprocessor. ADAS warns drivers of traffic accidents in advance. The system has more than nine functions, including a Blind Spot Detection system (BSD), a Backup Parking Aid System (BPAS), a Rear Crash Collision Warning System (RCCWS), a Lane Departure detection System (LDS), a Collision Mitigation System (CMS), an Adaptive Front-lighting

System (AFS), a Night Vision System (NVS), an Adaptive Cruise Control System (ACCS), a Pre-Crash System (PCS), and a Parking Aid System (PAS). This system has three main processes, which are discussed in Subsections 2.1 and 2.2 [6-8].

2.1. Information collection
The first process in ADAS is information collection. Different systems must use different types of vehicle sensor, such as image sensors, wheel speed sensors, Millimeter Wave (MMW) radar, ultrasonic wave radar, infrared ray radar, and laser radar, to determine the function and parameter-varying situation of a vehicle. LDS, NVS, ACCS, and PAS use CMOS image sensors, infrared ray sensors, and radar. The second process of ADAS is the electronic control unit, which collects information from the sensors for data analysis and processing. The data processing generates control signals that are transmitted to various devices installed in the vehicle. The final process of ADAS is the operating process: the devices installed on a vehicle are operated by the control signals. Vehicle radar is classified into several types, such as laser radar, MMW radar, ultrasonic wave radar, and infrared ray radar, in terms of their transmission medium. Laser radar is known for its high measuring precision, long measuring distance, and high directivity. However, this radar generates inaccurate measurement data in poor road environments and conditions, such as blizzards, storms, and fog. Laser radar is typically used for BSD and CMS. MMW radar can use pulse frequency modulation or frequency-modulated continuous waves as its measuring method. The radar uses frequency bandwidths of 23 GHz to 24 GHz, 60 GHz to 61 GHz, and 76 GHz to 77 GHz. This radar has been termed the "millimeter wave radar" because its average wavelength is on the order of millimeters. MMW radar has a long measuring distance, high operating stability to resist environmental effects, and the capability of measuring the distance and relative velocity between two vehicles. However, this radar has a poor penetrating capability. MMW radar is typically applied for BPAS, RCCWS, CMS, and ACCS. The ultrasonic wave and infrared ray radar have a shorter measuring distance than the other radar types. The ultrasonic wave radar is used for BPAS, while the infrared ray radar is used for NVS. An increasing number of major vehicle makers, including LUXGEN (U6/U7) [9], Volvo, Mercedes-Benz, BMW, and Audi, are beginning to promote ADAS to increase the utility rate of their vehicles.

2.2. Lane departure detection and warning system

In ADAS terminology, LDS is a mechanism designed to warn the driver, through position detection,


Figure 2. Warning thresholds and placement zones in ISO 17361.

lane departure warning, status monitoring, and alerting, when his or her vehicle begins to move out of its lane without showing a turn signal. LDS aims to minimize accidents by addressing the three primary causes of collision, namely, driver error, distractions, and drowsiness. In 2009, NHTSA began to investigate whether it should mandate lane departure detection and frontal collision warning systems on automobiles. LDS has also become a standard technology of the European New Car Assessment Program (EURO-NCAP). This paper designs an LDS according to the specified guidelines of the ISO 17361:2007(E) standard. All our tests are validated under ISO 17361, as shown in Figure 2.

The LDWS architecture follows the recommendations of ISO 17361, as shown in Figure 3. The proposed system consists of several components, such as the lane position detection and warning subsystems. The lane position detection system continuously detects the relative location of the vehicle and the lane, and then inputs the detection results into the LDS for recognition. This system also determines whether the vehicle has departed the lane unintentionally or without showing turn signals, according to pre-defined rules for differentiating the warning area from the non-warning area. If the LDS determines that the vehicle has entered the warning area, the system will automatically warn the driver to respond immediately

Figure 3. System structure of LDWS.

to the situation or to change the direction of his or her vehicle [10,11].

In this study, we propose a safer LDS for monitoring road conditions and for warning drivers about impending danger. A Canny edge detection algorithm and the Hough transform are implemented in this system to detect the lane that is located nearest to the vehicle. Developed by John F. Canny in 1986, the Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. Canny also introduced a computational theory of edge detection that explains how his technique works. Given that an edge in an image may point in several directions, the Canny algorithm uses four filters to detect horizontal, vertical, and diagonal edges in a blurred image. An edge detection operator (e.g., Roberts, Prewitt, or Sobel) returns a value for the first derivative in the horizontal direction (S1) and the vertical direction (S2). The edge gradient magnitude and direction can be determined as follows:

Edge magnitude = √(S1² + S2²), (1)

Edge direction = tan⁻¹(S1/S2). (2)

The Hough transform is a technique that isolates the features of a particular shape within an image. Given that this technique requires the desired features to be specified in a parametric form, the classical Hough transform is most commonly used for the detection of regular curves, such as lines, circles, and ellipses. A generalized Hough transform can be employed in applications where a simple analytic description of a feature is impossible. Given the computational complexity of the generalized Hough algorithm, we restrict the main focus of this study to the classical Hough transform. The main advantages of the Hough transform technique are its tolerance of gaps in feature boundary descriptions and its robustness to image noise.
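A minimal sketch of the classical line Hough transform just described (not the authors' implementation; the 1° angular step and accumulator sizing are illustrative choices):

```python
import numpy as np

def hough_lines(edges, theta_step_deg=1.0):
    """Classical Hough transform for lines in a binary edge image.
    Every edge pixel (x, y) votes for all (rho, theta) pairs with
    rho = x*cos(theta) + y*sin(theta), theta in [-90, 90) degrees."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(-90.0, 90.0, theta_step_deg))
    diag = int(np.ceil(np.hypot(h, w)))                # bound on |rho|
    acc = np.zeros((2 * diag + 1, thetas.size), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(thetas.size)] += 1  # one vote per theta
    return acc, thetas, diag

# Five edge pixels on the vertical line x = 2 all vote for (rho, theta) = (2, 0)
edges = np.zeros((5, 5), dtype=bool)
edges[:, 2] = True
acc, thetas, diag = hough_lines(edges)
```

The accumulator cell at (ρ = 2, θ = 0°) collects one vote per collinear pixel, which is how peaks in the accumulator reveal lines.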

An LDW system aims to monitor the risk of an inadvertent deviation from a lane resulting from the inattention of the driver. Current systems use either an audible beep or a rumble-strip noise, which sounds similar to the noise that a vehicle makes when running over a lane divider. The system typically uses a forward-looking camera that compares the lane marking with the direction of the vehicle to evaluate a possible deviation. A forward-looking camera is also used in our proposed system to help monitor the situation in front of the vehicle. We have added side-view cameras, namely a left side-view camera and a right side-view camera, to increase the reliability of ADAS. Our proposed system aims to prevent drivers from departing the lane on curved lanes or branch roads. The front camera captures the basic view for the proposed system. The


basic view is then transformed into Hough parameters through the Hough transform. The side-view cameras of the vehicle capture the views of the left and right lanes. These views are also transformed into Hough parameters through the Hough transform. We set boundary conditions for choosing the correct parameters to detect the lanes. These boundary conditions ignore extremely large values, limit the number of candidate objects, and offset the lane position [12,13].

In the next step, the system computes the distance between the right lane and the left lane from the front camera, the distance between the right lane and the right vehicle door, and the distance between the left lane and the left vehicle door. The Hough parameters can draw the lane and fetch the coordinates of its top and bottom points. We apply a point-to-point distance theorem to compute the distance. We design a threshold to confirm a vehicle lane departure: a departure is confirmed when more than five occurrences of lane departure are observed.
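The distance check and the more-than-five-occurrences rule can be sketched as follows (the helper names and the threshold are ours; endpoint coordinates would come from the detected Hough lines):

```python
import math

def point_distance(p, q):
    """Point-to-point distance between two lane-line endpoints."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def confirm_departure(lane_gaps, gap_threshold, min_occurrences=5):
    """Confirm a lane departure only after more than five frames show
    a lane gap below the threshold, as the text describes."""
    occurrences = sum(1 for g in lane_gaps if g < gap_threshold)
    return occurrences > min_occurrences

# Endpoints (0, 0) and (3, 4) are 5 units apart
gap = point_distance((0, 0), (3, 4))
```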

Finally, the system combines the three departure warning signals to confirm the departure of a vehicle from its lane. The system will issue a right departure warning when departure warnings are generated in the right side and front views. The system will issue a left departure warning when departure warnings are generated in the left side and front views. The system will not issue a departure warning when a warning is generated in only the right or left side view. The system combines all signals to produce the output image that is shown in Figure 4.
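The combination rule above amounts to a conjunction of the front-view signal with each side-view signal; a minimal sketch (the function and signal names are ours):

```python
def fuse_warnings(front, left_side, right_side):
    """Combine the three departure signals: a side-view warning alone
    is suppressed; one confirmed by the front view is issued."""
    if front and right_side:
        return "right departure"
    if front and left_side:
        return "left departure"
    return None  # lone side-view warnings produce no departure warning
```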

2.3. Hough transform
The Hough transform [14,15] was proposed in 1962 to determine possible lines in an image. Using the common x-y plane as an example, the linear equation through a point (x, y), in slope-intercept form, is y = ax + b. An infinite number of lines pass through (x, y), and every pair of values (a, b) satisfying y = ax + b describes one of them. Rewriting the equation as b = -ax + y and using the (a, b) plane as the parameter space therefore produces a single line equation for the fixed coordinates (x, y). Every point in the x-y plane corresponds to a line in the parameter space, and the main lines of the image plane appear as intersection points in the parameter space. Given that a (the line slope) approaches infinity when the straight line points in the vertical direction, this study uses the following normal-line representation of a straight line, as in Eq. (3):

ρ = x cos θ + y sin θ, (3)

where θ is the angle between the line's normal and the X-axis, while ρ is the length of the normal. The computation is confined to a limited range of angles

Figure 4. Block diagram of ADAS.

Figure 5. Schematic diagram of the normal line representation.

and uses the characteristics of the trigonometric functions to solve the infinity problem, as shown in Figure 5.

The Hough transformation maps each point (x, y) to multiple parameter-space coordinates (ρi, θj). Therefore, the cumulative matrix value A(i, j) is used to record the occurrences of (ρi, θj). The range of the Hough transformation in the ρ-θ parameter space can be expected as -90° ≤ θ ≤ 90° and -D ≤ ρ ≤ D, where D is the maximum length of the diagonal line in the image. Each parameter-space coordinate (ρi, θj) corresponds to the cell (i, j) of the cumulative value array A(i, j). These cumulative arrays


are initially set to zero. For every point (xk, yk) on the x-y plane, and for each segmentation value θ on the θ axis, the equation ρ = xk cos θ + yk sin θ is used to determine the corresponding ρ. The produced ρ is rounded to the nearest allowed value on the ρ axis. Two points on the same straight line in the x-y plane converge at a single point in the (ρ, θ) parameter space: where the two curves meet, the cumulative matrix value is equal to 2.
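A quick numeric check of this accumulation, using Eq. (3): two points on the horizontal line y = 2 both map, at θ = 90°, to ρ = 2, so that single accumulator cell collects both votes:

```python
import math

# Two points on the horizontal line y = 2
p1, p2 = (0, 2), (4, 2)
theta = math.radians(90)  # the line's normal points along the Y-axis
rho1 = p1[0] * math.cos(theta) + p1[1] * math.sin(theta)
rho2 = p2[0] * math.cos(theta) + p2[1] * math.sin(theta)
# rho1 == rho2 == 2: both sinusoids cross at (rho, theta) = (2, 90 deg),
# so that cell of the cumulative matrix reaches the value 2
```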

After the Hough transformation, a value of B in A(i, j) shows that B points lie on the straight line ρi = x cos θj + y sin θj in the x-y plane. The images are segmented before the markings on the lane are identified. Given that the camera is installed in the vehicle facing the road head-on, the road is captured in the lower part of the image. Given that lane markings often appear in pairs, and the vehicle is assumed to be driving in the center of the road, the two sub-images in the lower part of the image can capture the markings on the left and right sides of the road. Therefore, the Y- and X-axes of the coordinate system are set in the center and at the bottom of the image, respectively, to segment the image into two parts: the Hough transformation with 0° ≤ θr ≤ 90° is applied to the right part, and the Hough transformation with -90° ≤ θl ≤ 0° to the left part. The Hough transformation of the two sub-images can determine multiple groups of possible (ρr, θr) and (ρl, θl). The top five groups of cumulative values are selected as the candidate road marking groups. Given that lane markings are usually found in pairs, this study compares (ρr, θr) with (ρl, θl) to determine which of these groups have the closest θ angle to the lane markings in the left and right sub-images. The two chosen groups of (ρi, θj) are re-matched with the road edges produced on the x-y plane, as shown in Figure 6.
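Pairing the candidate groups by closest θ angle, as described, can be sketched like this (the function name and candidate values are illustrative; candidates are (ρ, θ) tuples from the top-5 accumulator peaks of each sub-image):

```python
def match_lane_pair(left_candidates, right_candidates):
    """Choose the left/right (rho, theta) pair whose theta magnitudes
    are closest, since paired lane markings have roughly mirrored
    angles (theta_l in [-90, 0], theta_r in [0, 90])."""
    best_pair, best_gap = None, float("inf")
    for rho_l, theta_l in left_candidates:
        for rho_r, theta_r in right_candidates:
            gap = abs(abs(theta_l) - abs(theta_r))
            if gap < best_gap:
                best_gap = gap
                best_pair = ((rho_l, theta_l), (rho_r, theta_r))
    return best_pair

# Illustrative candidates: -30 deg on the left best mirrors +31 deg on the right
pair = match_lane_pair([(174, -30), (120, -45)], [(144, 31), (90, 60)])
```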

3. Experimental results

Given that the side-view cameras are standard equipment in the LUXGEN MPV, we can easily obtain the

Figure 6. Definitions of the lane marking parameters.

mounting data of the cameras by measurement. Figure 7 shows the original views in our testing vehicle. The left image is our left-view camera output, the middle image is our front camera output, and the right image is our right-view camera output. Each image has a value in its upper-left corner; these values indicate the distance between the two lanes. In each image, the two lane markings are detected and marked. Figures 8 and 9 show the left side-view camera detecting the attempt of the vehicle to depart from the left lane; in other words, the left body of the vehicle is over the earliest warning line. Figures 10 and 11 show the vehicle attempting to depart from the right lane.

The experiments are conducted in both daytime and nighttime environments. According to the Hough transform algorithms, candidates (ρr, θr) and (ρl, θl) can be found in the daytime and nighttime environments, respectively. The parameters are shown in Tables 1 and 2.

In the daytime environment, by comparing the difference between θr and θl, the second group on the right side and the second candidate on the left side are selected as the marking group. The standard line drawing images are shown in Figure 12 to obtain the identification results and to further offset the vehicle warning. In the nighttime environment, Table 2 shows that the first group on both the right and left sides is selected as the marking group. Figure 13 shows the results.

Given the marking detection results, a source of error for the proposed system lies in the marking points,

Table 1. The parameters (ρr, θr) and (ρl, θl) in the daytime environment.

        Left part       Right part
        ρl      θl      ρr      θr
No. 1   174    -35      144     34
No. 2   175    -35      145     34
No. 3   171    -36      139     36
No. 4   169    -38      138     36
No. 5   170    -36      143     33

Table 2. The parameters (ρr, θr) and (ρl, θl) in the nighttime environment.

        Left part       Right part
        ρl      θl      ρr      θr
No. 1   152    -45      162     33
No. 2   154    -48      159     33
No. 3   230     -4      163     33
No. 4   231     -4      162     32
No. 5   153    -48      163     32


Figure 7. Output image on the testing vehicle.

Figure 8. Left side-view departure warning test 1.

Figure 9. Left side-view departure warning test 2.

Figure 10. Right side-view departure warning test 1.

particularly where double yellow or white lines run side by side and the two parallel lines are detected as a single straight line. From a macroscopic perspective, the lead-angle error can affect the offset warning for other vehicles moving forward, when the offset angle is less than the error and when the yaw error is much larger than the error. Such error can be avoided by using an offset threshold.

4. Conclusion

This study proposes a new LDS that uses side-view cameras. We use the LUXGEN MPV as a test vehicle to validate our proposed system. The results of our experiment are similar to those of an LDS that uses a front camera. The proposed system can successfully detect road markings under adequate daytime lighting


Figure 11. Right side-view departure warning test 2.

Figure 12. Daytime image road edge detection results.

Figure 13. Nighttime image road edge detection results.

conditions. However, the road marking detection accuracy of the system is affected by several factors, such as dirt, vehicle type, or aging of the road markings. Our proposed system enhances unilateral marking detection and assists in the virtual marking process to improve the detection rate of the complete offset warning. However, limited ambient lighting during nighttime, even with the aid of headlamps, significantly affects the detection of road markings. Adjusting the brightness and enhancing the glare filter can help address this problem.

The proposed system can successfully detect road markings during both daytime and nighttime, as long as a sufficient amount of light is present. Future endeavors can be undertaken to improve this system, particularly in terms of identifying environmental

masking, detecting road markings under low light during nighttime, adjusting brightness, and increasing the system response speed.

Author contributions

The authors have worked together, and their contributions to this paper are equal. Jia-Shing Sheu developed the modeling and implemented the algorithms. Chih-Han Chang and Tsong-Liang Huang discussed and investigated the research work, in particular the design of the simulation and test procedures. Hsin-Jung Lin was responsible for the simulation and testing. All the authors have read and approved the final manuscript.

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this article.

References

1. National Highway Traffic Safety Administration (NHTSA) [Online]. http://www.nhtsa.gov/

2. An, X., Wu, M. and He, H. "A novel approach to provide lane departure warning using only one forward-looking camera", IEEE Intelligent Vehicles Symp., pp. 356-362 (2006).

3. \ISO 17361:2007", International Organization forStandardization (2007).

4. \Technical committee ISO/TC 204, ISO 17361: Intel-ligent transport systems - Lane departure warning sys-tems - Performance requirements and test procedures",ISO Copyright O�ce, Switzerland (2007).

5. Houser, A., Pierowicz, J. and Fuglewicz, D. "Concept of operations and voluntary operational requirements for lane departure warning system (LDWS) on-board commercial motor vehicles", Technical Report, Federal Motor Carrier Safety Administration, U.S. Department of Transportation (2005).

6. Grace, R. "Drowsy driver monitor and warning system", Proc. Int. Driving Symp. Human Factors Driver Assessment, Training and Vehicle Design, pp. 64-69 (2001).


7. Horne, J.A. and Reyner, L.A. "Sleep related vehicle accidents", British Medical Journal, 310(6979), pp. 565-567 (1995).

8. Mannering, F.L. and Bhat, C.R. "Analytic methods in accident research: methodological frontier and future directions", Analytic Methods in Accident Research, 1, pp. 1-22 (2014).

9. LUXGEN Motor Co., Ltd. [Online]. http://www.luxgen-motor.com/

10. Cario, G., Casavola, A., Franze, G. and Lupia, M. "Data fusion algorithms for lane departure warning systems", American Control Conference (ACC), Baltimore, MD, pp. 5344-5349 (2010).

11. Bernsteiner, S., Lindvai-Soos, D., Holl, R. and Eichberger, A. "Subjective evaluation of advanced driver assistance by evaluation of standardized driving maneuvers", SAE Technical Paper 2013-01-0724 (2013).

12. Hsiao, P.Y., Yeh, C.W., Huang, S.S. and Fu, L.C. "A portable vision-based real-time lane departure warning system: day and night", IEEE Trans. Veh. Technol., 58(4), pp. 2089-2094 (2009).

13. Dimitrakopoulos, G. and Demestichas, P. "Intelligent transportation systems", IEEE Veh. Technol. Magazine, 5(1), pp. 77-84 (2010).

14. Gonzalez, R.C. and Woods, R.E., Digital Image Processing, 3rd Edn., Prentice Hall (2008).

15. Herout, A., Dubská, M. and Havel, J. "Review of Hough transform for line detection", Real-Time Detection of Lines and Grids, SpringerBriefs in Computer Science, pp. 3-16 (2013).

Biographies

Jia-Shing Sheu received MS and PhD degrees from the Department of Electrical Engineering at National Cheng Kung University, Tainan, Taiwan, in 1995 and 2002, respectively. Currently, he is Associate Professor in the Department of Computer Science at the National

Taipei University of Education, Taipei, Taiwan. His research interests include pattern recognition and image processing, especially real-time face recognition, advanced driver assistance systems, and embedded systems.

Chih-Han Chang received BS and PhD degrees in Electrical Engineering from Tamkang University, Taiwan, in 2001 and 2007, respectively. He is currently an engineer at Hua-chuang Automobile Information Technical Center Co. (HAITEC) in the Yulon group, Taiwan, and manager in charge of the Advanced Driver Assistance System (ADAS). He is also Adjunct Assistant Professor with the National Taipei University of Education, Taipei, Taiwan. His research interests cover the design and analysis of sensor fusion, ADAS, and infotainment systems.

Tsong-Liang Huang received BS, MS, and PhD degrees in Electrical Engineering from the National Taiwan University, Taipei, in 1980, 1982, and 1991, respectively. He is currently Distinguished Professor in the National Taipei University of Education, Taiwan, and President of Nan Jeon University of Technology. His research interests include IC design, control systems, power systems, electronic systems, fuzzy theory, decision modeling, neurology systems, artificial intelligence systems, fingerprint IC systems, decision support system modeling, educational evaluation, school marketing, and risk management in education.

Hsin-Jung Lin received a BS degree from the National Taiwan Normal University, Taipei, Taiwan, in 2002, and is currently in her second year of graduate studies at the National Taipei University of Education, Taiwan, concentrating on computer sciences. Her main research interests include library and information sciences.