
Snap-DAS: A Vision-based Driver Assistance System on a Snapdragon™ Embedded Platform

Ravi Kumar Satzoda, Sean Lee, Frankie Lu, and Mohan M. Trivedi

Abstract— In recent years, mobile computing platforms have become increasingly cheaper and yet more powerful in terms of computational resources. Automobiles provide a suitable environment in which to deploy such mobile platforms in order to provide low-cost driver assistance systems. In this paper, we propose Snap-DAS, a vision-based driver assistance system implemented on a Snapdragon™ embedded platform. A forward-facing camera combined with the Snapdragon™ platform constitutes Snap-DAS. The compute-efficient implementation of the LASeR lane estimation algorithm in [1] is exploited to implement a set of lane-related functions on Snap-DAS, which include lane drift warning and lane change event detection. A detailed evaluation is performed on live data and Snap-DAS is also field tested on freeways. Furthermore, we explore the possibility of using Snap-DAS for analyzing drives for online naturalistic driving studies.

I. INTRODUCTION

Mobile computing processors and chipsets have made great advances in the past decade in both computing speed and power consumption [2]. These advances have resulted in the increased usage of embedded electronic systems in modern automobiles. In particular, embedded intelligent driver assistance systems have gained significant popularity [3], [4]. One key requirement in the realization of advanced driver assistance systems (ADAS) is that they must be highly accurate and dependable. Higher accuracy often requires computationally more complex algorithms [5], resulting in higher power consumption. However, embedded processing platforms are resource constrained, particularly in terms of battery power and computational speed/frequency [5]. Therefore, the design of ADAS for embedded platforms requires an understanding of the trade-off between computational performance (speed and power consumption) and accuracy.

Among ADAS, vision-based sensing in particular is gaining popularity in recent times [6]. This is because of the ever decreasing cost of camera sensors and the miniaturization of cameras, which enable ubiquitous incorporation of cameras in vehicles [7]. However, vision-based algorithms also involve data-intensive processing, which is a challenge to implement on power- and speed-constrained embedded platforms [5]. Most vision-based algorithms for driver assistance are usually prototyped on powerful personal computers, and later implemented and translated to embedded hardware systems. This approach can lead to a mismatch between the performance observed during prototyping and the actual performance of the ADAS on the embedded platform. Therefore, there is an increasing need to innovate vision-based driver assistance systems (DAS) at the algorithmic level such that they are architecture-aware for implementation on embedded platforms [2], [8].

1 All authors are with the Laboratory of Safe and Intelligent Vehicles, University of California San Diego, La Jolla, CA. [email protected], [email protected], [email protected], [email protected].

In this paper, we present an ADAS platform that is implemented on the Snapdragon™ embedded computing processor [9], which is widely used in mobile platforms such as mobile phones and tablets. We call the proposed driver assistance solution the "Snap-DAS platform"; it is designed to assist the driver by issuing warnings during lane drifts and lane changes. The work presented in this paper is an initial step in the direction of developing embedded platforms for ADAS. To the best of our knowledge from available literature (academic and non-academic), this is the first work on implementing ADAS on a Snapdragon™ mobile processor. Snap-DAS is evaluated during real-world field trials and shows promising possibilities in terms of embedded realization of ADAS. We also explore the possibility of analyzing driving semantics during the drive as part of online naturalistic driving studies [4], [10].

II. SNAP-DAS PLATFORM: HARDWARE SETUP

The proposed Snap-DAS platform is primarily an embedded platform for driver assistance systems (DAS) that employs the Snapdragon™ 600 processor. The Snap-DAS platform uses an Inforce 6410 development board whose main processing unit is a Qualcomm Snapdragon™ 600 processor running at 1.7 GHz. Unlike PCs, which have large computing processors and clock speeds of more than 2.5 GHz, the Snapdragon™ is a more resource-constrained processor in terms of computing capabilities and clock speed. However, this particular processor is found in many mobile phones and is representative of commercially available consumer products. Snap-DAS uses a Linaro-based flavor of Linux as the operating system, which is optimized for ARM processors such as the Snapdragon™. The vision algorithms are written in C/C++ and are optimized to run on multiple threads in order to utilize the full computing speed of the processor's four cores.

In terms of sensing modalities, Snap-DAS is fitted with a forward camera (Logitech C920 webcam) that is connected to the development board using the on-board USB ports. Although the camera is capable of capturing video frames at resolutions up to 1920 × 1080, Snap-DAS currently captures the input video at a resolution of 640 × 480 in order to provide data capture rates of about 17 frames per second (real-time data capture). The entire hardware setup of Snap-DAS is shown in Fig. 1.
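The capture path described above can be illustrated with a short sketch. This is a minimal, hypothetical example assuming OpenCV is available on the Linaro/ARM build and that the C920 enumerates as device 0; it is not the actual Snap-DAS capture code.

```cpp
// Minimal capture-loop sketch (assumption: OpenCV on the Linaro build, webcam at index 0).
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                      // Logitech C920 on the on-board USB port
    if (!cap.isOpened()) {
        std::cerr << "camera not found\n";
        return 1;
    }
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);       // capture at 640 x 480 rather than 1920 x 1080
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);      // to sustain roughly 17 fps on the Snapdragon 600

    cv::Mat frame;
    while (cap.read(frame)) {
        // hand the frame to the lane-analysis pipeline of Section III
    }
    return 0;
}
```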


Fig. 1. Snap-DAS platform setup: (a) Snapdragon development board, (b) camera (top of windshield) and Snapdragon setup, (c) overall Snap-DAS platform setup with the camera, processing board and display unit.

In terms of functionality, Snap-DAS currently caters to operations that are related to ego-vehicle localization within the lane. It is to be noted that this is the first work in the context of ADAS on Snapdragon™ processors, and work is currently underway to incorporate more functionality on this processor. In the current setup, we particularly look at the forward view from the ego-vehicle, and driver assistance is provided with regard to lane detection on the input video stream.

III. LANE ANALYSIS ON SNAP-DAS

The Snap-DAS platform is equipped with a variety of driver assistance operations related to lane analysis from the forward view of the ego-vehicle. Lane detection is first applied to the input image, and the resulting lane positions are used to perform the following functions: lane drift detection and lane change detection. In order to perform these functions, we employ the lane estimation algorithm called LASeR (lane analysis using selective regions), which was particularly designed for embedded realization in [1], [2]. Although LASeR was designed as a lane detection algorithm, we use it in different ways to perform the driver assistance operations listed above in Snap-DAS.

A. Lane Estimation on Snap-DAS

Fig. 2. Block diagram illustrating the LASeR algorithm.

Before going into the details of the functions of Snap-DAS, the LASeR algorithm [1] is briefly discussed below for the sake of completeness; more details about LASeR can be found in [1]. Unlike most existing lane estimation methods, which process the entire image (or a region of interest (RoI) below the vanishing line in the image), LASeR uses selected bands to detect lane features, as shown in Fig. 2. An image $I$ is first converted into its inverse perspective mapping (IPM) [11] image $I_W$, which provides a top view of the road scene. In this IPM image, $N_B$ scan bands, each of height $h_B$ pixels, are selected along the vertical $y$-axis, i.e. along the road from the ego-vehicle. Each band $B_i$ is then convolved with a vertical filter that is derived from steerable filters [1]. The vertical filter is given by

\[ G_{0^\circ}(x,y) = \frac{-2x}{\sigma^2}\, e^{-\frac{x^2+y^2}{\sigma^2}} \tag{1} \]

where $G(x,y) = e^{-\frac{x^2+y^2}{\sigma^2}}$ is a 2D Gaussian function. In LASeR, $G_{0^\circ}(x,y)$ is a $5 \times 5$ 2D filter. Therefore, the filter response $B_i^S$ for each scan band $B_i$ is given by

\[ B_i^S = B_i * G_{0^\circ}(x,y). \tag{2} \]
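As an illustration of (1) and (2), the following sketch builds the 5 × 5 kernel and applies it to one scan band. The value of σ and the use of OpenCV's filter2D (which computes correlation rather than a true convolution) are assumptions made for illustration only.

```cpp
// Sketch of the 5x5 vertical steerable-filter kernel of Eq. (1) and the band response of Eq. (2).
// sigma = 1.0 is an assumed value; filter2D performs correlation, used here as a stand-in
// for the convolution in Eq. (2).
#include <opencv2/opencv.hpp>
#include <cmath>

cv::Mat makeG0Kernel(float sigma = 1.0f) {
    cv::Mat k(5, 5, CV_32F);
    for (int r = 0; r < 5; ++r) {
        for (int c = 0; c < 5; ++c) {
            float x = static_cast<float>(c - 2);   // kernel centered at (2, 2)
            float y = static_cast<float>(r - 2);
            k.at<float>(r, c) = (-2.0f * x / (sigma * sigma))
                              * std::exp(-(x * x + y * y) / (sigma * sigma));
        }
    }
    return k;
}

// Filter response B_i^S = B_i * G_0 for one scan band B_i.
cv::Mat filterBand(const cv::Mat& band, const cv::Mat& kernel) {
    cv::Mat response;
    cv::filter2D(band, response, CV_32F, kernel);
    return response;
}
```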

$B_i^S$ from each band is then analyzed for lane features by thresholding with two thresholds to generate two binary maps $E^+$ and $E^-$ for each band. Vertical projections $p^+$ and $p^-$ are computed from $E^+$ and $E^-$, which are then used to detect lane features using a shift-and-match operation (SMO) in the following way:

\[ K = (p^+ \ll \delta) \odot p^- \tag{3} \]

where $\odot$ represents point-wise multiplication and $\delta$ is the amount of left shift (denoted by $\ll$ above) that the vector $p^+$ undergoes. The SMO allows us to detect the adjacent light-to-dark and dark-to-light transitions that characterize lane features. Applying a suitable threshold on $K$ together with the road model, we get the positions of the left and right lanes in the $i$-th scan band $B_i$, denoted by

\[ P_{B_i} = \{P_L(x_L, y_i),\; P_R(x_R, y_i)\}. \tag{4} \]
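A sketch of the shift-and-match operation of (3) on one filtered band is given below. The threshold values, the shift δ, and the mapping of positive/negative responses to the two edge polarities are illustrative assumptions, not the parameters used in [1].

```cpp
// Sketch of the SMO of Eq. (3): binary maps E+/E-, column projections p+/p-, shift-and-match.
// thPos, thNeg and delta are placeholder parameters.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<float> shiftAndMatch(const cv::Mat& bandResponse,   // CV_32F response from Eq. (2)
                                 float thPos, float thNeg, int delta) {
    cv::Mat ePos = bandResponse > thPos;    // E+: strongly positive filter responses
    cv::Mat eNeg = bandResponse < thNeg;    // E-: strongly negative filter responses

    std::vector<float> pPos(bandResponse.cols), pNeg(bandResponse.cols), K(bandResponse.cols, 0.f);
    for (int c = 0; c < bandResponse.cols; ++c) {
        pPos[c] = static_cast<float>(cv::countNonZero(ePos.col(c)));   // vertical projection p+
        pNeg[c] = static_cast<float>(cv::countNonZero(eNeg.col(c)));   // vertical projection p-
    }
    // K = (p+ << delta) . p- : adjacent opposite-polarity edges indicate a lane marking.
    for (int c = 0; c + delta < bandResponse.cols; ++c)
        K[c] = pPos[c + delta] * pNeg[c];
    return K;   // peaks in K above a threshold give candidate lane-feature columns
}
```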

Therefore, we get $N_B$ such positions in the $N_B$ scan bands (as shown in Fig. 3), resulting in $P = \{P_{B_i}\}$. These positions are associated with each other using a road model and are also tracked using extended Kalman tracking. More details are given in [1].
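How the per-band positions could be tied together by a road model can be illustrated with a simple linear fit in the IPM image. This is only a stand-in for the association step; the actual road model and the extended Kalman tracking are described in [1].

```cpp
// Illustrative stand-in: fit a straight-line road model x = a + b*y (in IPM coordinates)
// to the band-wise lane points by least squares. Not the model used in [1].
#include <utility>
#include <vector>

struct LanePoint { float x, y; };   // one lane-feature position from one scan band

std::pair<float, float> fitLaneModel(const std::vector<LanePoint>& pts) {
    float sx = 0.f, sy = 0.f, syy = 0.f, sxy = 0.f;
    for (const auto& p : pts) { sx += p.x; sy += p.y; syy += p.y * p.y; sxy += p.x * p.y; }
    const float n = static_cast<float>(pts.size());
    const float denom = n * syy - sy * sy;
    if (n == 0.f || denom == 0.f)                  // degenerate: no points or all bands at one y
        return {n > 0.f ? sx / n : 0.f, 0.f};
    const float b = (n * sxy - sx * sy) / denom;   // lateral slope of the lane in the IPM image
    const float a = (sx - b * sy) / n;             // lane offset at y = 0
    return {a, b};
}
```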

Fig. 3. IPM image $I_W$ of the input image showing the scan bands.

Considering the resource constraints of the Snapdragon processor described in Section II, LASeR is well suited for such platforms because, unlike conventional lane estimation algorithms, it processes only selected bands of the image to detect lane features. Additionally, the band-based approach enables scalability of the algorithm, i.e., LASeR can be configured to use a smaller number of scan bands and still detect the lanes. In Section IV-B, we demonstrate the trade-offs between accuracy and computation time of implementing LASeR on the Snapdragon™ processor. The band-based approach in LASeR also enables parallelization of the lane estimation algorithm on the multiple computing cores of the Snapdragon™ processor. This is achieved by processing groups of bands in parallel on the four cores that are available on a Snapdragon™ processor.
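The band-level parallelism can be sketched as follows, assuming std::thread is available on the Linaro toolchain; processBand stands in for the per-band LASeR steps (filtering, SMO and feature localization) and is not an actual Snap-DAS function.

```cpp
// Sketch: interleave scan bands across the four cores. processBand is a hypothetical
// callable that runs the per-band LASeR steps.
#include <functional>
#include <thread>
#include <vector>

void processAllBands(int numBands,
                     const std::function<void(int)>& processBand,
                     int numThreads = 4) {
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t]() {
            for (int b = t; b < numBands; b += numThreads)   // bands t, t+4, t+8, ...
                processBand(b);
        });
    }
    for (auto& w : workers) w.join();                        // wait for all bands of this frame
}
```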

Fig. 4. Lane departure warning in conventional ADAS. Note that the system does not keep track of whether the vehicle is within the original lane or the next lane.

B. Lane Drift and Change Warnings in Snap-DAS

A lane departure warning is an integral part of many commercial ADAS, such as Mobileye [12]. In most such systems, a warning is issued after the vehicle crosses the lane marking without the turn indicator switched on. Referring to Fig. 4, when the vehicle departs from the lane, the system issues a warning as shown in Fig. 4. There are two issues with such warnings. First, the warning is issued after the departure occurs, and hence it could come too late if there are vehicles in the neighboring lane. Second, lane departures do not imply lane changes, i.e. lane departures are often unintentional and the driver frequently steers back into the original ego-lane, as depicted in Fig. 4. The warnings in most existing systems do not differentiate between lane changes and lane departures. Therefore, they cannot keep track of the vehicle re-entering the original ego-lane.

Fig. 5. Lane drift warning (a) and lane change warning (b) in Snap-DAS. Note that during the drift/departure in (a), Snap-DAS keeps track of the lane in which the ego-vehicle is located.

Snap-DAS issues a warning for lane drift and includes lane departure as part of the lane drift warning. Additionally, Snap-DAS detects lane change events separately. This is illustrated in Fig. 5(a) & (b). When a lane departure happens as shown in Fig. 5(a), Snap-DAS issues a warning before the departure from the lane. The warning is issued for lane drifting, which is inclusive of the lane departure. Therefore, when the driver steers the vehicle back into the ego-lane as shown in Fig. 5, the warning remains a lane drift warning because the vehicle is not in the middle of the original ego-lane. However, when a lane change occurs, Snap-DAS identifies it as a lane change in addition to lane drifting, as shown in Fig. 5(b). In this way, Snap-DAS also keeps track of the lane position information (which is missing in conventional lane departure warnings).


In order to issue the above two warnings, Snap-DAS employs the LASeR algorithm in a conservative manner. LASeR is first used to detect whether the ego-vehicle is drifting. This is performed using the lane drift detection method described in [13]. Given the positions of the left and right lane features in the nearest $k$ bands, the $x$-coordinates of the lane features from LASeR are used to detect lane drifts using the following formulations:

\[ \text{event} = \textit{left drift} \quad \text{if} \quad \begin{cases} \forall x_{Lj}: \; L_L^- < x_{Lj} < L_L^+ \\ \forall x_{Rj}: \; L_R^- < x_{Rj} < L_R^+ \end{cases} \tag{5} \]

\[ \text{event} = \textit{right drift} \quad \text{if} \quad \begin{cases} \forall x_{Lj}: \; R_L^- < x_{Lj} < R_L^+ \\ \forall x_{Rj}: \; R_R^- < x_{Rj} < R_R^+ \end{cases} \tag{6} \]

\[ \text{event} = \textit{in lane} \quad \text{if} \quad \begin{cases} \forall x_{Lj}: \; R_L^+ < x_{Lj} < L_L^- \\ \forall x_{Rj}: \; R_R^+ < x_{Rj} < L_R^- \end{cases} \tag{7} \]

where $0 < j < k$; $L_L^-$ and $L_L^+$ indicate the lower and upper bounds of the $x$-coordinates of the left drift region with respect to the left lane, and $L_R^-$ and $L_R^+$ indicate the lower and upper bounds of the $x$-coordinates of the left drift region with respect to the right lane. The remaining variables in the above equations denote similar bounds for the right drift with respect to the left and right lanes. The values of the bounds are set based on the sensitivity that is needed for lane drift detection; they are set to 0.5 m in our studies.
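A sketch of how conditions (5)-(7) could be evaluated per frame is shown below. The DriftBounds layout and the helper names are illustrative assumptions; in Snap-DAS the bounds correspond to roughly 0.5 m around the markings, expressed in IPM pixel coordinates.

```cpp
// Sketch of the per-frame drift classification of Eqs. (5)-(7). Bounds and types are illustrative.
#include <vector>

enum class DriftEvent { InLane, LeftDrift, RightDrift, Unknown };

struct DriftBounds {          // x-coordinate bounds in the IPM image for one event type
    float leftLo, leftHi;     // bounds applied to the left-lane features (e.g. L_L^-, L_L^+)
    float rightLo, rightHi;   // bounds applied to the right-lane features (e.g. L_R^-, L_R^+)
};

static bool allWithin(const std::vector<float>& xs, float lo, float hi) {
    for (float x : xs)
        if (x <= lo || x >= hi) return false;    // every one of the k bands must satisfy the bound
    return true;
}

// xL, xR: left/right lane-feature x-coordinates from the k nearest scan bands.
DriftEvent classifyDrift(const std::vector<float>& xL, const std::vector<float>& xR,
                         const DriftBounds& leftDrift, const DriftBounds& rightDrift,
                         const DriftBounds& inLane) {
    if (allWithin(xL, leftDrift.leftLo, leftDrift.leftHi) &&
        allWithin(xR, leftDrift.rightLo, leftDrift.rightHi))   return DriftEvent::LeftDrift;  // (5)
    if (allWithin(xL, rightDrift.leftLo, rightDrift.leftHi) &&
        allWithin(xR, rightDrift.rightLo, rightDrift.rightHi)) return DriftEvent::RightDrift; // (6)
    if (allWithin(xL, inLane.leftLo, inLane.leftHi) &&
        allWithin(xR, inLane.rightLo, inLane.rightHi))         return DriftEvent::InLane;     // (7)
    return DriftEvent::Unknown;   // no condition satisfied, e.g. missing lane features
}
```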

Fig. 6. State machine that combines left/right drifts to warn during lane change maneuvers.

If a lane drift event is detected at time $t_i$ based on the above conditions, a lane drift warning (depending on right or left drift) is issued by Snap-DAS. Simultaneously, a state machine is initiated to track the vehicle's maneuvers and determine whether the drifting is leading to a lane change event. The state machine is shown in Fig. 6. It consists of five states, $S_{IL}$, $S_{LD}$, $S_{RD}$, $S_{LC}$ and $S_{RC}$, corresponding to within lane, left drift, right drift, left lane change and right lane change, respectively. The state transition conditions are indicated on the edges of the state machine shown in Fig. 6. They are derived from the lane drift formulations in (5), (6) and (7), i.e. the state transition depends on the type of lane drift that is detected.

Fig. 7. An example to illustrate the state machine transitions.

Let us explain the state machine in more detail using the example shown in Fig. 7. If the vehicle is within the lane (state $S_{IL}$) and no drift is detected (condition no drift), then the state machine remains in $S_{IL}$, i.e., the in-lane state. If a right drift is detected using (6), then the condition r drift becomes active and moves the state machine to state $S_{RD}$. If the right drift continues, the state machine remains in $S_{RD}$.

If the vehicle continues to drift right, at a particular time instance $t_k$ (shown in Fig. 7) the vehicle will cross into the right lane. In that scenario, the right lane marking of the previous ego-lane becomes the left lane marking of the new ego-lane at $t_k$ (see Fig. 7). A lane change is then said to have occurred, and the vehicle is now seen as drifting left. Therefore, the left drift condition becomes active and Snap-DAS moves into the right lane change state $S_{RC}$ until the no-drift condition becomes active.

Snap-DAS thus uses the lane drift conditions to position the vehicle accurately, so that a lane change event is detected only if it actually occurs. If the vehicle drifts to the right but does not complete a lane change, i.e. it crosses the lane marking and immediately steers back into its original lane, the state transition from right drift to lane change (referring to the example above) does not occur. Therefore, Snap-DAS continues to issue a lane drift warning.
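Our reading of the five-state machine of Fig. 6 can be summarized in the following sketch, driven by the per-frame drift event of (5)-(7). The exact transition labels are taken from the description above and may differ in detail from the Snap-DAS implementation.

```cpp
// Sketch of the Fig. 6 state machine. DriftEvent::InLane plays the role of the "no drift" condition.
enum class DriftEvent { InLane, LeftDrift, RightDrift, Unknown };
enum class LaneState  { InLane, LeftDrift, RightDrift, LeftChange, RightChange };

LaneState step(LaneState s, DriftEvent e) {
    if (e == DriftEvent::Unknown) return s;                   // unreliable features: hold state
    switch (s) {
    case LaneState::InLane:
        if (e == DriftEvent::LeftDrift)  return LaneState::LeftDrift;
        if (e == DriftEvent::RightDrift) return LaneState::RightDrift;
        return LaneState::InLane;
    case LaneState::RightDrift:
        // Once the right marking is crossed it becomes the new left marking, so the detected
        // drift flips to "left": interpreted as a right lane change.
        if (e == DriftEvent::LeftDrift)  return LaneState::RightChange;
        if (e == DriftEvent::InLane)     return LaneState::InLane;   // driver steered back
        return LaneState::RightDrift;
    case LaneState::LeftDrift:
        if (e == DriftEvent::RightDrift) return LaneState::LeftChange;
        if (e == DriftEvent::InLane)     return LaneState::InLane;
        return LaneState::LeftDrift;
    case LaneState::LeftChange:
    case LaneState::RightChange:
        if (e == DriftEvent::InLane)     return LaneState::InLane;   // settled in the new lane
        return s;
    }
    return s;
}
```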

IV. PERFORMANCE EVALUATION

In this section, we perform a detailed performance analysis of Snap-DAS. The evaluation is performed at multiple levels. First, the performance of the Snapdragon™ processor in capturing data is presented. Then, LASeR's lane detection performance on Snap-DAS, in terms of both accuracy and computation time, is discussed in detail. Thereafter, we show some sample results of Snap-DAS's functionality, i.e. lane drift and lane change warnings, and lane detection results.

TABLE I
VIDEO CAPTURE RATES USING THE SNAPDRAGON™ PROCESSOR

Camera setup    Frames per second (fps)
1 camera        16.63
2 cameras       12.19

A. Snapdragon™ Processor Performance

The Snap-DAS platform involves one camera connected to the development board. We implemented a simple program to capture data using the Snap-DAS system. In addition to the single-camera setup discussed in Section II, we also implemented a capture system using two cameras. This exercise helped to find the timing limits that one should work with if multiple cameras are used. Table I shows the timing in terms of frames per second for the two camera configurations. In both cases the video resolution is 640 × 480. It can be seen that the one-camera setup achieves near real-time performance in terms of visual data capture, whereas adding a second camera reduces the frame rate by roughly a quarter.

B. LASeR on Snap-DAS

It was shown in Section III-A that LASeR offers scalability in terms of the number of bands that can be chosen to detect lane features. A lower number of scan bands results in fewer computations, which is more suitable for implementation on the Snapdragon™ processor. However, lowering the number of scan bands also affects the accuracy. Table II lists the time per frame in milliseconds required by LASeR to detect lanes on the Snapdragon™ processor. Three different configurations of the LASeR algorithm are listed, varying the number of scan bands from 16 to 8 to 4. It can be seen that each halving of the number of scan bands saves about 2 ms, i.e. about 4 ms between the 16-band and 4-band configurations.

TABLE II
COMPUTATIONAL TIMES OF LASER ON SNAP-DAS

Number of scan bands    Time per frame (ms)
16 bands                49.26
8 bands                 47.19
4 bands                 45.31

We now evaluate the accuracy of LASeR on Snap-DAS for the different configurations of the LASeR algorithm. The lane position deviation (LPD) metric [14] is used to evaluate the lanes detected by LASeR for the three scan-band configurations. Table III lists the mean and standard deviation of the LPD in centimeters. It can be seen from Table III that LASeR is most accurate, with the minimum mean LPD, when 16 scan bands are used to look for lane features. This is expected because LASeR is more accurate with more scan bands. However, the error increases by less than 2 cm when the number of scan bands is reduced to 4, thereby reducing the amount of computation on Snap-DAS.

TABLE III
LANE POSITION DEVIATION OF LASER ON SNAP-DAS

Number of scan bands    µ_LPD (cm)    σ_LPD (cm)
4 bands                 8.03          3.75
8 bands                 7.17          4.02
16 bands                6.52          4.01
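For reference, the mean and standard deviation of the LPD reported in Table III can be computed along the lines of the sketch below, assuming per-frame estimated and ground-truth lane positions already expressed in centimeters; the full evaluation protocol is defined in [14].

```cpp
// Sketch of computing mean/std of the lane position deviation (LPD). Inputs are assumed
// to be per-frame lane positions in centimeters; see [14] for the actual metric definition.
#include <cmath>
#include <cstddef>
#include <vector>

struct LpdStats { double mean, stddev; };

LpdStats computeLpd(const std::vector<double>& estimated, const std::vector<double>& groundTruth) {
    std::vector<double> dev;
    for (std::size_t i = 0; i < estimated.size() && i < groundTruth.size(); ++i)
        dev.push_back(std::fabs(estimated[i] - groundTruth[i]));   // per-frame deviation

    double sum = 0.0;
    for (double d : dev) sum += d;
    const double mean = dev.empty() ? 0.0 : sum / dev.size();

    double var = 0.0;
    for (double d : dev) var += (d - mean) * (d - mean);
    const double stddev = dev.empty() ? 0.0 : std::sqrt(var / dev.size());
    return {mean, stddev};
}
```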

C. Snap-DAS Warnings

Snap-DAS is designed to issue a variety of warnings related to ego-vehicle maneuvers within and across lanes. The icons, which relate to ego-vehicle maneuvers, i.e. lane drifts and lane changes, are shown in Fig. 8.

Fig. 8. Set of warnings that Snap-DAS issues.

Snap-DAS was connected to the on-board camera as described in Section II, and a display monitor connected to Snap-DAS showed the warning/information icons on the input video stream. In Fig. 9, warnings issued by Snap-DAS due to ego-vehicle maneuvers are shown. A right lane drift warning is indicated in Fig. 9(a), and a lane change warning is issued for the ego-vehicle maneuver in Fig. 9(b).

Fig. 9. Lane drift (a) and lane change (b) being detected during the drive by Snap-DAS.

We briefly compare the functionality of Snap-DAS with one of the commercial vision-based ADAS. Mobileye™ 560 [12] is used as the reference commercial system. Mobileye provides a range of functions that include lane departure warning, vehicle detection in the ego-lane with headway monitoring, pedestrian detection, etc. Snap-DAS is currently designed for a set of functions related to the lanes in front of the ego-vehicle. Snap-DAS detects both lane drifts (which include lane departures) and lane change events, which is not directly available in Mobileye.

V. SNAP-DAS FOR NATURALISTIC DRIVING STUDIES

Given the mobility of the Snap-DAS platform, it can be used for drive analysis of drivers in naturalistic driving studies (NDS) [4]. The operations of Snap-DAS can be used to create a log of different events such as the number of lane changes, lane drift events, etc. These events can then be used to create a drive analysis report, which summarizes the semantics of the drive. Drive analysis report generation for naturalistic driving studies is explored in detail for offline analysis of pre-recorded naturalistic driving data in [4]. However, the same report can be generated for online data using the mobile Snap-DAS platform while driving. This is an additional functionality that can be added as the final logging operation in Snap-DAS. Furthermore, applications could be developed that generate measures about driving styles, driver behaviors, etc. based on the drive analysis report. We introduce this as part of this paper to indicate future possibilities that can be explored using the mobile Snap-DAS platform.
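The kind of per-drive event log sketched below could serve as the final logging operation mentioned above; the event names and the summary format are illustrative assumptions, not a format defined by Snap-DAS.

```cpp
// Illustrative per-drive event log for online drive analysis; not an actual Snap-DAS component.
#include <chrono>
#include <map>
#include <string>
#include <vector>

struct DriveEvent {
    std::chrono::system_clock::time_point when;   // timestamp of the event
    std::string type;                              // e.g. "lane_drift_left", "lane_change_right"
};

class DriveLog {
public:
    void record(const std::string& type) {
        events_.push_back({std::chrono::system_clock::now(), type});
    }
    // Minimal "drive analysis report": number of occurrences per event type.
    std::map<std::string, int> summary() const {
        std::map<std::string, int> counts;
        for (const auto& e : events_) ++counts[e.type];
        return counts;
    }
private:
    std::vector<DriveEvent> events_;
};
```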

VI. CONCLUDING REMARKS

In this paper, we presented a mobile platform for vision-based driver assistance called Snap-DAS. The hardware setup, the underlying algorithms that provide a set of functions, and the evaluation on live driving trials were elaborated. Snap-DAS opens a new set of possibilities in the area of mobile platforms for driver assistance. This paper primarily focuses on operations related to lanes using a single forward-looking camera in the ego-vehicle. Snap-DAS could be explored further to include more operations related to multiple perspectives and multiple objects. However, the addition of more functions also implies a higher computational load, and the Snapdragon™ processor must be carefully investigated at the processor level to accommodate higher computational requirements. Finally, the mobility offered by Snap-DAS could also be exploited to analyze drivers and driving styles as part of naturalistic driving studies, in order to develop measures that can aid in developing safety systems.

REFERENCES

[1] R. K. Satzoda and M. M. Trivedi, "Selective Salient Feature based Lane Analysis," in 2013 IEEE Intelligent Transportation Systems Conference, 2013, pp. 1906–1911.

[2] R. K. Satzoda and M. M. Trivedi, "Vision-based Lane Analysis: Exploration of Issues and Approaches for Embedded Realization," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2013, pp. 604–609.

[3] B. M. Wilamowski, "Recent advances in in-vehicle embedded systems," in IECON 2011 - 37th Annual Conference of the IEEE Industrial Electronics Society, Nov. 2011, pp. 4623–4625.

[4] R. Satzoda and M. Trivedi, "Drive analysis using vehicle dynamics and vision-based lane semantics," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 1, pp. 9–18, Feb. 2015.

[5] F. Stein, "The challenge of putting vision algorithms into a car," in 2012 IEEE CVPR Workshops, June 2012, pp. 89–94.

[6] H. Liu, S. Chen, and N. Kubota, "Intelligent Video Systems and Analytics: A Survey," IEEE Transactions on Industrial Informatics, vol. 9, no. 3, pp. 1222–1233, 2013.

[7] A. Doshi, B. T. Morris, and M. M. Trivedi, "On-road prediction of driver's intent with multimodal sensory cues," IEEE Pervasive Computing, vol. 10, no. 3, pp. 22–34, 2011.

[8] R. K. Satzoda and M. M. Trivedi, "On enhancing lane estimation using contextual cues," IEEE Transactions on Circuits and Systems for Video Technology, vol. 99, 2015.

[9] "Snapdragon Processors," https://www.qualcomm.com/products/snapdragon.

[10] R. K. Satzoda, S. Martin, M. V. Ly, P. Gunaratne, and M. M. Trivedi, "Towards Automated Drive Analysis: A Multimodal Synergistic Approach," in IEEE Intelligent Transportation Systems Conference, 2013 (to appear).

[11] A. Broggi, "Parallel and local feature extraction: a real-time approach to road boundary detection," IEEE Transactions on Image Processing, vol. 4, no. 2, pp. 217–223, Jan. 1995.

[12] "Mobileye," http://www.mobileye.com/.

[13] R. K. Satzoda, P. Gunaratne, and M. M. Trivedi, "Drive Analysis using Lane Semantics for Data Reduction in Naturalistic Driving Studies," in 2014 IEEE Intelligent Vehicles Symposium (IV), 2014, pp. 293–298.

[14] R. K. Satzoda and M. M. Trivedi, "On Performance Evaluation Metrics for Lane Estimation," in 22nd International Conference on Pattern Recognition, 2014, pp. 2625–2630.

IEEE Intelligent Vehicles Symposium, June 2015 (to appear)