

RoadLab: An In-Vehicle Laboratory for Developing Cognitive Cars

S.S. Beauchemin, M.A. Bauer, D. Laurendeau, T. Kowsari, J. Cho, M. Hunter, K. Charbonneau, and O. McCarthy

Abstract

Worldwide deaths from injuries are projected to rise from 5.1 million in 1990 to 8.4 million in 2020, with traffic-related incidents as the major cause of this increase. Intelligent Advanced Driving Assistance Systems (i-ADAS) provide a number of solutions to these safety challenges. We developed a scalable, in-vehicle, mobile i-ADAS research platform for the purpose of traffic context analysis and behavioral prediction, designed for understanding fundamental issues in intelligent vehicles. We outline our general approach and describe the in-vehicle instrumentation. We present a number of research challenges and early results, and outline future directions.

Layered Approach to Intelligent Vehicles

Our proposed computational model consists of four layers, with increasing levels of data abstraction (a code sketch of this layering follows the list):
• The innermost layer consists of the hardware and software required to capture vehicle odometry, sequences from visual sensors, and driver behavioral data.
• The second layer pertains to hardware synchronization, calibration, real-time data gathering, and vision detection processes.
• The third layer is where the data is transformed and fused into a single 4-dimensional space (x, y, z, t).
• The last layer makes use of the fused data to compare driver behavioral data with models of behavior that are appropriate given the current odometry and traffic conditions.
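As a rough illustration only, the Python sketch below models the four layers as stages of a pipeline; all class, method, and field names are hypothetical and do not reflect the actual RoadLab software.

```python
# Conceptual sketch of the four-layer data flow (names are illustrative,
# not the RoadLab API); layer bodies are placeholders.
from dataclasses import dataclass
from typing import Any


@dataclass
class FusedSample:
    """A single fused observation in the (x, y, z, t) space."""
    x: float
    y: float
    z: float
    t: float
    payload: Any  # e.g. a detected lane, vehicle, or gaze label


class SensingLayer:            # Layer 1: raw capture
    def capture(self) -> dict:
        return {"odometry": None, "frames": None, "driver": None}


class SyncDetectLayer:         # Layer 2: synchronization, calibration, detection
    def process(self, raw: dict) -> dict:
        return raw             # placeholder: timestamp alignment + detectors


class FusionLayer:             # Layer 3: fusion into a single (x, y, z, t) space
    def fuse(self, detections: dict) -> list[FusedSample]:
        return []


class BehaviorLayer:           # Layer 4: compare fused data against behavior models
    def evaluate(self, fused: list[FusedSample]) -> str:
        return "nominal"       # e.g. a safety-level label


def run_once() -> str:
    raw = SensingLayer().capture()
    detections = SyncDetectLayer().process(raw)
    fused = FusionLayer().fuse(detections)
    return BehaviorLayer().evaluate(fused)
```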

Camera Calibration

For visual sensors, it is critical to obtain precise calibration parameters, such as the lens distortion, the optical center, and the external orientation of the sensors with respect to each other. The RoadLab stereo calibration interface was designed for this process (see Figure below). The calibration process consists of two steps: intrinsic parameters are first estimated for each sensor and then, based on these, the extrinsic parameters are obtained for all possible sensor pairs.
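As an illustration of this two-step procedure, the sketch below uses OpenCV's standard chessboard-based routines: intrinsics per sensor first, then pairwise extrinsics with the intrinsics held fixed. The chessboard pattern, square size, and helper names are assumptions; this is not the RoadLab calibration interface.

```python
# Minimal two-step stereo calibration sketch with OpenCV (illustrative only).
import cv2
import numpy as np

PATTERN = (9, 6)     # inner corners of the chessboard target (assumed)
SQUARE = 0.025       # chessboard square size in metres (assumed)


def intrinsics(images):
    """Step 1: estimate camera matrix and distortion for one sensor."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, obj_pts, img_pts, size


def extrinsics(left_images, right_images):
    """Step 2: estimate rotation R and translation T for a sensor pair."""
    # Assumes the chessboard was detected in the same, corresponding views
    # for both sensors, so the point lists line up one-to-one.
    K1, d1, objp, imgp1, size = intrinsics(left_images)
    K2, d2, _, imgp2, _ = intrinsics(right_images)
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        objp, imgp1, imgp2, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```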

Stereo Depth Computation

All the image frames from the visual sensors are synchronized to within 125 μs. Once the synchronized frames are obtained, stereo depth maps are computed at frame rate, based on the calibration parameters (see Figure below).
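A minimal sketch of how depth could be recovered from a synchronized, calibrated frame pair is given below, using OpenCV rectification and semi-global block matching; the matcher settings are placeholder values, not those used on the platform.

```python
# Per-frame disparity/depth sketch from calibrated stereo frames (OpenCV).
import cv2
import numpy as np


def depth_map(left_gray, right_gray, K1, d1, K2, d2, R, T, size):
    # Rectification maps come from the calibration parameters
    # (in practice computed once and cached, not per frame).
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    l = cv2.remap(left_gray, m1x, m1y, cv2.INTER_LINEAR)
    r = cv2.remap(right_gray, m2x, m2y, cv2.INTER_LINEAR)

    # Semi-global block matching; parameters are assumed, tune for the rig.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(l, r).astype(np.float32) / 16.0

    # Reproject disparities to 3-D points (x, y, z) in the rectified frame.
    return cv2.reprojectImageTo3D(disparity, Q)
```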

Predictive Behavioral Model

Our conjecture is that the analysis of driver gaze direction (and other facial features), fused with knowledge of the environment surrounding the vehicle (and its odometry), leads to the possibility of predicting driving behavior over short time frames. For this purpose, we devise a Real-Time Descriptor (RTD) for a moving vehicle, essentially consisting of a CFS, a CSD, and a VSO descriptor. The Figure on the right shows the retroactive mechanism in which both the current and predicted descriptors (CSD, CFS, and VSO) assist in determining the safety level of the context derived from the current and predicted RTD (CFS is a Context Feature Set descriptor, including lane, vehicle, pedestrian, and sign properties; VSO is the Vehicle State and Odometry descriptor; and CSD is the Cognitive State of Driver descriptor).
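For concreteness, a hypothetical container for such a descriptor might look as follows; the field names simply mirror the acronyms above and are not an actual RoadLab data schema.

```python
# Hypothetical RTD container; fields echo the text above, not a real schema.
from dataclasses import dataclass, field


@dataclass
class CFS:                         # Context Feature Set
    lanes: list = field(default_factory=list)
    vehicles: list = field(default_factory=list)
    pedestrians: list = field(default_factory=list)
    signs: list = field(default_factory=list)


@dataclass
class VSO:                         # Vehicle State and Odometry
    speed_kmh: float = 0.0
    steering_angle: float = 0.0
    yaw_rate: float = 0.0


@dataclass
class CSD:                         # Cognitive State of Driver
    gaze_direction: tuple = (0.0, 0.0)
    attention_level: float = 1.0


@dataclass
class RTD:                         # Real-Time Descriptor at time t
    t: float
    cfs: CFS
    csd: CSD
    vso: VSO
```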

At the heart of the behavioral prediction engine is a Bayesian model which takes the current CSD, CFS, and VSO as inputs and predicts the actuation behavior of the driver over the next few seconds. It also gathers statistical information about driving decisions and errors in a Driver Statistical Record (DSR), which can be used over time to improve the prediction accuracy. The current CSD and CFS are in turn used to establish a Driver Memory of Surroundings (DMS), based on the attention level and gaze direction analysis of the driver. A General Forgetting Factor (GFF) is applied to the DMS as time elapses to reflect common characteristics of short-term visual memory. In addition, a Driver Cognitive Load factor (DCL) is inferred, based on the activities engaged in by the driver, which in turn impacts the DMS, among other things.
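As one possible reading of the GFF, the sketch below decays the confidence attached to each remembered object exponentially with elapsed time; the exponential form and the rate are assumptions chosen only to mimic short-term visual memory decay.

```python
# Illustrative General Forgetting Factor applied to a Driver Memory of
# Surroundings (DMS); the decay model is an assumption, not the RoadLab one.
import math


def apply_gff(dms, dt, gff_rate=0.5):
    """Attenuate the confidence of each remembered object after dt seconds."""
    decay = math.exp(-gff_rate * dt)
    return {obj: confidence * decay for obj, confidence in dms.items()}


# Example: two objects last seen 2 s ago; confidence roughly halves every 1.4 s.
dms = {"pedestrian_left": 0.9, "vehicle_ahead": 1.0}
print(apply_gff(dms, dt=2.0))
```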

In-Vehicle Laboratory

The design of the instrumented vehicle follows principles of sensor portability and computing scalability. Sensor portability is achieved by using vacuum devices to attach the instrumentation equipment, such as stereo camera rigs, LCD screens, and GPS units, to the interior glass surfaces of the vehicle (see Figure on the right), without the need to perform permanent modifications to the vehicle. The odometry is obtained from the OBD-II outlet located under the dashboard on the driver's side of the vehicle.
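For example, odometry such as vehicle speed and engine RPM can be polled through the OBD-II outlet with an off-the-shelf adapter; the sketch below assumes the third-party python-OBD package and is not part of the RoadLab instrumentation code.

```python
# Polling basic odometry over OBD-II with the python-OBD package (assumed).
import obd

connection = obd.OBD()                        # auto-detects the adapter port
speed = connection.query(obd.commands.SPEED)  # vehicle speed PID (0x0D)
rpm = connection.query(obd.commands.RPM)      # engine RPM PID (0x0C)

if not speed.is_null():
    print(f"speed: {speed.value}")            # a unit-aware quantity, e.g. km/h
if not rpm.is_null():
    print(f"rpm: {rpm.value}")
```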

Conclusion and Directions

We have developed a vehicle-independent, portable, and scalable in-vehicle instrumentation for i-ADAS. Our motivation to develop this in-vehicle research platform stems from the observation that while injuries per driven kilometer are in decline in developed countries, a reversed trend can be observed elsewhere in the world. Technologies such as i-ADAS have the potential to significantly reduce the burden of vehicle accidents and their consequences.