Argos: An Advanced In-Vehicle Data Recorder on a Massively Sensorized Vehicle for Car Driver Behavior Experimentation



IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 11, NO. 2, JUNE 2010 463

Argos: An Advanced In-Vehicle Data Recorder on a Massively Sensorized Vehicle for Car Driver Behavior Experimentation

Antonio Pérez, M. Isabel García, Manuel Nieto, José L. Pedraza, Santiago Rodríguez, and Juan Zamorano

Abstract—A crucial factor in traffic safety is driver behavior. A better understanding of driver actions will help in determining the most common reasons for car accidents. Therefore, research in this field helps to reduce accidents due to driver distraction. This paper presents Argos, which is a complex and powerfully computerized car to help researchers in the study of car driver behavior. The Argos system is an improved in-vehicle data recorder (IVDR) that allows recording many kinds of alphanumerical data such as the speed (vehicle data), the point of gaze (driver data), or the current distance to lateral road marks (environmental data). In addition, Argos can record up to nine simultaneous video images which are synchronized with the alphanumerical data. Argos can also generate and record different kinds of in-car light and audio stimuli, allowing an experiment supervisor to interact or to schedule specific actions to take place during an experiment.

Index Terms—Advanced Driver-Assistance Systems (ADAS), Advanced Vehicle Control and Safety Systems (AVCSS), driver behavior, feedback, in-vehicle data recorder (IVDR).

I. INTRODUCTION

FOR THE last 20 years, improving safety in vehicle traffic has been an important objective that has led many institutions and companies (governmental traffic agencies, vehicle manufacturers, etc.) to invest significant amounts of resources, mainly in improving road infrastructure [1], [2] and vehicle computerized subsystems [3], [4], with the purpose of reducing the loss of lives and the financial impact of car crashes. However, another crucial area of research is focused on analyzing driver behavior.

In Spain, factors such as distraction, drowsiness, and excessive speed are reported to have the strongest influence on accident statistics. The increasing use of devices such as mobile phones and GPS navigation systems has magnified the problem because they introduce additional sources of distraction [5]. Understanding driver behavior is therefore one of the most important research issues to improve traffic safety. This area involves research both on psychological aspects (human behavior) and automotive embedded system design and development, the latter aiming at providing behavior researchers with powerful and flexible experimental vehicles.

Manuscript received March 24, 2009; revised October 14, 2009 and February 1, 2010; accepted March 12, 2010. Date of publication April 19, 2010; date of current version May 25, 2010. The Associate Editor for this paper was A. Broggi.

The authors are with the Department of Computer System Architecture and Technology, Technical University of Madrid, Facultad de Informática, Campus de Montegancedo, 28660 Madrid, Spain (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TITS.2010.2046323

There is a large body of research on characterizing driver behavior, part of which uses data collected from driving simulators. Sekizawa [6] uses these data to model human driving behavior based on a stochastic model. Liang [7] proposes a method to detect driver distraction in real time using eye movements and driving data, mainly the steering wheel angle and lane position, while drivers interact with an auditory in-vehicle system. In the same way, Giusti [8] detects driver sleep-attacks by using the data acquired from the steering wheel movements. Comte [9] focuses his work on attempting to reduce the drivers' tendency to exceed the speed limits. In his work, targets are presented randomly in a visual scene, providing a further analysis of driver response times to these targets as well as their attention to surprise events.

Other studies are based on actual data acquired in specific driving situations such as roadway intersections. Chan's work [10] focuses on the analysis and synthesis of data acquired with radar sensors from several intersections to improve safety at roadway intersections. Results are verified by means of video images when the radar data are ambiguous or erroneous. Rakha [11] uses an instrumented vehicle equipped with a differential global positioning system (DGPS) and a data-acquisition system to characterize driver behavior based on the perception-reaction time (PRT) at high-speed signalized intersections. The impact of driver age, gender, and time to intersection on PRT is also analyzed. The experiment consists of a controlled-road study with no surrounding vehicles. The data acquired are mainly the vehicle speed, the pressing force on the accelerator and brake pedals, and the signal indication coming from a controller box installed at the intersection. Other authors like Ma [12] and Doshi [13] use instrumented vehicles to acquire data to characterize certain aspects of driver behavior. The former builds a simulation model of driver behavior in car-following, and the latter tries to predict intention to change lanes by analyzing the driver's head movements and eye gaze.

1524-9050/$26.00 © 2010 IEEE

Other kinds of systems that use actual driving data are event data recorders (EDRs). They have been used for many years to record information related to vehicle crashes. These systems record certain information from a vehicle over a short period of time (some seconds) before, during, and after a crash. However, they do not usually include audio or video data, location, or other external conditions. In a crash event, some on-board computers can record information such as the vehicle speed, the brake force application, the accelerator position, the safety belt usage, and the airbag performance. The main purpose of EDRs is to record data for retrieval after a crash. This information is useful to auto designers in improving the safety equipment and performance-related functions of the vehicle, as well as to accident investigators in determining the cause of an accident.
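The before/during/after recording window of an EDR is conventionally implemented as a circular buffer that continuously overwrites old samples and is frozen a short time after a trigger fires. A minimal sketch of this idea follows; the class and parameter names are illustrative, not taken from any production EDR:

```python
from collections import deque

class CrashRingBuffer:
    """Keeps the last `pre_samples` samples; after a trigger, records
    `post_samples` more and then freezes the combined window, mirroring
    the seconds-before/during/after recording scheme of an EDR."""
    def __init__(self, pre_samples, post_samples):
        self.pre = deque(maxlen=pre_samples)  # continuously overwritten ring
        self.post = []
        self.post_samples = post_samples
        self.triggered = False
        self.frozen = None                    # final window, set once

    def record(self, sample):
        if self.frozen is not None:
            return                            # recording already finalized
        if not self.triggered:
            self.pre.append(sample)           # normal driving: ring overwrite
        else:
            self.post.append(sample)          # post-crash samples accumulate
            if len(self.post) == self.post_samples:
                self.frozen = list(self.pre) + self.post

    def trigger(self):
        """Called when a crash condition (e.g., airbag deployment) is sensed."""
        self.triggered = True
```

Feeding the buffer a stream of speed or pedal samples and calling `trigger()` at the crash instant yields, once the post-crash budget is spent, a frozen window ready for retrieval by investigators.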

A variety of EDRs have been developed in the automotive industry. Chidester [14] and Correia [15] report the EDRs installed in GM and Ford cars, as well as their crash-data-retrieval systems. The United States National Highway Traffic Safety Administration (NHTSA) has been using EDRs to support its crash-investigation program for several years, having engaged in developing a uniform standard for EDRs.

Recently, in-vehicle data recorders (IVDRs) have emerged that monitor driver behavior under more diverse conditions and not only in crash events. Toledo [16] has developed a system that records vehicle data to identify various maneuver types that the vehicle performs. These maneuvers are used to calculate several risk indices such as individual risk index or driver speeding behavior. This system can provide some kind of feedback to the driver through a text message (SMS) or using an in-vehicle display unit and has been installed in 191 vehicles used on the job in a single company.

A different approach to IVDRs is proposed by Intempora with the commercial name "RT-Maps" [17]. RT-Maps is a general-purpose, flexible, and scalable development environment aimed at applications handling multiple asynchronous sensors and actuators, and it is not restricted to the automotive industry. Instead of providing a system adapted to the final user requirements, it is a powerful framework that helps to develop new multisensor-based applications.

The NHTSA has sponsored the 100-car naturalistic driving project [18], in which an IVDR has been installed in 100 vehicles. The system continuously records driver and vehicle data, as well as video images, and uses radar sensors and lane trackers. The goal of this project is to obtain detailed information from a large volume of driving data on crashes, near-crashes, and other incidents to create an event database. The video and data can be replayed multiple times in order to fully understand the nature of each event.

From 1987 to 1991, the Argos project was developed by the Department of Computer System Architecture and Technology (DATSI) at the Technical University of Madrid (UPM), funded by the Spanish Traffic Agency (DGT) [19]. The aim of that project was to promote experimental research about driver behavior under realistic driving conditions in combination with controlled artificial environments. An instrumented car was developed consisting of an embedded computer that recorded mechanical and environmental parameters as well as video images synchronized on a time code basis [20]. The system also included several software applications allowing the analysis of the collected data a posteriori. Argos was used in many research projects to study the relationship between driver attention and speed control [21] or the impact of using mobile phones on road safety [22].

In this paper, we describe the details of the new Argos system developed to replace the old one used until 2005. This system continuously records not only vehicle data but also driver information, such as the point of gaze, and environmental information, such as the lateral distance to other vehicles and to the lateral road markings. At the same time, several video cameras showing the interior and exterior of the vehicle provide synchronized information about the driving scene, and a cursor superimposed on the main road video scene reveals the gaze direction online. A major improvement of the new system is the ability to generate light and audio stimuli at specific or scheduled time intervals in order to interact with the driver. Comparing Argos with the previously mentioned Intempora RT-Maps, the latter system lacks direct support for several of the subsystems currently available in the new Argos, mainly those related to data or image processing, such as the radar scanner, the point-of-gaze estimation subsystem, or the lateral positioning system. Another important difference between RT-Maps and the new Argos system is the availability of a simple but powerful Experiment Specification Language (ESL), allowing the development of complex experiments in a very efficient way.

This paper is organized as follows: Section II discusses the Argos system requirements and architecture and the different subsystems that constitute Argos. Section III describes some implementation issues. Section IV describes the system usage. Section V describes some examples of real applications. Finally, Section VI presents some conclusions based on the authors' experience.

II. SYSTEM REQUIREMENTS AND ARCHITECTURE

The Argos system was intended to be used in specific long experiments with a maximum duration of half a working day. Each experiment must be well defined, be carried out by a vehicle driver, and be directed by an experiment supervisor. The experiment usually consists of several parts or stages linked in a sequential, iterative, or complex way. The supervisor can decide on the fly to introduce small changes to the experiment initially scheduled. Once the experiment has finished, the supervisor or another specialized professional (usually a psychologist) analyzes and processes the collected data.

The new Argos had to be implemented in a standard but conveniently instrumented medium-sized vehicle fulfilling certain requirements.

• It had to allow multiple signal acquisition and multiple online images (videos). Images and signals had to be well synchronized in order to ensure a precise correlation.

• The system had to be able to generate and present light and audio stimuli at scheduled time intervals or specific instants in order to interact with the driver.

• The system had to provide methods for the supervisor to interact with the system to modify the underlying logic of the current experiment. Specifically, it had to provide the supervisor with a flexible language to cancel experiment stages and modify the sequential order, the number of iterations, or other parameters of the experiment stages.

Starting from the above set of requirements, the new Argos system was initially devised as an in-car multicomputer system built on top of standard PC subsystems interconnected through a conventional Ethernet for ease of upgrading. The system is based both on commercial subsystems and barebone PCs with the appropriate project-developed software. The barebone PCs are based on a Linux OS to ensure full software control and an expected long working life.

From a hardware point of view, Argos consists of eight main computers. Two are complex customized commercial subsystems devoted to acquiring information about the environment—including other cars or pedestrians—and to computing and recording the car driver's point of gaze based on his/her ocular movements. Two other subsystems are microcontroller-based computers devoted to acquiring information from specific sensors and from the vehicle controller area network (CAN) busses and to controlling a light-emitting diode (LED) array which generates visual stimuli. Finally, the other four computers are conventional rack form-factor PC computers running a Fedora Core Linux distribution and the software modules developed for the Argos project.

Argos hardware has been designed based on functionality and interface requirements. The following equipment—sensors and actuators—is controlled by Argos:

• Standard vehicle hardware sensors feeding data to the vehicle CAN-busses: They are already installed in a conventional vehicle (speed, engine variables, ambient temperature, wheel drive turn, etc.).

• Additional specific sensors to measure the lateral distance to other vehicles, the gear shift position, the accurate speed, and the xyz accelerations.

• Vehicle driver and experiment supervisor interface devices. Both the supervisor and the vehicle driver need to communicate with the Argos system. Supervisor communication is supported by a wireless keyboard, a three-display system, and a small set of specific push buttons. The driver is informed through a small monitor installed on top of the original instrument panel and responds via a set of push buttons (see Fig. 1). Some peripherals are shared between the supervisor and the car driver: a speaker, a small monitor, and a set of tachistoscopic glasses.¹

• Video cameras. Up to eight medium-resolution phase-alternating line (PAL) cameras and one high-resolution (1280 × 1024) camera are controlled by Argos. Another set of cameras is directly connected to specific subsystems, such as the webcam from the Radar Scanning System (RSS) and the eye camera from the Eye Tracking Subsystem (ETS).

Argos software consists of 20 interconnected software modules running on the eight computers shown in Fig. 2, whose main functions are described in the following subsections.

¹These are glasses equipped with a rapid shutter (liquid-crystal) mechanism that can be triggered separately for each eye.

Fig. 1. Argos driving post. Emulated instrument panel and other elements.

A. Stimuli Display System (SDS)

The SDS is a small computer designed specifically for Argos. It is based on a Motorola 8-bit microcontroller with an I2C bus to control a large high-luminosity LED array and connected through an RS-232 interface to the Control and Acquisition System (CAS) to receive lighting instructions. The high-luminosity LEDs are arranged in two rows and eight columns of 5 × 7 cells, each one controlled independently and devoted to projecting a character or a shape on the windshield.

The SDS contains a single software module, i.e., Windshield Stimuli Presentation, that is responsible for interpreting and executing the commands received from the CAS subsystem. Instructions from the CAS specify the shapes to be displayed on each cell according to the Roadmap Program being executed in the User Interface and Storage System (UISS). A powerful and flexible specification language has been defined in Argos to be used in the Roadmap Program. It allows the display of text, moving arrows, spot lights, etc., and can be configured to display the programmed stimuli at specific time instants or whenever certain data fulfill a given condition.
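The cell layout described above—two rows and eight columns of independently controlled 5 × 7 cells—can be modeled as a grid of small bitmaps. The sketch below is purely illustrative: the actual CAS-to-SDS command format and the SDS glyph set are not published in this excerpt.

```python
ROWS, COLS = 2, 8  # the SDS array: two rows, eight columns of 5x7 cells

# A tiny hypothetical glyph table; real firmware would hold a full font.
# Each glyph is 7 lines of 5 bits (one bit per LED).
GLYPHS = {
    " ": [0b00000] * 7,
    "!": [0b00100] * 5 + [0b00000, 0b00100],
}

class StimuliArray:
    """Holds the on/off state of every LED in the 2x8 array of 5x7 cells."""
    def __init__(self):
        self.cells = [[GLYPHS[" "] for _ in range(COLS)] for _ in range(ROWS)]

    def display(self, row, col, char):
        """Interpret a hypothetical (row, col, char) command, in the spirit
        of the Windshield Stimuli Presentation module executing CAS
        instructions; unknown characters fall back to a blank cell."""
        if not (0 <= row < ROWS and 0 <= col < COLS):
            raise ValueError("cell out of range")
        self.cells[row][col] = GLYPHS.get(char, GLYPHS[" "])

    def lit_leds(self):
        """Total number of LEDs currently lit across all cells."""
        return sum(bin(line).count("1")
                   for r in self.cells for cell in r for line in cell)
```

A moving-arrow stimulus, for instance, would amount to displaying an arrow glyph on successive columns at timed intervals.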

B. Sensor Management System (SMS)

The SMS is a microcontroller-based computer with hardware for communicating with special sensors and for adapting their electrical signal levels. It is based on a Fujitsu 32-bit RISC microcontroller and directly manages 32 digital and 16 analog I/O lines. It has several timers and a flexible communication system with several CAN-bus controllers, one I2C bus, and three serial lines.

The SMS contains a single software module, i.e., Sensor Management, that is responsible for gathering most of the data managed in driving session experiments. It collects digital as well as analog data supplied by all the sensors spread across the vehicle—over 60 different signals. Some of the data are directly obtained by snooping the high-speed traction CAN-bus—the position of the steering wheel and its rotating speed, accelerator position, engine throttle, use of indicator lights or handbrake, oil and coolant temperature, battery voltage, etc. Other data are obtained from the comfort CAN-bus, such as actions on several controls (wipers, air conditioning, doors, windows, etc.), outdoor luminosity, etc.

Fig. 2. Argos hardware architecture.

The SMS is equipped with several sensors to obtain additional data not supplied by the vehicle, as well as data with more demanding requirements according to Argos specifications. It also performs all the signal conditioning required to uniformly adapt all the sensor magnitudes to the ranges of the analog-to-digital converters. The sensors added to the vehicle include the following.

• Doppler sensor: A 24-GHz Doppler sensor is attached to the underbody of the car to measure the instantaneous speed more accurately than the one snooped from the CAN-bus. It also measures the distance covered by the vehicle.

• Lateral distance: A set of two ultrasonic sensors is attached to the rear doors to measure the distance to objects on both sides of the vehicle.

• Accelerometer: Three accelerometers monitor the vertical, horizontal, and longitudinal instantaneous acceleration of the vehicle. This information is also available via the CAN-busses but without the required accuracy.

• Braking pressure: An effort load cell is installed on the brake pedal to measure the pressure exerted by the driver. These data complement the on-off information already provided by the vehicle.

• Environmental conditions: Three additional sensors measure environmental conditions such as luminosity, temperature, and noise inside the vehicle. They are connected to the SMS through the I2C bus.

Additionally, the SMS monitors six push buttons installed in the vehicle to register interactions with the driver and the supervisor. Four push buttons are located on the steering wheel to be easily pressed by the driver according to the conditions established in the experiments. For example, he/she must activate a button whenever some prefixed speed is being exceeded or when some colored light is projected on the windshield. Both the condition being satisfied and the action on the button are registered by the system.

Two other push buttons are located near the supervisor post, and they can be used to mark whatever instants, driver reactions, events, etc. the supervisor may consider relevant to the experiment.

Data acquired by the SMS have very different periods of variation, ranging from centiseconds to hours, or they may show no variation during the whole experiment. To avoid sending and storing irrelevant information, the SMS maintains the current value of every signal, as well as a flag signaling whether it has changed.

The SMS is connected to the CAS by a serial line to send all the data gathered to be stored in the system. A communication protocol with error-detection and correction capabilities has been defined to allow periodic sending of the packets with the new values of signals showing variation. Usually, only a few values have to be sent every centisecond.
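The change-flag scheme described above is a standard delta-reporting pattern: cache the last value of every signal, mark it dirty on change, and flush only dirty signals at each periodic tick. A minimal sketch follows; the signal names and the dictionary-shaped "packet" are invented for illustration, not the Argos wire format.

```python
class SignalCache:
    """Tracks the current value of every signal and flags changes, so each
    periodic tick sends only the signals whose values actually moved."""
    def __init__(self):
        self.values = {}   # signal name -> last known value
        self.dirty = set() # names whose value changed since the last flush

    def update(self, name, value):
        """Called whenever a sensor or CAN-bus reading arrives."""
        if self.values.get(name) != value:
            self.values[name] = value
            self.dirty.add(name)

    def flush(self):
        """Build the periodic packet: only changed signals; clear the flags."""
        packet = {name: self.values[name] for name in self.dirty}
        self.dirty.clear()
        return packet
```

With over 60 signals but few of them changing in any given centisecond, each periodic packet stays small, which is what makes the centisecond serial-line cadence feasible.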

C. Eye Tracking System (ETS)

The ETS is a customized version of commercially available equipment (CAR-ETS-PC, by "Cornelia Scherbarth—Eyetracking Devices," Germany) based on a standard rack-mounted MS-Windows PC.

The ETS contains a single software module, i.e., Point of Gaze Identification, which is responsible for keeping track of the gaze of a person and marking it on a video image acquired from a position close to the observer's viewpoint. It has been adapted to the vehicle to follow the driver's point of gaze in such a way as to alter the driving conditions as little as possible. Some modifications were made to the commercial version of the ETS to match the Argos requirements.

D. Control and Acquisition System (CAS)

The CAS is based on a standard rack-mounted Linux-PC with a dual graphics adapter and an additional serial line controller. It is one of the four main Argos computers, and it manages the acquisition of numerical data, generates a global timing used to synchronize all the information stored in the system, and presents the stimuli to the driver.

The CAS is connected by serial lines to the SDS, to the SMS, and to the ETS. It also uses a serial line to receive the global position of the vehicle, which is obtained very accurately by a differential GPS. Whenever the CAS receives a data packet from a serial line, it marks it with a timestamp and resends the data through the Ethernet to be stored by the UISS.

The time base used to synchronize all the systems is generated by using the real-time features of RT-LINUX, which allows the periodic delivery of time messages without drift or jitter. In order to preserve this deterministic behavior, an additional serial line is used to broadcast time messages every centisecond to all the other systems. In this way, time messages do not suffer the nondeterministic delays of Ethernet and are received by the other systems with a constant delay of 2.7 ms. The absolute date of messages has a resolution of 10 ms and is used to time stamp data and stored frames. As a result, data are synchronized with a global centisecond-resolution clock.
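On the receiving side, the scheme above reduces to a simple rule: remember the latest broadcast tick and stamp every locally produced sample with it. Because the serial broadcast delay is constant (2.7 ms) for all subsystems, data stamped this way remain mutually aligned at the 10 ms resolution of the time base. A hypothetical receiver-side sketch:

```python
class GlobalClock:
    """Receiver-side view of the CAS time base: tracks the centisecond
    tick broadcast over the dedicated serial line and stamps local data
    with it. (The constant 2.7 ms line delay is identical for every
    subsystem, so it cancels out of any cross-subsystem comparison.)"""
    def __init__(self):
        self.last_tick_cs = None  # latest absolute time, in centiseconds

    def on_time_message(self, tick_cs):
        """Called once per broadcast, i.e., every centisecond."""
        self.last_tick_cs = tick_cs

    def stamp(self, sample):
        """Attach the global timestamp to a locally acquired sample."""
        if self.last_tick_cs is None:
            raise RuntimeError("no time message received yet")
        return (self.last_tick_cs, sample)
```

The names here are assumptions for illustration; the paper specifies only the broadcast period, the constant delay, and the 10 ms stamp resolution.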

The CAS is also responsible for presenting the stimuli to the driver according to the Roadmap Program and supervisor decisions. There are different kinds of stimuli: predefined sounds, shapes to be projected by the SDS, video images obtained by some of the installed cameras, and graphics reproducing either a part or the whole instrument panel. The CAS can also occlude the passenger's vision by means of the tachistoscopic glasses connected to its parallel port in order to carry out experiments to estimate speed or time.

The CAS manages two LCD displays on which graphics and video images can be displayed. One is placed in front of the vehicle instrument panel so that it can be replaced by graphics generated in the UISS. The other is placed over the GPS navigator. The graphics are directly displayed by the UISS using the X Window protocol. Video images received by the CAS are compressed by the system capturing them: either the Image Storage and Management System (ISMS) or the Lateral Position System (LPS). Video transport is based on a protocol specifically designed in Argos to reduce the transmission latency. The CAS has to reconstruct, decompress, and render on the selected display every frame of the video images.

The software of the CAS consists of the following modules.

• Control and Signal Acquisition. This is responsible for collecting and synchronizing signals from the vehicle (through the SMS), from the ETS, and from a differential GPS.

• Time Base Generation. This is responsible for generating and distributing the Argos system global time base.

• Video Image Visualization. This is responsible for decompressing and rendering images received from the two image capture systems (ISMS and LPS in Fig. 2).

E. Lateral Position System (LPS)

The LPS is based on a standard rack-mounted Linux-PC with specific hardware modules: two raw video frame grabbers and a compressing 4-input video frame grabber. It contains the following software modules.

• Lateral Position Estimation. This is responsible for estimating the actual distance of the vehicle to the lateral road marks. This module works with the images received in real time from two dedicated cameras.

• Beacon Detection. This is responsible for detecting a visual beacon in real time. The beacon is a small geometric picture designed specifically for this task. A synthetic signal is generated when the beacon, which is detected by means of image processing software, is sufficiently close to the vehicle trajectory.

• Video Image Acquisition. This is responsible for organizing, tagging, compressing, and storing the high-bandwidth digital video streams received from all the Argos cameras.

• Video Image Management and Distribution. This is responsible for extracting and distributing, through the on-board Gigabit network, digital video images from the Video Image Acquisition module to the Video Image Visualization modules.

The LPS is devoted to performing two kinds of video image management. Up to four PAL cameras can be connected to the LPS. Two are so-called lateral position cameras. They are installed in the place of the front fog lamps in order to capture the road lane marks on both sides of the vehicle. The other two can be selected among the various cameras distributed across the vehicle.

Distance estimation to the lane marks is done in real time by means of image processing algorithms applied to the uncompressed lateral position images. The LPS sends packets to the UISS with the estimated distance at a rate of 20 times per second. Each packet includes the global timestamp in order to allow integration of the data it contains within the session database. It can also perform image processing algorithms to detect beacons as required in some experiments. In such cases, the LPS sends timestamped packets to the UISS to signal that the vehicle is passing a beacon.
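A timestamped distance packet of the kind described above can be sketched with a fixed binary layout. The field layout below (timestamp in centiseconds, left and right distances in centimeters) is an assumption for illustration; the actual Argos packet format is not given in the paper.

```python
import struct

# Hypothetical wire layout for an LPS distance packet:
# little-endian, 4-byte timestamp (centiseconds) + two 2-byte distances (cm).
PACKET_FMT = "<IHH"

def make_distance_packet(timestamp_cs, left_cm, right_cm):
    """Pack one 20 Hz lateral-position estimate with its global timestamp,
    so the receiver can merge it into the time-ordered session database."""
    return struct.pack(PACKET_FMT, timestamp_cs, left_cm, right_cm)

def parse_distance_packet(payload):
    """Inverse of make_distance_packet: returns (timestamp, left, right)."""
    return struct.unpack(PACKET_FMT, payload)
```

Carrying the global timestamp inside every packet is what lets the UISS interleave 20 Hz distance estimates with signals arriving at other rates.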

Additionally, the video images captured from the four cameras have to be stored in the LPS as driving session information. Thus, the cameras are not only connected through uncompressing frame grabbers but also through a four-channel frame grabber with PAL resolution (704 × 576) that has compression features and stores the images with a reduction factor of up to 20:1. A timestamp is added to every stored frame. Also, a timestamp-based index is generated and stored in the LPS for each captured video image in order to allow quick location of video sequences. The supervisor can decide to show one or more of these images on the displays connected to the CAS and to the UISS. The images can be shown or hidden dynamically during the driving session. When needed, the LPS sends the compressed video images to the requesting system through the Ethernet using the Argos protocol.
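A timestamp-based index like the one mentioned above typically maps a requested global timestamp to the storage position of the latest frame at or before it; since frames arrive in time order, a binary search suffices. A minimal sketch (the index structure is an assumption, not the Argos implementation):

```python
import bisect

class FrameIndex:
    """Timestamp index over stored video frames: maps a requested global
    timestamp to the byte offset of the latest frame at or before it."""
    def __init__(self):
        self.timestamps = []  # stays sorted: frames arrive in time order
        self.offsets = []

    def add(self, timestamp_cs, byte_offset):
        """Record where the frame stamped at timestamp_cs was stored."""
        self.timestamps.append(timestamp_cs)
        self.offsets.append(byte_offset)

    def locate(self, timestamp_cs):
        """Binary-search for the frame covering the requested instant."""
        i = bisect.bisect_right(self.timestamps, timestamp_cs) - 1
        if i < 0:
            raise KeyError("no frame at or before that timestamp")
        return self.offsets[i]
```

This is what allows a stored session to be opened directly at, say, the instant a beacon packet was logged, without scanning the compressed stream.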

F. Image Storage and Management System (ISMS)

The ISMS is based on a standard rack-mounted Linux-PC, whose main purpose is to capture, store, and distribute video images obtained from up to five video cameras. It has a FireWire port and a compressing four-input video frame grabber. It contains the following software modules:


• Video Image Acquisition: This was described in Section II-E.

• Video Image Management and Distribution: This was already described in Section II-E.

• Radar Visualization Interface. This is an independent application for visualizing in real time the objects identified by the Radar Scanning System. This software module runs under an MS-Windows virtual machine installed in the ISMS.

Four of the five video images managed by this system are captured by a frame grabber identical to the one installed in the LPS, and they are managed as described in Section II-E. One of these cameras points at the driver, and a second one is located in the back of the vehicle to take the driver's view through the rearview mirror. A third camera can be located in several designated places inside the vehicle. The fourth video image is not directly taken from a camera but from the ETS eye video output.

The fifth video image is obtained from a camera located as close as possible to the driver's head (scene camera) and is especially important for many experiment purposes. Therefore, it has more demanding requirements and a more complex management. The scene image is taken from a high-resolution FireWire digital camera. Images can be captured in two different sizes according to the aim of the experiment. The usual size is 1280 × 1024 at a frame rate of 25 frames/s. A cropped 1280 × 512 image can also be recorded at a double rate of 50 frames/s, covering just the windshield area—this format is needed when the experiment involves optical flow analysis. The scene camera provides images in MPEG format with a compression factor of up to 20:1.

The ISMS stores and distributes the scene image in a similar way to the other images. In addition, it decompresses every frame of the scene image and renders it on a graphics adapter whose TV output is connected to the ETS scene camera input.

G. User Interface and Storage System (UISS)

The UISS is the Argos front end that provides user access to the whole system. It is based on a standard rack-mounted Linux-PC, and it is the only one with conventional human-interface devices (keyboard, pointer). It also has a multi-headed display, a quad graphics adapter, and a magnetic tape.

The UISS coordinates all the computers and software modules and sends instructions to configure, start, manage, and finish the driving sessions. The instructions may be generated by direct actions of the supervisor or by the Roadmap Program executed by the UISS. It provides a means of interaction to the users of the system, mainly the supervisor, to manage driving sessions, as well as for maintenance tasks, backups, etc. The interaction is performed through the supervisor post, consisting of a wireless keyboard with an incorporated trackball and three LCD displays located in front of the rear seat. The UISS communicates through the Ethernet with the other systems.

The UISS executes the user-interface application that allows the supervisor to perform the initial procedures required to initiate a driving session, such as naming it, selecting a Roadmap Program, calibrating the ETS, or choosing the video images to be included in the session. Once a session is started, the UISS shows the values of the signals the supervisor wants to monitor and the selected video images. If the scene image is selected, it superimposes a mark on the coordinates of the driver's point of gaze received from the ETS. The user interface also allows the supervisor to activate certain stimuli and to interact with the Roadmap Program interpreter to control the experiments. After a session has finished, the UISS allows the supervisor to extract relevant selected data into text-formatted files, as well as to reproduce selected fragments of the stored sessions.

The UISS is also responsible for storing all the information required for performing experiments (generated in the UISS by the supervisor or the Roadmap Program interpreter) and all the data collected by Argos during the driving sessions [received from the CAS, the LPS, and the Radar Scanning System (RSS)]. All of this information is synchronized by means of the timebase generated by the CAS.

The software of the UISS consists of the following modules.

• Synchronized Storage. This is responsible for buffering and storing all signal data as well as binary and textual information in acquisition-time order. This system needs buffer capacity so as to hold enough data to reorder signal sets received out of order from different subsystems.

• Roadmap Program Interpreter. This is responsible for interpreting the Roadmap Program written in the Argos Experiment Specification Language (ESL). This module can be considered the software manager of the experiment, as it is only controlled by direct commands provided by the supervisor. The Roadmap Program communicates with all the other modules, decides preprogrammed actions in response to acquired signals, and takes the overall control of the experiment.

• Dialog Control. This is responsible for communicating with the experiment supervisor. It has to generate different views of signals being acquired, recognize supervisor commands and driver interaction, as well as communicate with other modules, particularly with the Roadmap Program interpreter.

• Signal Postprocessing Subsystem. This is responsible for calculating certain parameters from acquired signals. For instance, this module has to select the object that is closest to the vehicle from the set of objects identified and located by the RSS.

• Virtual Machine Interface. This is the client side of the virtual machine running on the ISMS.

• Video Image Visualization. This was already described in Section II-D.
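The in-order storage performed by the Synchronized Storage module can be pictured as a small bounded reorder buffer: records arrive from the subsystems tagged with the CAS timebase and are released in timestamp order once enough of them are held. The following Python sketch illustrates the idea only; the class and its names are hypothetical and do not reflect the actual Argos code.

```python
import heapq

class ReorderBuffer:
    """Buffers timestamped records from several subsystems and releases
    them in acquisition-time order once more than `depth` items are held.
    Illustrative sketch; the real record layout is not specified here."""

    def __init__(self, depth=64):
        self.depth = depth
        self._heap = []   # min-heap ordered by (timestamp, arrival sequence)
        self._seq = 0     # tie-breaker so equal timestamps never compare records

    def push(self, timestamp, record):
        """Accept one record; return any records now safe to store in order."""
        heapq.heappush(self._heap, (timestamp, self._seq, record))
        self._seq += 1
        released = []
        while len(self._heap) > self.depth:
            ts, _, rec = heapq.heappop(self._heap)
            released.append((ts, rec))
        return released

    def flush(self):
        """Drain the remaining records in timestamp order at end of session."""
        out = []
        while self._heap:
            ts, _, rec = heapq.heappop(self._heap)
            out.append((ts, rec))
        return out
```

A deeper buffer tolerates larger delays between subsystems at the cost of latency before data reach the disk.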

H. Radar Scanning System (RSS)

The RSS is a customized commercial embedded computer controlling a high-performance four-beam radar scanner located at the front of the vehicle. It contains a single software module, i.e., Vehicle Detection, Identification and Registering, which is responsible for detecting and tagging, in real time, environment limits and moving objects (other vehicles or pedestrians) in the proximity of the experimental vehicle. It communicates through the Ethernet with the UISS, which stores the received data after being timestamped.

Fig. 3. Interior arrangement.

The RSS interface is based on a proprietary Windows application running on a virtual machine installed in the ISMS and managed from the supervisor post.

III. IMPLEMENTATION ISSUES

The Supervisory Control and Data Acquisition system described in the previous section had to be implemented on a conventional vehicle in such a way that the impact on experimental drivers is minimized and they feel as close as possible to driving their own vehicles. Therefore, challenges related to the integration of cameras, stimuli presentation devices, sensors, and computers arise. These integration problems are both ergonomic and mechanical, i.e., the installed subsystems not only have to be installed in a harmonized way so that the driver view is similar to the view in a standard vehicle, but they also have to be safe in case of a traffic accident. Therefore, installation of the equipment was delegated to a company with much experience in car prototyping. Moreover, there is another major problem related to on-board systems: the power supply.

A. Power Supply

Preliminary estimates showed that the power drained by the system could reach 2000 W. It should be noted that the whole system is built up of eight main subsystems, nine cameras, five displays, and some peripheral devices such as a GPS, a communication switch, and sensors. Some sensors are fairly simple, such as accelerometers, but others, such as the odometer, are quite complex. Moreover, five of the eight main subsystems are in charge of complex activities such as image processing, which demands great processing power and, thus, great power consumption.

In order to provide such an amount of power, it was necessary to replace the alternator with a larger one, which is able to supply 180 A at 12 V, or 2160 W. Of course, the input voltage of every system had to be 12 V, and thus, conventional computer systems had to be equipped with 12-V power supply units.

However, an alternator does not supply its maximum power at low throttle, and thus, the installed alternator was not sufficient by itself. Therefore, an additional battery of 200 Ah had to be installed to supply extra power at low throttle and to cope with possible power drain peaks. In fact, it has to supply about 60 A at low throttle to complement the alternator, and therefore, the battery is partially discharged during the experiments and must be charged overnight.

Finally, a 220-V AC to 12-V DC 150-A power supply was installed in order to avoid having to start the engine during development and maintenance activities. During testing activities, it was possible to measure the actual power drain of the whole system, and it is close to 140 A. As a result, there is some spare power for system upgrades.
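The power figures above can be cross-checked with simple arithmetic. The sketch below uses only the numbers quoted in the text; the 50% usable-capacity figure for the lead-acid battery is our assumption, since the paper does not state the depth of discharge actually tolerated.

```python
def battery_runtime_h(capacity_ah, drain_a, usable_fraction=0.5):
    """Hours a battery can supply `drain_a` amperes before reaching the
    assumed depth-of-discharge limit (0.5 is a common conservative
    figure for lead-acid batteries; the paper does not state one)."""
    return capacity_ah * usable_fraction / drain_a

# Alternator: 180 A at 12 V gives the 2160 W quoted in the text.
alternator_w = 180 * 12

# 200-Ah battery complementing the alternator with ~60 A at low throttle:
low_throttle_hours = battery_runtime_h(200, 60)   # roughly 1.7 h of margin
```

This margin is consistent with the need to recharge the battery overnight after a day of experiments.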

B. Vehicle

The hardware systems to be installed require a lot of space. In order to accommodate the five PC computers and other hardware systems, such as the Gigabit switch or the SMS, and provide room for future upgrades, it was necessary to install two 483 × 686 × 704 mm racks. It is also worth mentioning the space needed for the supervisor's control post, which has to accommodate three displays with sufficient room left to comfortably manage the wireless keyboard and pointer. There are also other bulky devices which need to be installed, such as the additional battery, the power supply switches, or the RSS system.

Therefore, a minivan was selected as the vehicle for the Argos system; the rear bench and the left part of the middle bench were removed, leaving only three seats in the vehicle: two in the front and one more behind. As can be seen in Fig. 3, this arrangement provides enough room to accommodate the supervisor's control post, all the subsystems, and, of course, the driver.

The selected minivan was a SEAT Alhambra with a top-range engine (2.8-l V6, 150 kW) to cope with the extra weight and the power drained by the power supply system. In this way, it behaves like a vehicle with an average engine.

IV. SYSTEM USAGE

The main goal of the Argos system is to serve as a tool to facilitate experimental research about driver behavior in realistic and fully controlled conditions, in combination with artificial environments. Each experiment requires a driver whose behavior is to be studied, as well as a supervisor with expert knowledge both in operating the vehicle systems and in monitoring driver behavior. The supervisor is responsible for the following experimental specifications.

• What parameters have to be studied?
• How do we split the experiment into different stages?
• What objectives have to be reached at each stage?
• How does the system interact with the driver?
• How is the acquired information used to obtain conclusions about driver behavior?

A. Preparing the Experiment

Before starting the driving session, the supervisor has to select, through the Argos user interface, the images to be acquired as well as the experiment control program (the Roadmap Program). This program formally specifies the proposed experiment, and it is written in ESL: a language designed for the Argos project to help the session supervisor to formally specify complex and controlled experiments. This language has the following features.

• It provides the simple data types of classical programming languages: integer, bit-field integer, and floating-point numbers. The supervisor can define complex data types. Some complex types are predefined and represent several icons the program can display on the emulated instrument panel, as well as sound or luminous stimuli patterns.

• It is a tightly typed language that prevents the programmer from introducing errors into the experiment specification program.

• It has the classical imperative programming language control structures: do while, if then else, and for. These structures allow the program to branch or to iterate, depending on several conditions considered in the experiment.

• It provides the user with several embedded functions. These functions allow the program to convert data from one type into another, to get instances of the most frequently used random variables, and to interact with the driver through the emulated instrument panel, the visual stimuli LED array, and sound stimuli.

The Roadmap Program is executed by the UISS while data are being acquired, allowing evaluation of the driving session status as well as interaction with the supervisor.

Furthermore, the Roadmap Program has the following additional features.

• It runs every time a data set is received from any of the acquisition systems.

• It obtains the elapsed time of the session.
• It evaluates the last acquired value of a signal.
• It produces random variables to simulate random situations.
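These features suggest an event-driven runtime: a handler runs on every incoming data set and may query the elapsed session time or the last value of each signal. The following Python sketch mimics that structure; the class and method names are hypothetical and do not reflect the actual ESL interpreter.

```python
import time

class RoadmapRuntime:
    """Illustrative event-driven runtime in the style described for the
    Roadmap Program interpreter. Hypothetical API, not the real ESL."""

    def __init__(self):
        self._start = time.monotonic()
        self._last = {}        # last acquired value per signal name
        self._handlers = []    # run on every received data set

    def on_data(self, handler):
        """Register a handler; usable as a decorator."""
        self._handlers.append(handler)
        return handler

    def elapsed(self):
        """Elapsed session time in seconds."""
        return time.monotonic() - self._start

    def last_value(self, signal):
        """Last acquired value of a signal, or None if never seen."""
        return self._last.get(signal)

    def feed(self, signal, value):
        """Called whenever a data set arrives from an acquisition system."""
        self._last[signal] = value
        for handler in self._handlers:
            handler(self, signal, value)
```

A preprogrammed action, e.g., triggering a sound stimulus when the speed exceeds a threshold, then becomes a handler registered with on_data.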

B. Experiment Development

Fig. 4. Supervisor control post.

The supervisor is in charge of starting and finishing the driving session through the Argos user interface. During the experiment development, he/she can interact with the Argos system by using the interface devices in the following ways:

• modifying the experiment initially scheduled, e.g., altering the order or the parameters of its stages, or cancelling some stages.

• marking special interest points of the driving session, such as driver reactions, events, etc.

• activating audio, luminous, or graphical stimuli to interact with the driver, as explained in the next section.

• monitoring specific signals and images, as explained below.

Although all signals described in previous sections are stored, the experiments are usually aimed at specific signals that the supervisor decides to monitor. For example, if the experiment deals with vehicle speed, the supervisor can monitor the instantaneous value of this variable to make decisions during the driving session. The supervisor can visualize several signals in the three-display system and select some images to be monitored in real time. Depending on the signal, different display formats can be selected: text, binary, or graphic representation.

Fig. 4 shows how the signal and image visualization is carried out. The left display shows three images, i.e., scene, driver, and rear view, as well as two windows where certain binary and textual signals are visualized. The display on the right shows two signals visualized in graphic format.

C. Interacting With the Driver

The experiment supervisor can interact with the driver during the driving session through the Roadmap Program. It is in charge of activating the sound and luminous stimuli and triggering the visualization of several information items on the emulated instrument panel.

The projection of different shapes on the windshield can be used to instruct the driver to perform different actions. It can also be used to produce distractions for the driver. An example of the possible shapes is the name of the Argos project shown in Fig. 5.

Fig. 5. Example of stimuli board information.

Fig. 6. Emulated instrument panel.

The supervisor can also make use of the emulated instrument panel to interact with the driver. Fig. 6 shows a simplified instrument panel where signal values such as speed can be distorted. The driver can then be told to identify incorrect speed values.

Furthermore, the supervisor can interact with the driver through the Roadmap Program by sending predefined operator codes through the keyboard to trigger some of the aforementioned events.

All of these events may require responses from the driver, usually by pressing some of the push buttons mounted on the steering wheel. The events and the driver responses are registered in the session files with their timestamps.

D. Experiment Analysis

When the experiment finishes, the system provides additional functionalities, allowing experts in driving behavior study to use the acquired data and images to extract results from the experiment. One of the most interesting functionalities allows the user to replay the driving session and to identify or select interesting session segments. For example, the analysis may be centered on a segment of the session (from the 32nd minute to the 40th), when the car is close to the left lateral road band, or when a signal reaches a certain value. In this way, experts can focus the study on session segments that may contain relevant information.

Even the simplest experiment analysis may require extracting data to introduce into a statistical tool. Argos allows the user to select several signals as well as a session interval to be dumped in a tab-delimited format. This file may then be fed into another tool to perform more complex analysis.
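The export step can be pictured as a small filter over timestamped records: keep only the chosen signals inside the chosen interval and write them tab-delimited. The function below is a sketch under an assumed in-memory record layout; the actual Argos session file format is not described here.

```python
import csv

def dump_interval(records, signals, t_start, t_end, path):
    """Write the selected signals sampled within [t_start, t_end] to a
    tab-delimited file, one row per record. `records` is a list of
    (timestamp, {signal_name: value}) pairs (an assumed layout, not
    the real Argos session format)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["timestamp"] + signals)
        for ts, values in records:
            if t_start <= ts <= t_end:
                writer.writerow([ts] + [values.get(s, "") for s in signals])
```

The resulting file can be loaded directly by most statistical packages that accept tab-separated input.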

V. EXAMPLES OF REAL APPLICATIONS

The flexibility of the Argos system ranges from extraordinarily complex experiments, where all the subsystems are involved, to simple experiments in which only a few subsystems are used. Complex experiments require a time-costly preparation task, mainly specifying the Roadmap Program, while the simplest ones may not need it. The following subsections describe two real experiments that use a subset of the overall Argos capabilities.

A. Estimating Arrival Time

This experiment has been carried out by the Spanish Traffic Agency to study the speed and time-to-collision perception of young drivers. In addition, the equivalence of arrival-time estimates performed in a vehicle and using video images was also evaluated in order to validate image tests as a substitute for vehicle tests. The experiments were carried out by seating participants from a chosen group of drivers of different gender and age in the front passenger's seat. The sample was composed of 20 male and 20 female drivers aged between 20 and 30.

Participants were deprived of vision for a variable distance to a target situated at the end of a straight lane. They were required to estimate the instant when the car would pass between the posts marking the target. Different constant speeds and different predefined distances were used. Participants were also required to make the same estimates in the laboratory using video images. The video images were the ones taken during the real driving session using the high-resolution scene camera. As the camera recorded the same scene as observed by participants (scenario, course disposition, speeds, distances, and vision and non-vision times were the same as in the real tests), the results are comparable.

In order to properly perform those experiments, a lane was marked by using six beacons and one target at the end. Beacons were situated at distances of 225, 150, 125, 100, 75, and 0 m to the target. The Roadmap Program was in charge of controlling the instants when the passenger was blindfolded, depending on the beacon signals and the predefined distances. Participants started the experiment deprived of vision; they recovered vision at 225 m, and at predefined distances (150, 125, 100, or 75 m), they were deprived of vision and asked to push a button when they judged that the vehicle reached the target position.
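The blindfolding protocol amounts to a simple function of the last beacon passed and the chosen deprivation distance. The sketch below is a simplified model of that logic; the function name and the string states are ours, not the Roadmap Program's ESL code.

```python
def glasses_state(beacon_distance_m, deprive_at_m):
    """State of the tachistoscopic glasses at a given beacon distance.
    Protocol from the text: vision deprived before the course, restored
    at the 225-m beacon, deprived again from the chosen beacon
    (150, 125, 100, or 75 m) until the target is reached."""
    if beacon_distance_m > 225:
        return "opaque"   # start of the run: no vision yet
    if beacon_distance_m > deprive_at_m:
        return "clear"    # between the 225-m beacon and the chosen one
    return "opaque"       # deprived again until the button is pushed
```

Evaluating this function at each beacon reproduces the vision/non-vision schedule of a trial with, e.g., a 100-m deprivation distance.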

The Roadmap Program made it easy to automate these experiments. Other Argos equipment was also needed:

• Tachistoscopic glasses were used to blindfold the passenger. They were activated or deactivated by the CAS when the Roadmap Program ordered it, in response to the detection of beacons.

• Beacon signals were generated by image processing algorithms running on the LPS.

• The UISS executed the Roadmap Program but was also in charge of monitoring several parameters, such as the speed. In this way, if there were significant variations, the experiment was invalidated by the Roadmap Program and had to be carried out again.


• The ISMS was in charge of processing and storing data from the high-resolution scene camera in order to be reproduced during the laboratory experiment.

• The SMS was in charge of monitoring the button that participants had to push when they thought that the target had been reached.

After all data had been gathered, the supervisor had to select the relevant driving session intervals, i.e., those intervals which started a few seconds before the 225-m beacon and finished when the button was pushed or the target was reached. The relevant signals of these experiments were dumped to a tab-delimited file to be analyzed with external tools.

This ambitious study could not have been carried out without the support of a tool like Argos. It should be noted that one of the main aims was to evaluate the relative contribution of speed and distance to time estimates in image systems. Therefore, a tool which introduces as little distortion as possible between vehicle and laboratory tests, while providing accurate measurement of speed, distance, and events, is needed.

B. Longitudinal Driving Behavior

The Argos vehicle has been used by the Foundation for the Research and Development in Transport and Energy (CIDAUT) to carry out a set of field tests in a real driving context as part of its research activities within a European project co-funded by the European Commission under the Seventh Framework Programme of the European Union. The project, named Integrated Human Modelling and Simulation to support Human Error Risk Analysis of Partially Autonomous Driver Assistance Systems (ISi-PADAS), relies on those field tests to gather information about the behavior of drivers of diverse age and gender, under different traffic conditions, with special emphasis on longitudinal driving behavior, that is, the selection and maintenance of speeds and distances to the preceding vehicle.

Specifically, the aim of these field tests was to gain an insight into normal driving in real traffic situations, analyzing the influence of various factors related to distraction, age, and gender issues. Within this context, the Argos vehicle collected a set of relevant driving variables, namely, speed, headway distance, steering wheel angle, use of pedals, use of indicators, etc., in order to analyze driver behavior in accordance with different traffic scenarios.

The collected variables were combined with experts' observations made during the trials, which were aimed at studying driving errors and events occurring in those scenarios. Scenarios consisted of a categorization of driving maneuvers in the following situations: free driving (no vehicle ahead), car following (lead vehicle in front), lane change (left/right), overtaking (as a combination of several maneuvers), approaching a slower vehicle (when the speed difference was high), and approaching a traffic light (in an urban context). In some particular scenarios, a distracting secondary task was given to the driver (a cognitive task based on numerical counting). The start and end of the secondary tasks were registered by using the experimenter push buttons.

These field tests were conducted by CIDAUT in the surroundings of the city of Valladolid, Spain, making use of a predefined route mainly consisting of highway roads but including some extra-urban sections as well. A sample of 20 male and female drivers aged between 24 and 75 participated in the study in 80-min driving sessions.

Once the practical experiments had been finished, the data had to be analyzed by experts, who selected the relevant driving session fragments (intervals) in order to compare periods with and without a secondary task.

Experts only have to select the signal that delimits these fragments (pressing of the supervisor push buttons), the signal value (when any push button is active), and the signals to be studied in the experiment (speed, headway distance, steering wheel angle, etc.). The values of these signals can be dumped to a tab-delimited file for further studies using other tools.

As a result of these field tests, a set of enriched data about real driving behavior was generated, which has to be combined when conducting the analysis, so that the influence of several factors in driving can be studied. The ISi-PADAS project is currently under way, and the data are being analyzed.

VI. CONCLUSION

IVDR systems provide a very useful platform to analyze driver behavior in real driving situations. Recently, a number of different approaches to exploit IVDR capabilities, which have been sponsored by several national traffic agencies, have been pursued. In this paper, the Argos system, which is a new IVDR funded by the Spanish Traffic Agency, has been described.

Argos is a complex system built around a multicomputer platform. It is based on state-of-the-art technologies: last-generation sensors, advanced human-machine interfaces, image compression, video-on-disk recording, and image processing as well as pattern-recognition algorithms. These features, together with the fact that it uses standard hardware in the main subsystems, as well as open software for operating systems and major off-the-shelf components, ensure a long life expectancy for the system.

The development of this complex system involved quite a large and interdisciplinary group, with the support of a company experienced in car prototyping. It is worth mentioning that the requirements were jointly established with psychologists from the Spanish Traffic Agency. As a result, the system has been continuously validated.

The system has been developed over 5 years. The first two years were spent on defining the baseline requirements and testing the state-of-the-art technologies that would be used in the system. The last year of the project was devoted almost entirely to on-road and laboratory testing, as well as fine-tuning the integration of the different subsystems to achieve the required performance. The final system was delivered in early 2008 and contains around 100 000 lines of code, not counting the multiple off-the-shelf components used.

Argos requires users to undergo some training on experiment configuration and system use, but it is then a powerful tool to automate experiments.

The final road tests have shown that the system fulfills the established requirements and the expectations of the Spanish Traffic Agency.


REFERENCES

[1] I. Catling and B. McQueen, "Road transport informatics in Europe—A summary of current developments," in Proc. 5th Jerusalem Conf. Inf. Technol. "Next Decade in Information Technology" (Cat. No. 90TH0326-9), Oct. 1990, pp. 702–715.

[2] M. Bell, "Policy issues for the future intelligent road transport infrastructure," in Proc. Inst. Elect. Eng.—Intell. Transp. Syst., Jun. 2006, vol. 153, no. 2, pp. 147–155.

[3] A. Khan, A. Schaefer, and M. Zetlmeisl, "Efficient memory-protected integration of add-on software subsystems in small embedded automotive applications," IEEE Trans. Ind. Informat., vol. 3, no. 1, pp. 44–50, Feb. 2007.

[4] F. Salewski and S. Kowalewski, "Hardware/software design considerations for automotive embedded systems," IEEE Trans. Ind. Informat., vol. 4, no. 3, pp. 156–163, Aug. 2008.

[5] M. Recarte and L. Nunes, "Mental workload while driving: Effects on visual search, discrimination and decision making," J. Exp. Psychol.: Appl., vol. 9, no. 2, pp. 119–137, Jun. 2003.

[6] S. Sekizawa, S. Inagaki, T. Suzuki, S. Hayakawa, N. Tsuchida, T. Tsuda, and H. Fujinami, "Modeling and recognition of driving behavior based on stochastic switched ARX model," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 4, pp. 593–606, Dec. 2007.

[7] Y. Liang, M. L. Reyes, and J. D. Lee, "Real-time detection of driver cognitive distraction using support vector machines," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 2, pp. 340–350, Jun. 2007.

[8] A. Giusti, C. Zocchi, and A. Rovetta, "A noninvasive system for evaluating driver vigilance level examining both physiological and mechanical data," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 1, pp. 127–134, Mar. 2009.

[9] S. L. Comte, "New systems: New behaviour?" Transp. Res. Part F: Traffic Psychol. Behaviour, vol. 3, no. 2, pp. 95–111, Jun. 2000.

[10] C.-Y. Chan, "Characterization of driving behaviors based on field observation of intersection left-turn across-path scenarios," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 3, pp. 322–331, Sep. 2006.

[11] H. Rakha, I. El-Shawarby, and J. R. Setti, "Characterizing driver behavior on signalized intersection approaches at the onset of a yellow-phase trigger," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 4, pp. 630–640, Dec. 2007.

[12] X. Ma and I. Andreasson, "Behavior measurement, analysis, and regime classification in car following," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 1, pp. 144–156, Mar. 2007.

[13] A. Doshi and M. Trivedi, "On the roles of eye gaze and head dynamics in predicting driver's intent to change lanes," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 3, pp. 453–462, Sep. 2009.

[14] A. Chidester, J. Hinch, and T. Roston, "Real world experiences with event data recorders," presented at the 17th Int. Tech. Conf. Enhanced Safety Vehicles, Amsterdam, The Netherlands, 2001, Paper 247.

[15] J. Correia, K. Iliadis, E. McCarron, and M. Smolej, "Utilizing data from automotive event data recorders," in Proc. Can. Multidisciplinary Road Safety Conf. XII, London, ON, Canada, Jun. 2001.

[16] T. Toledo, O. Musicant, and T. Lotan, "In-vehicle data recorders for monitoring and feedback on drivers' behavior," Transp. Res. Part C: Emerging Technol., vol. 16, no. 3, pp. 320–331, Jun. 2008.

[17] N. du Lac, C. Delaunay, and G. Michel, "RTMaps—Real-time multisensor prototyping software," in Proc. 3rd Nat. Conf. Control Archit. Robots, Bourges, France, May 2008.

[18] T. Dingus, S. Klauer, V. Neale, A. Petersen, S. Lee, J. Sudweeks, M. A. Perez, J. Hankey, D. Ramsey, S. Gupta, C. Bucher, Z. R. Doerzaph, J. Jermeland, and R. Knipling, "The 100-car naturalistic driving study phase II—Results of the 100-car field experiment," Dept. Transp., Washington, DC, Tech. Rep. DOT-HS-810-593, 2006.

[19] L. Pastor, P. de Miguel, A. Pérez, F. Rosales, S. Rodríguez, A. Cabañas, and A. Rodríguez, "Sensor techniques using image processing for driver behaviour study," in Automotive Sensory Systems, C. O. Nwagboso, Ed. London, U.K.: Chapman & Hall, 1993, ch. 9, pp. 185–209.

[20] L. Nunes and M. Recarte, "Argos program: Development of technological systems and research programs for driver behavior analysis under real traffic conditions," in Proc. ISHFRT 2, Apr. 1997, pp. 630–640.

[21] M. Recarte and L. Nunes, "Mental load and loss of control over speed in real driving. Towards a theory of attentional speed control," Transp. Res. Part F: Traffic Psychol. Behaviour, vol. 5, no. 2, pp. 111–122, Jun. 2002.

[22] L. Nunes and M. Recarte, "Cognitive demands of hands-free-phone conversation while driving," Transp. Res. Part F: Traffic Psychol. Behaviour, vol. 5, no. 2, pp. 133–144, Jun. 2002.

Antonio Pérez received the M.S. degree in telecommunication engineering and the Ph.D. degree in computer science from the Technical University of Madrid (UPM), Madrid, Spain, in 1979 and 1982, respectively.

He is currently a Full Professor with the Department of Computer Systems Architecture and Technology at UPM. His research interests include computer architecture, fault-tolerant computers, and the design of microprocessor systems.

M. Isabel García received the M.S. degree in computer engineering and the Ph.D. degree in computer science from the Technical University of Madrid (UPM), Madrid, Spain, in 1982 and 1985, respectively.

She is currently an Associate Professor with the Department of Computer Systems Architecture and Technology at UPM. Her research interests include computer architecture and instruction-level parallelism architectures.

Manuel Nieto received the Bachelor of Science degree in computer engineering from the Technical University of Madrid (UPM), Madrid, Spain, in 1986.

He is an Assistant Professor with the Department of Computer Systems Architecture and Technology at UPM. His research interests include computer architecture and microcontroller-based design.

José L. Pedraza received the M.S. degree in electronics and the Ph.D. degree from the University of Salamanca, Salamanca, Spain, in 1981 and 1987, respectively.

He is an Associate Professor with the Department of Computer System Architecture and Technology at the Technical University of Madrid, Madrid, Spain. His research interests include real-time image processing and embedded systems.

Santiago Rodríguez received the M.S. degree in computer engineering and the Ph.D. degree in computer science from the Technical University of Madrid (UPM), Madrid, Spain, in 1990 and 1996, respectively.

He is currently an Associate Professor with the Department of Computer Systems Architecture and Technology at UPM. His research interests include real-time systems and fault-tolerant computers.

Juan Zamorano received the M.S. and Ph.D. degrees in computer science from the Technical University of Madrid (UPM), Madrid, Spain, in 1987 and 1995, respectively.

He is an Associate Professor with the Department of Computer System Architecture and Technology at the School of Computer Science, UPM. He teaches Computer Architecture and Real-Time Systems. His research interests are in the design and implementation of real-time systems, including design methods, software and hardware architectures, and real-time operating systems and languages.