
Pedestrian Notification Methods in Autonomous Vehicles for Multi-Class Mobility-on-Demand Service

Evelyn Florentine 1, Mark Adam Ang 2, Scott Drew Pendleton 3, Hans Andersen 3, Marcelo H. Ang Jr. 3

1 Massachusetts Institute of Technology, Cambridge, MA, USA
[email protected]

2 nuTonomy, Singapore
[email protected]

3 National University of Singapore, Singapore
{scott.pendleton01, hans.andersen}@u.nus.edu, [email protected]

ABSTRACT
In this paper, we describe methods of conveying perception information and motion intention from self-driving vehicles to the surrounding environment. One method equips autonomous vehicles with Light-Emitting Diode (LED) strips to convey perception information; typical pedestrian-driver acknowledgement is replaced by visual feedback via lights that change color to signal the presence of obstacles in the surrounding environment. Another method broadcasts audio cues of the vehicle's motion intention to the environment. The performance of the autonomous vehicles as social robots is improved by building trust and engagement with interacting pedestrians. The software and hardware systems are detailed, and a video demonstrates the working system in real application. Further extension of the work for multi-class mobility in human environments is discussed.

ACM Classification Keywords
H.1.2 User/Machine Systems: Human factors, Human information processing, Software psychology; H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous; I.2.9 Robotics: Autonomous vehicles, Operator interfaces, Information Interfaces and Presentation

Author Keywords
Human-Robot Interface; Autonomous Vehicle; Pedestrian Interaction; Social Robotics; Light-Emitting Diode (LED); Robot Operating System (ROS); Arduino

INTRODUCTION
Mobility-on-Demand (MoD) services, such as car sharing or on-demand taxi services, have seen huge growth in the last few years with services such as Uber and Lyft [12]. Autonomous vehicles have long been awaited as the next generation of mobility, especially in highly urbanized areas such as Singapore. A truly "on-demand" mobility service can be realized by utilizing a fleet of autonomous vehicles throughout an urban environment.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
HAI'16, October 4–7, 2016, Biopolis, Singapore.
Copyright © 2016 ACM ISBN 978-1-4503-4508-8/16/10 ...$15.00.
http://dx.doi.org/10.1145/2974804.2974833

Autonomous vehicles offer potential for additional safety, increased productivity, greater accessibility, better road efficiency, and positive impact to the environment. Research in autonomous vehicles has seen dramatic advances in recent years, due to increases in available computing power and reduced cost of sensing and computing technologies. Competitions such as the 2007 DARPA Urban Challenge [15] have accelerated the field of autonomous vehicle design and development.

In our previous work [11], we discussed the utility of having a multi-class autonomous vehicle fleet for a MoD system, through a simple usage case involving a road car (Mitsubishi iMiEV) and golf cars on the National University of Singapore campus. Both classes of vehicles are designed to utilize the same software architecture (with only low-level controls differing) and general sensor configuration, which are chosen for ease of fleet expansion. The functionality of the service was demonstrated in an uncontrolled environment open to real pedestrian and vehicular traffic. We showed that while the car can operate at higher speeds on the road, the golf car has the flexibility of operating in pedestrian areas where cars are not allowed, thereby expanding the area coverage of the MoD service.

However, although autonomous vehicle (AV) technologies have been a popular topic of research for many years and several prototypes have already demonstrated impressive capabilities, pedestrian behavioural interactions with autonomous vehicles, especially with a multi-class fleet of AVs, have not been thoroughly studied. This work serves as a preliminary investigation into autonomous vehicles as social robots, where audio and visual cues are provided to notify nearby human agents of the vehicle's intentions and to acknowledge the vehicle's perception of them.

Pedestrian-driver acknowledgement plays an integral role in road safety. Pedestrians and drivers communicate their intentions with each other through eye contact, hand gestures, and other nonverbal methods. This has strong implications for the assurance of safety for pedestrians and other vulnerable road users such as bicyclists. In 2009 the World Health Organization estimated that there are 1.2 million road traffic fatalities per year globally, 35 percent of which were pedestrians and 60 percent of which were vulnerable road users overall (pedestrians, bicyclists, and motorcyclists) [10].

[Figure 1 labels: LIDAR (feature detection), LIDAR (obstacle detection, front and each side), LED message board, camera, overhead monitor, touchscreen, steering motor, braking motor, speaker, DSRC antenna, interface circuit, network switch, power distribution, voltage conditioner, overcurrent protection, wheel encoders (on drive shafts), IMU, control panel, computers.]

Figure 1. Hardware overview, highlighting primary retrofit additions to a Yamaha YDREX3 golf buggy made in order to enable autonomous capabilities. [Source: [12]]

When pedestrians or other vulnerable road users must share the road with larger motor vehicles, they have good reason to seek assurance that drivers of the larger vehicles are aware of them and will avoid them. This is also the case when smaller vehicles such as golf carts operate in pedestrian environments such as parks, airports, and university campuses. However, with the adoption of autonomous driving technologies, the relationship between pedestrian and driver has changed significantly. The pedestrian has no clear way of confirming that the vehicle sees them moving around it, as there is no driver; and the vehicle has no sure way of notifying the pedestrian of its intention to move on, or to slow down and let the pedestrian cross its path. As a result, pedestrians may not feel safe with the idea of AVs, and passengers may not feel safe riding in one.

In this work we describe methods of notifying pedestrians, as well as the autonomous vehicle's passengers, of the vehicle's perception and motion intentions. A golf buggy is retrofitted with sensors and actuators in order to enable autonomous driving capabilities (Fig. 1). A speaker, an LED message board, and an LED strip are used to communicate with pedestrians. A touch screen is installed in the vehicle to access the booking system and visualizations.

This paper is organized as follows. In Section II, related work on autonomous vehicle acknowledgement of pedestrians is reviewed. The hardware and software components of the system are discussed in Section III. Future work and conclusions are presented in Section IV.

RELATED WORK
With the adoption of autonomous vehicles, the number of traffic accidents can be dramatically reduced. However, ethical questions have been raised by Bonnefon et al. [1] for situations in which an accident is inevitable and the vehicle has to choose the lesser of two evils. The problem of ensuring pedestrian safety in the presence of road traffic is well acknowledged, with many research efforts focusing on autonomous vehicle perception, such as pedestrian detection capabilities for AVs, as well as more advanced methods of reasoning about pedestrians' movements [13]. The perception capabilities provide the vehicle with situational awareness of its environment; however, pedestrians involved in the interaction may not be aware of the vehicle's perception capabilities and limitations.

As vehicles interact with pedestrians at road crossings, certain aspects of social robots apply. When operating smaller vehicles in pedestrian environments, the human-robot social interactions become even more frequent and varied. Two important metrics for human-robot interaction in these social contexts are trust and engagement [14].

It has been observed that humans consistently read and interpret nonverbal cues similarly for robots as for people, and that cooperative human-humanoid tasks were performed more efficiently when gaze was incorporated in addition to more explicit forms of communication such as nodding and pointing [2]. Furthermore, eye contact has been shown to reduce miscommunication between a robot and a human, where it was observed that humans benefited from knowing they were being watched [4]. Several humanoid robots have also been successful in achieving joint attention between the human and the robot through gaze [9].

Several other methods have been tested for representing the intention of a mobile robot, namely its intended direction and speed of motion, via lamps or blowouts [8], "eyeball" images or abstract signs on an omnidirectional display [7], and projected paths or symbols on the ground [6]. While these types of indications of intention could indeed be useful for a human interacting with a mobile robot, they give no direct insight into whether the robot is responding to the human's presence.

In this work, we display perception information, as well as mission state and intentions, from a self-driving vehicle to nearby pedestrians. The autonomous vehicle is treated as a social robot and serves to enhance the social behaviours of driving, namely by fostering greater trust and engagement from humans who interact with it via audio and visual feedback.

METHODS AND RESULTS

In-Vehicle User Interface
We have designed a web-based booking system, shown in Fig. 2, to accept mission requests. A mission ticket is created in the format [Pick-up Station, Drop-off Station], where Pick-up Station and Drop-off Station correspond to the passenger's pick-up location and destination, respectively. The mission ticket is then sent to the central server, which manages the database of all tickets in the mission pool and assigns missions to each vehicle in the fleet on a simple first-come, first-served basis.
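The ticket format and first-come, first-served assignment described above can be sketched as follows. This is a minimal illustration, not the authors' server code; `MissionPool` and its method names are hypothetical.

```python
from collections import deque

class MissionPool:
    """First-come, first-served pool of [Pick-up Station, Drop-off Station] tickets.

    Illustrative sketch of the central server's assignment policy;
    names and structure are assumptions, not the paper's implementation.
    """

    def __init__(self):
        self._tickets = deque()  # FIFO queue of pending mission tickets

    def submit(self, pick_up, drop_off):
        # A mission ticket is simply the station pair.
        self._tickets.append((pick_up, drop_off))

    def assign_next(self, vehicle_id):
        # Oldest unserved ticket goes to the next available vehicle.
        if not self._tickets:
            return None
        pick_up, drop_off = self._tickets.popleft()
        return {"vehicle": vehicle_id, "pick_up": pick_up, "drop_off": drop_off}

pool = MissionPool()
pool.submit("Enterprise", "ERC")   # station names as in Fig. 2
mission = pool.assign_next("golfcart-1")
```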

Figure 2. Online booking system for National University of Singapore's University Town, with 11 stations shown. The blue line represents the path the passenger intends to travel, from "Enterprise" station to "ERC" station.

The assigned vehicle's mission planner finds a route between the given pick-up and drop-off points. Predetermined paths are stored in a directed graph. The route searching module performs a Dijkstra search over a directed graph of reference path segments reflecting the road network connectivity [5].
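A minimal sketch of such a route search, assuming the road network is represented as an adjacency list mapping each station to its outgoing reference path segments with lengths in meters. The graph contents and the station names beyond those in Fig. 2 are illustrative.

```python
import heapq

def dijkstra_route(graph, start, goal):
    """Shortest route over a directed graph of reference path segments.

    graph: dict mapping a station to a list of (neighbor, segment_length).
    Returns (route, total_length), or (None, inf) if goal is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            # Walk predecessors backwards to reconstruct the route.
            route = [goal]
            while route[-1] != start:
                route.append(prev[route[-1]])
            return list(reversed(route)), d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return None, float("inf")

# Hypothetical fragment of the campus road network (lengths in meters).
road_network = {
    "Enterprise": [("Create", 120.0), ("Town Plaza", 300.0)],
    "Create": [("ERC", 150.0)],
    "Town Plaza": [("ERC", 80.0)],
}
route, length = dijkstra_route(road_network, "Enterprise", "ERC")
```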

After assigning the reference paths for execution, the mission planner monitors the mission status. In our system, the mission statuses are Mission Waiting, Approach Pick-Up, Arrive Pick-Up, Approach Destination, Arrive Destination, and Mission Infeasible.
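The six statuses form a small state machine. The enum below lists the states named in the text; the nominal transition order is an assumption for illustration, since the paper does not spell out the transitions.

```python
from enum import Enum, auto

class MissionStatus(Enum):
    """Mission planner states as listed in the text."""
    MISSION_WAITING = auto()
    APPROACH_PICK_UP = auto()
    ARRIVE_PICK_UP = auto()
    APPROACH_DESTINATION = auto()
    ARRIVE_DESTINATION = auto()
    MISSION_INFEASIBLE = auto()

# Assumed nominal progression of a successful mission (illustrative only).
NOMINAL_ORDER = [
    MissionStatus.MISSION_WAITING,
    MissionStatus.APPROACH_PICK_UP,
    MissionStatus.ARRIVE_PICK_UP,
    MissionStatus.APPROACH_DESTINATION,
    MissionStatus.ARRIVE_DESTINATION,
]

def next_status(current):
    """Advance to the next nominal status; terminal states map to themselves."""
    if current in (MissionStatus.ARRIVE_DESTINATION, MissionStatus.MISSION_INFEASIBLE):
        return current
    return NOMINAL_ORDER[NOMINAL_ORDER.index(current) + 1]
```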

The passenger is then notified through both on-screen and on-board audio cues about the state of the current mission planner, and can therefore take appropriate action accordingly, such as boarding or leaving the vehicle safely, or understanding whether or not the oncoming vehicle is reserved for them.

The dynamic virtual bumper (DVB), illustrated in Fig. 3, is utilized to generate an advisory speed for the vehicle's safe navigation in the presence of both static and dynamic obstacles [12]. The DVB is a tube-shaped zone with the vehicle's local path as its centerline, and with its width w_t and height h_t defined as quadratic functions of the vehicle's speed v_t:

Figure 3. An illustration of the dynamic virtual bumper. [Source [12]]

w_t = w_0 + α·v_t^2
h_t = h_0 + β·v_t^2

where w_0 and h_0 are the static distance buffers, and α and β are coefficients that determine the growth rate of the dynamic virtual bumper as the velocity increases. LIDARs are used to detect obstacles in the vicinity. When an obstacle O_i is detected within the DVB, the vehicle will generate an advisory speed from a new desired DVB whose boundary is marked by the position of the nearest obstacle. Since the desired DVB is smaller than the current DVB upon encountering a nearby obstacle, the newly calculated target velocity will be smaller than the current velocity, and thus the vehicle will be advised to slow down. The DVB accounts for the presence of both static and moving obstacles, where the considered obstacle set O is defined by the union of the static and moving obstacle sets, O = O_static ∪ O_moving. While O_static can be directly obtained from sensor measurements, O_moving has to be obtained from prediction of moving-object trajectories around the vehicle, and therefore the DVB may frequently adjust in size when dynamic obstacles are present.
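The advisory-speed logic can be sketched by inverting the height equation: given the longitudinal distance d to the nearest in-tube obstacle, the largest speed whose bumper still excludes it satisfies h_0 + β·v^2 = d, i.e. v = sqrt((d - h_0)/β). The parameter values below are placeholders, not the paper's tuning.

```python
import math

def dvb_size(v, w0=1.5, h0=2.0, alpha=0.05, beta=0.10):
    """DVB width and height as quadratic functions of speed v (placeholder coefficients)."""
    return w0 + alpha * v**2, h0 + beta * v**2

def advisory_speed(obstacle_ahead, v, h0=2.0, beta=0.10):
    """Advisory speed whose DVB height just reaches the nearest obstacle.

    obstacle_ahead: longitudinal distance to the nearest obstacle.
    Solving h = h0 + beta*v^2 = obstacle_ahead for v gives the largest
    speed whose bumper excludes the obstacle.
    """
    _, h = dvb_size(v, h0=h0, beta=beta)
    if obstacle_ahead >= h:
        return v  # obstacle outside the current bumper: keep current speed
    if obstacle_ahead <= h0:
        return 0.0  # inside the static buffer: stop
    return math.sqrt((obstacle_ahead - h0) / beta)
```

For example, at v = 5 m/s the bumper height is 2.0 + 0.1·25 = 4.5 m, so an obstacle 3 m ahead triggers a slow-down toward sqrt((3 − 2)/0.1) ≈ 3.16 m/s.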

Figure 4. In-vehicle visualization of the moving obstacle detection anddynamic virtual bumper. The red box is the dynamic safety zone, andthe red arrows are the velocity vectors of the detected moving obstacles.

The moving obstacle detection output, as well as the dynamic virtual bumper, is also displayed on the screen inside the vehicle (Fig. 4), so that the passengers can understand the motion intention of the AV, as well as take preventive actions if there are failures in the vehicle's perception system.

Pedestrian Notification System

Figure 5. A retrofitted golfcart for autonomous functions, with the pedestrian notification LED strip turned on. The blue color indicates no obstacle within close range, whereas red shows the presence of a nearby obstacle. [Source [3]]


The motion intentions of the autonomous vehicle, such as the route destination and mission state, are conveyed to its surroundings through an LED message board. An audio cue in the form of music is also broadcast through the speaker while the AV is driving autonomously, to capture the attention of surrounding pedestrians, who otherwise may not notice that the vehicle has no human driver.

A strip of LEDs has been installed along the outside of the vehicle to signal to pedestrians whether their presence has been perceived by the autonomous vehicle. Thus pedestrians can receive acknowledgment from the AV via a change in LED color. A single planar LIDAR is used as the observation source for obstacles in the vehicle's vicinity. The LIDAR data can be associated with a particular LED using either a polar or a Cartesian mapping.

Figure 6. Visual comparison of Cartesian vs. polar obstacle-to-LED position mapping. Note that both pedestrians shown in this scenario would fall within the same index range under polar mapping (blue area), but would fall within two different ranges under Cartesian mapping (near pedestrian in the red front range, far pedestrian in the green rear range). [Source [3]]

With polar mapping, the origin of the coordinate system is the center of the associated LIDAR. Each LED corresponds to the LIDAR readings from a certain range of angle ray traces, e.g. the blue triangular section in Fig. 6, and the display value is determined from the minimum radial distance measured by the LIDAR within that particular angle range.

With Cartesian mapping, the coordinate axes would be aligned to the primary axes of the vehicle. For example, the +x direction can be defined as the forward direction of the vehicle, and the +y direction as perpendicular to the right of the vehicle, with the origin at the vehicle's center of mass. Each of the two pedestrians shown in Fig. 6 would fall within two different ranges of x values (the near pedestrian in the red region and the far pedestrian in the green region) and would then be associated with two different sets of LEDs located along the right side of the vehicle, where the distance between the pedestrians and the right edge of the vehicle would be monitored and displayed on the LEDs.

In this scenario, depending on which mapping scheme is chosen, both pedestrian detection locations are either associated with the same set of LEDs (in the polar case) or with different sets of LEDs (in the Cartesian case). The polar representation of the LIDAR data is used in this work, since the associated ray traces nearly overlap with the LED locations for close-proximity pedestrians due to the placement of the LED strip on the vehicle, and this is the more straightforward approach to implement.

Figure 7. LIDAR ray traces shown in blue. Note that the closer-proximity pedestrian is detected by 22 incident LIDAR rays in this case, while the farther pedestrian is detected by only 10 rays. Thus the corresponding number of LEDs associated with the detection would be approximately half for the far vs. the near case. [Source [3]]

Let R = {r_1, r_2, ..., r_L} be the ordered array of ray-trace distance values, where L is the total number of ray traces in a single scan, such that with LIDAR angular scan resolution φ, given a ray r_i scanning at an angle θ, the ray r_(i+1) corresponds to the ray scanning at the angle θ + φ. Then a grouping can be assigned to each ray trace according to an array K = {k_1, k_2, ..., k_L}, where K is defined as

K = ceiling{ (1, 2, ..., L) · N / L }    (1)

where N is the total number of LEDs used. This results in N different groupings, such that k_i = j indicates that ray trace r_i correlates to the j-th LED. Then the minimum value can be assigned for each grouping and stored in an array D = {d_1, d_2, ..., d_N}, where each value d_j is defined as

d_j = min{ r_i : i ∈ {1, 2, ..., L}, k_i = j },  for j ∈ {1, 2, ..., N}    (2)

Once the minimum values are found, there is then some flexibility in deciding what color to display on the LED for each distance value. A continuous or discrete spectrum of colors could be used. For a continuous spectrum, far-away obstacles could correspond to cool colors (blue, purple, etc.) and near obstacles to warm colors (red, yellow, etc.). For a discrete spectrum, one or several distance thresholds would be set such that within a specified range of distances, the LEDs would be set to a certain user-defined fixed color. Alternatively, the brightness of the LEDs could be varied such that the lights are dim or off for far-away obstacle detections but bright for nearby obstacles. This varied-brightness or on/off approach may reduce power consumption, though when no obstacles are present there would be no obvious indication that the system is operational.

A single discrete color-changing threshold is chosen here, where the LEDs are set to either blue or red according to a Boolean decision array γ = {γ_1, γ_2, ..., γ_N}, defined as

γ_j = (d_j < δ)    (3)

where δ is a user-defined threshold distance, compared against the minimum distance value d_j. γ_j = 0 corresponds to a blue color setting for the j-th LED, and conversely γ_j = 1 indicates a red color setting for the j-th LED. Thus the blue lights turn red to warn a pedestrian when they are closer to the golfcart (or, more specifically, to the front LIDAR) than the red-light range. Note that the neighboring LED color assignments continue to change as an obstacle approaches the golfcart even past the red-light range threshold: as an object moves closer, more LIDAR beams detect that same object, hence more LEDs light up red (Fig. 7).

The red-light range is set to 2.5 meters in the final chosen configuration. This value was set by manual tuning in a slow-moving pedestrian environment. While it is difficult to specify an optimal value for the red-light range, it is correlated with pedestrian expectations of how close they need to be before the vehicle acknowledges them. If the threshold is set much higher, the system frequently detects objects other than the pedestrian, which results in unintuitively large sections of red in the LED strip; moreover, pedestrians may not typically expect to be detected at great distances from the vehicle. When the threshold was set lower, pedestrians had to stand too close to the car before the LEDs would turn red, and hence could be looking for acknowledgement from the vehicle earlier without receiving it.

A video showing the working system can be accessed at https://youtu.be/UmruwRx7dW4 [3]. It was observed that better pedestrian engagement was achieved with the combination of audio cues and the perception acknowledgement system. The striking appearance of the LEDs and the distracting music drew pedestrians' attention to the fact that the passing vehicle was operating autonomously, and therefore prompted them to behave more consciously rather than paying attention to their electronic devices.

CONCLUSION AND FUTURE WORK
We have described methods of informing both passengers and pedestrians about the autonomous vehicle's perception status and intentions. In terms of pedestrian-robot social interaction, the audio cues and LED strips have proven useful as a warning and acknowledgement mechanism for pedestrians who are very close to the car, garnering trust and engagement. However, this can be improved upon much further.

Using an LED strip to broadcast obstacle detections from a LIDAR was found to be an effective method of acknowledging pedestrian presence. The system is simple to create, uses little memory on the golfcart, and can be easily expanded. The LED strip can be extended to fit around the car and sync to more than one LIDAR, so that a pedestrian can be "followed" by a section of red lights no matter what part of the car they walk around. To do this, the obstacle detection mapping could use Cartesian coordinates rather than polar. This project is both low-cost and unobtrusive, as it uses components that are already integral to the autonomous golfcart, such as the 2D LIDAR.

Further tests of the system under different conditions are also important, as some notification methods may be more or less advantageous under unique circumstances that have not been encountered in our previous tests.

In the future, we would like to inform both passengers and pedestrians of the intended speed and exact direction in which the vehicle is about to travel. This can be achieved by implementing more specific audio interactions, rather than just catchy noises to attract pedestrians' attention, such that the visually impaired could infer more complete information about the intention of the vehicle. Communicating the concepts of the safety zone and dynamic virtual bumper to external agents may also be useful.

Interactions between human drivers and autonomous vehicles as an aspect of social robotics should also be considered more carefully, with additional audio and visual cue options to be implemented and evaluated, such as external-facing screens. The size of the visual cues has to be chosen carefully so that they do not distract road users too much.

It would be advantageous to collect surveys and questionnaires from different categories of road users, such as drivers, pedestrians, and motorcyclists, in order to test whether these audio and visual cues would improve their trust of AVs on the road, and at which point these methods become uncanny and distracting rather than helpful. The end goal of autonomous vehicle development is for AVs to integrate seamlessly into society, such that human beings won't even notice that the vehicles drive themselves.

Further interactions in the framework of multi-class MoD can also be considered. Different classes of AVs interact at the boundaries between two differing environment types or deployment areas. Questions such as "how could one transfer between a car and a bus in a highly autonomous transportation system?" have not been widely discussed in the literature.

Acknowledgment
This research was supported by the Future Urban Mobility project of the Singapore-MIT Alliance for Research and Technology (SMART) Center, with funding from Singapore's National Research Foundation (NRF).

REFERENCES
1. Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. 2015. Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? http://arxiv.org/abs/1510.03346
2. Cynthia Breazeal, Cory D. Kidd, Andrea Lockerd Thomaz, Guy Hoffman, and Matt Berlin. 2005. Effects of Nonverbal Communication on Efficiency and Robustness of Human-Robot Teamwork.
3. Evelyn Florentine, Hans Andersen, Mark Adam Ang, Scott Drew Pendleton, Guo Ming James Fu, and Marcelo H. Ang Jr. 2015. Self-Driving Vehicle Acknowledgement of Pedestrian Presence Conveyed via Light-emitting Diodes. In IEEE International Conference on Humanoid, Nanotechnology, Information Technology Communication and Control, Environment and Management (HNICEM).
4. Hiroshi Ishiguro, Tetsuo Ono, Michita Imai, and Takayuki Kanda. 2001. Development of an Interactive Humanoid Robot "Robovie": An interdisciplinary research approach between cognitive science and robotics. In Proc. Int. Symp. Robotics Research.
5. Wei Liu, Zhiyong Weng, Zhuangjie Chong, Xiaotong Shen, Scott Pendleton, Baoxing Qin, Guo Ming James Fu, and Marcelo H. Ang. 2015. Autonomous vehicle planning system design under perception limitation in pedestrian environment. In 2015 IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), 159–166. DOI: http://dx.doi.org/10.1109/ICCIS.2015.7274566
6. Takafumi Matsumaru. 2006. Mobile robot with preliminary-announcement and display function of forthcoming motion using projection equipment. In IEEE International Workshop on Robot and Human Interactive Communication, 443–450. DOI: http://dx.doi.org/10.1109/ROMAN.2006.314368
7. T. Matsumaru, K. Akiyama, K. Iwase, T. Kusada, H. Gomi, and T. Ito. 2003a. Robot-to-human communication of mobile robot's following motion using eyeball expression on omnidirectional display. In IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Vol. 2, 790–796. DOI: http://dx.doi.org/10.1109/AIM.2003.1225443
8. Takafumi Matsumaru, Hisashi Endo, and Tomotaka Ito. 2003b. Examination by Software Simulation on Preliminary-Announcement and Display of Mobile Robot's Following Action by Lamp or Blowouts. In IEEE International Conference on Robotics and Automation (ICRA), Vol. 2, 771–777. DOI: http://dx.doi.org/10.1109/AIM.2003.1225440
9. Dai Miyauchi, Akio Nakamura, and Yoshinori Kuno. 2005. Bidirectional eye contact for human-robot communication. IEICE Transactions on Information and Systems E88-D, 11 (2005), 2509–2516. DOI: http://dx.doi.org/10.1093/ietisy/e88-d.11.2509
10. H. Naci, D. Chisholm, and T. D. Baker. 2009. Distribution of road traffic deaths by road user group: a global comparison. Injury Prevention 15, 1 (2009), 55–59. DOI: http://dx.doi.org/10.1136/ip.2008.018721
11. Scott Pendleton, Zhuang Jie Chong, Baoxing Qin, Wei Liu, Tawit Uthaicharoenpong, Xiaotong Shen, Guo Ming James Fu, Marcello Scarnecchia, Seong-Woo Kim, Marcelo H. Ang, and Emilio Frazzoli. 2014. Multi-Class Driverless Vehicle Cooperation for Mobility-on-Demand. In Intelligent Transportation Systems World Congress (ITSWC).
12. Scott Pendleton, Tawit Uthaicharoenpong, Zhuang Jie Chong, Guo Ming James Fu, Baoxing Qin, Wei Liu, Xiaotong Shen, Zhiyong Weng, Cody Kamin, Mark Adam Ang, Lucas Tetsuya Kuwae, Katarzyna Anna Marczuk, Hans Andersen, Mengdan Feng, Gregory Butron, Zhuang Zhi Chong, Marcelo H. Ang, Emilio Frazzoli, and Daniela Rus. 2015. Autonomous Golf Cars for Public Trial of Mobility-on-Demand Service. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1164–1171.
13. Baoxing Qin, Zhuang Jie Chong, Sooh Hong Soh, Tirthankar Bandyopadhyay, Marcelo H. Ang, Emilio Frazzoli, and Daniela Rus. 2014. A spatial-temporal approach for moving object recognition with 2D LIDAR. In International Symposium on Experimental Robotics (ISER).
14. Aaron Steinfeld, Terrence Fong, Michael Lewis, Jean Scholtz, Alan Schultz, David Kaber, and Michael Goodrich. 2006. Common Metrics for Human-Robot Interaction. In 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI '06), 33–40. DOI: http://dx.doi.org/10.1145/1121241.1121249
15. C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. N. Clark, J. Dolan, D. Duggins, T. Galatali, C. Geyer, M. Gittleman, S. Harbaugh, M. Hebert, T. M. Howard, S. Kolski, A. Kelly, M. Likhachev, M. McNaughton, N. Miller, and D. Peterson. 2008. Autonomous driving in urban environments: Boss and the Urban Challenge. Journal of Field Robotics 25 (2008), 425–466.