
Capstone Design Final Report

Robotics and Computer Vision

Autonomous Vehicle Learning System [Robotics Team]

Authors: Luke Miller, Robert Schultz, Rahul Tandon

Supervisor: Professor Kristin Dana

Rutgers, The State University of New Jersey

Department of Electrical and Computer Engineering

May 8, 2016


1 Introduction

The Autonomous Vehicle Learning System (AVLS) enables evolutionary optimization of traffic systems through individual vehicle data collection. The goal of the AVLS is the design and control of an autonomous vehicle, using network communication, computer vision and route optimization. This concept can be scaled to consumer-sized vehicles, promoting a revolutionized transportation system with no dependency upon human-based traffic indicators. On a small scale, lane and sign detection enable the AVLS robot to avoid obstacles and navigate an un-mapped mock city testbed. As intersections are registered, newly discovered paths between nodes are communicated to a centralized server which optimizes routes over time. Vehicles will drive cyclically between a set of destinations using paths selected by a centralized server. This study demonstrates important capabilities of autonomous travel as a widespread form of transportation.

1.1 Problem Statement

Current traffic systems rely on visual indicators to limit congestion and accidents. Humans are error prone and require signals to aid in navigation. Machines do not require visual instruction and can make use of nearly instantaneous wireless communication. Using computer vision, robotics and machine learning, autonomous vehicles can be realized that exceed the human capability of driving. This will reduce accident-related mortality, optimize route navigation and limit overall traffic congestion.

1.2 Major Components

The Autonomous Vehicle Learning System has been divided into two subgroups with specific focuses on Robotic Design & Control and Computer Vision. Components within the Robotics group include: Hardware Design, Network Communication, and a Proportional Integral Derivative Drive Controller. Components within the Computer Vision group include: Lane Detection, Stop Sign Recognition, Intersection Identification, and large-scale simulation.

Required Components:
• Four-wheel Drive Robotic Base
• Single Board Computer
• Wireless Networking Adapter
• Computer Vision Camera
• Test Bed Platform
• Isolated Wireless Communication Network
• Vehicle Data Processing Server

2 Methods / Objectives

Tier 1 - Primary Goals

Hardware Control The Tier 1 hardware goal of AVLS is to design and build a small autonomous vehicle capable of lane and traffic indicator detection. The vehicle has independent drive trains controlled by a four-way H-Bridge motor controller. Both rear wheels utilize rotary encoders to track position. The Raspberry Pi connects to a private and encrypted router which facilitates communication with a local server.

Hardware Components:
• Single-board Computer: Raspberry Pi 2; 26 I/O pins programmable via C++ and Python
• X-Media USB Wireless Adapter; 300 Mb/s Network Communication
• Raspberry Pi Camera Module; 2592 x 1944 Resolution; 90 fps
• Baron Four-wheel Drive Robotic Base; Two Rotary Encoders
• D-Link Gigabit Wireless Router; WPA2 Network Encryption
• New Trent 10,000 mAh Backup Battery with 2A and 1A output

Noteworthy parameters to evaluate:
• Test-bed Materials
• Robotic Platform

Figure 1: This is the final robotic platform designed by the AVLS robotics sub-team. The four independently controlled motors proved to be critical for precise navigation.

The preliminary robotic implementation of AVLS was heavily dependent on the modification of an existing remote control car. While this vehicle selection was cheaper for initial testing and design, the robot proved to be inaccurate and unreliable in terms of speed and unintended drift. Without an integrated Pulse Width Modulation speed controller and variable degrees of turning, it was nearly impossible to precisely navigate the vehicle.

The final implementation, as seen in Figure 1, incorporates integrated PWM control, four-way independent drive and rotary encoders. The speed of the vehicle can be specified through an open source library, and turning can be accurately controlled by adjusting the ratio of left to right wheel speeds.
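
To illustrate the wheel-speed-ratio steering described above, the following minimal sketch uses the RPi.GPIO software PWM interface. The pin assignments, PWM frequency and the drive() helper are assumptions made for illustration, not the open source library actually used on the robot.

    # Illustrative differential-drive sketch (not the robot's actual driver).
    # LEFT_PWM_PIN / RIGHT_PWM_PIN are hypothetical H-bridge enable pins.
    import RPi.GPIO as GPIO

    LEFT_PWM_PIN, RIGHT_PWM_PIN = 18, 19

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([LEFT_PWM_PIN, RIGHT_PWM_PIN], GPIO.OUT)

    left = GPIO.PWM(LEFT_PWM_PIN, 100)     # 100 Hz software PWM
    right = GPIO.PWM(RIGHT_PWM_PIN, 100)
    left.start(0)
    right.start(0)

    def drive(speed_pct, ratio=1.0):
        """Drive forward at speed_pct (0-100 duty cycle).
        ratio = left speed / right speed: ratio > 1 turns right,
        ratio < 1 turns left, ratio == 1 drives straight."""
        if ratio >= 1.0:
            left_duty, right_duty = speed_pct, speed_pct / ratio
        else:
            left_duty, right_duty = speed_pct * ratio, speed_pct
        left.ChangeDutyCycle(left_duty)
        right.ChangeDutyCycle(right_duty)

    drive(40, ratio=1.2)   # gentle right turn at 40% duty cycle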

Software Control The software priority in Tier 1 is to engineer a vehicle control utility. The autonomous vehicle must smoothly stay within its lane, stop at traffic indicators and consistently perform turns at an intersection. Robotic software control bridges raw computer vision data to instruct hardware for test-bed navigation.

Smooth driving is dependent on the implementation of a Proportional Integral Derivative Controller tuned to a specific operational environment. The proportional term defines a proportional rate at which the location of the robot must be adjusted after each cycle. The integral term addresses the accumulation of error over time in a particular direction; this term eliminates the buildup of offsets. The derivative term dictates the rate at which error adjustments are made and addresses the vehicle overshooting a target trajectory. Though AVLS has both error angle and error displacement as metrics for lane deviation, the current PID only uses horizontal displacement as an input argument for the controller. After PID implementation, there was a substantial decrease in the average angle error of the vehicle over time, indicating smoother error correction as opposed to a basic proportional controller.
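
As a sketch of this control law, the snippet below implements a discrete PID update on the horizontal displacement error. The class structure and time-step handling are illustrative rather than the exact on-vehicle code; the gains shown in the usage comment are the tuned values reported in Section 3.2.

    class PIDController:
        """Discrete PID acting on the horizontal lane-displacement error."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error, dt):
            # Proportional: correct in proportion to the current displacement.
            p = self.kp * error
            # Integral: remove a persistent offset that accumulates over time.
            self.integral += error * dt
            i = self.ki * self.integral
            # Derivative: damp the correction rate to limit overshoot.
            d = self.kd * (error - self.prev_error) / dt
            self.prev_error = error
            return p + i + d

    # Usage with the gains from Section 3.2:
    # pid = PIDController(kp=0.011, ki=0.0006, kd=0.001)
    # correction = pid.update(displacement_error, dt)   # applied each drive cycle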

Noteworthy parameters to evaluate:
• Lighting Conditions
• PID Controller Parameters
• Robot Speed

Tier 2 - Secondary Goals
These goals have not yet been realized in robotic form, but are discussed and analyzed through a software simulation.

Scalability This project aims to demonstrate the scalability of learning systems for Autonomous Vehicles. The vehicles should be reproducible with uniform components and specifications. The newly created vehicles will be immersed in an environment together, and will perform consistently during nominal and off-nominal behavior. At this stage, each vehicle will recognize another as a generic obstacle, and basic maneuvers will be used to avoid collision.

Noteworthy parameters to evaluate:

• Vehicle Count

Navigational Mapping In order to achieve Tier 2 secondary goals, navigational mapping is required for a stable system. This enables a vehicle to detect its location within the environment in relation to other vehicles or obstacles around it. A local map is generated by each vehicle based on landmarks at intersections, and the local maps are merged into a global map.

Noteworthy parameters to evaluate:

• Map Size
• Lane Count
• Road Distances

Tier 3 - Tertiary Goals
These goals have not yet been realized in robotic form, but are discussed and analyzed through a software simulation.

Communications Navigational mapping will open possibilities into vehicle-vehicle communication. To implement this feature, cellular radios will transmit individual vehicle data to a centralized location which distributes pertinent information to each car. Alternatively, vehicles can communicate data through a peer-to-peer network and bypass reliance on a central server. The route of each vehicle will be calculated dynamically based upon parameters collected from each car.

Noteworthy parameters to evaluate:

• Wireless Signal Band (WLAN, Bluetooth, USRP)

• Communication Method (Server, Peer-to-Peer)

Route Optimization Once a communication network is established between vehicles, a functional path will be found using depth- or breadth-first search. The current global map and traffic knowledge is used heuristically to develop an optimal path between predefined destinations for the vehicles. This route optimization happens in real time, allowing dynamic access for any vehicle during run time. Random mutations in routes due to traffic or obstacles can aid in the selection of more optimal routes over time.

Noteworthy parameters to evaluate:

• Sort Algorithm of Indexed Paths
• Initial Path Algorithms

Software Simulation
Overview The AVLS simulation is a graphical representation of how AVLS will operate on a large scale. While autonomous vehicles (AVs) explore an un-mapped city, they populate nodes and edges of a directed graph. Individual findings are merged into a unified global map on a server. The server is responsible for aiding the AVs in navigation. To process visuals, the GraphStream API is used to facilitate analysis.

Algorithm As each vehicle seeks its own destination, an A* search is performed on the server-side unified map to determine the shortest path for each car. The A* search algorithm is a modified version of Dijkstra's algorithm, which creates a tree evaluating the weights of each edge. A* was selected for its heuristic-based weights in graph searching.
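
A minimal, self-contained sketch of this search is shown below. The graph structure, coordinates and straight-line heuristic are illustrative assumptions (the actual simulation builds and renders its graph with the GraphStream API); the edge weights stand in for the distance weights collected by the AVs.

    import heapq
    import math

    def a_star(graph, coords, start, goal):
        """graph: {node: {neighbor: weight}} directed adjacency map.
        coords: {node: (x, y)} used for a straight-line distance heuristic.
        Returns the lowest-cost path from start to goal, or None."""
        def h(n):
            (x1, y1), (x2, y2) = coords[n], coords[goal]
            return math.hypot(x2 - x1, y2 - y1)

        frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
        best_g = {start: 0.0}
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nbr, w in graph.get(node, {}).items():
                g_new = g + w
                if g_new < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g_new
                    heapq.heappush(frontier, (g_new + h(nbr), g_new, nbr, path + [nbr]))
        return None

    # Example: intersections discovered by the AVs, with distance weights.
    graph = {"A": {"B": 7, "C": 3}, "C": {"B": 3, "D": 7}, "B": {"D": 2}}
    coords = {"A": (0, 0), "B": (4, 0), "C": (2, 1), "D": (5, 0)}
    print(a_star(graph, coords, "A", "D"))   # ['A', 'C', 'B', 'D']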

Simulation Design and Goal The goal of this simulation is to monitor and evaluate traffic flow under the AVLS system. While each vehicle follows its most optimal route to predetermined destinations, the length and condition of the process can be evaluated systematically. Parameters addressed are vehicle count, graph / route weighting schemes, map design, as well as average traffic speed and driving times.

Behavioral States The AVLS will operate in 3 states on a macroscopic level: a transient state, transitioning state, and steady state. The behavior of the system is modeled by state, with the transient state initializing the system, transitioning state switching system modes, and steady state modeling system behavior as time progresses. Each of the behavioral states can be seen as the simulation switches between conventional traffic, AVLS with traffic indicators, and an ideal AVLS implementation with no human-based traffic indicators.

Transient State Behavior At the start of the simulation, AVs do not have prior knowledge of their environment. They drive locally and communicate their location to a centralized server, which stores all pertinent information on the vehicles for route navigation and traffic analysis. When intersections are visited by multiple AVs, the server connects the locations as nodes in a graph to properly analyze the city.

Figure 2: Graphing Algorithm Illustrated. This demonstrates the mapping of a previously unknown location. In part 4, the world is completely mapped and an optimal path can be found between two nodes.

Transitioning Behavior Figure 2 contains four major stages of the graphing algorithm for a small town. First, vehicles map their local area as they drive around. As vehicles arrive at already discovered nodes, individual segments begin to join together in a global map. After enough nodes are found to link a given starting location to a destination, the searching algorithm can be performed while vehicles complete the map.

Driving Patterns The simulation contains AVs that are programmed to drive to different destinations following a certain parameter-based driving mode. Route navigation is done based upon the distance weights between each intersection; varying these weights results in noticeably different behaviors in the AVs. In a deployed system, these weights would need to be tuned to the specific environment. Sample weights may or may not include traffic depending on the mode of AVLS implementation.
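
A small sketch of such a weighting scheme is given below; the function name and the congestion penalty factor are hypothetical, intended only to show how the same map can produce different routing behavior per mode.

    def edge_weight(distance, traffic_density, mode):
        """Routing weight for one road segment between intersections.
        'traditional' and 'optimal' modes route on distance alone;
        'avls_traffic' penalizes congested segments (penalty factor is illustrative)."""
        if mode == "avls_traffic":
            return distance * (1.0 + traffic_density)
        return distance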

Steady State Behavior: Traditional Traffic Mode This test mode serves as the control group representative of current driving mechanisms. The AVs are placed into a mock city with no communication or traffic recognition. Driving blindly into congestion, heavy traffic grows within real-time minutes of simulation. This causes the amount of time for each car to reach its destination to increase drastically.

Figure 3: Steady state model of traditional traffic flow. Each black square represents an AV driving in the simulated test bed. There is major congestion along the central road, prohibitively halting traffic flow.

Steady State Behavior: AVLS & Traffic Mode In the second mode, the AVs are placed into the test bed with real-time knowledge of other AVs. However, due to modern traffic infrastructure, stop signs and lights are prevalent factors that slow the flow of vehicles. These factors cause slowdown and prevent AVLS from synchronized driving. As a result, light congestion can be seen in this test case; however, it is much more uniformly distributed. This intermediate phase of autonomous travel modestly decreases the time necessary for cars to reach their destinations.

Figure 4: Steady state model of autonomous traffic flow with real-time heuristics. Each black square represents an AV driving in the simulated test bed. In this case, congestion is dispersed across the test bed, equalizing and improving traffic flow.

Steady State Behavior: Optimal AVLS Mode In the third mode, the AVs are placed into the test bed with real-time knowledge of other AVs, and the road infrastructure is not prohibitive. Cars will not slow down in traffic, as the AVs are driving harmoniously, and in a synchronized fashion. This case highlights significant improvements over traditional traffic flow and represents a future in which every vehicle on the road is autonomous.

Figure 5: Steady state model of autonomous traffic flow in an optimal system. Each black square represents an AV driving in the simulated test bed. In this test case, congestion is greatly dispersed across the test bed, providing a significant improvement to traffic flow.

Analysis: Average AV Time In Figure 6, the average time required for the AVs to reach their randomly assigned destinations was shown to decrease as a result of each mode of autonomous travel. Traditional traffic patterns were modeled for the first fifteen-thousand frames. The average time required for each car to reach its destination peaked at 100 seconds and remained at a steady state of around 90 seconds. After an additional fifteen-thousand frames, AVLS with traffic mode was simulated in the same environment and the average simulation time decreased to 60 seconds, a 33% overall improvement over the traditional mode. In an optimal situation, the cars would not have to slow as a result of traffic. The third mode exhibited peak optimal performance around 30 seconds from origin to destination, a 50% improvement over the second case and 66% improvement over the first case.

Figure 6: This graph represents the average simulated time necessary for AVs to reach their destinations. The line breaks denote a change in mode from conventional traffic to AVLS with congestion, to ideal AVLS without congestion.


Analysis: Average AV Speed In Figure 7, the average speed of the AVs was plotted versus simulated time. The optimal speed of the AVs in the simulation was set as 65 mph, but as the city is very populated, this goal cannot be reached. In the first 800 frames of the simulation, the traditional traffic mode was evaluated, with an average of about 15 mph in the steady state. After the 800th frame, the AVLS navigation mode was initialized, and speed improved by 3 mph. The AVLS system brought a 20% increase in average speed for the vehicles. The third mode was not plotted, as in an optimal system, all cars would be going at the fastest speed of 65 mph.

Figure 7: This line graph represents the average speed of the AVs over simulated time. The line breaks denote a change in mode.

Analysis: Case Comparison Among the three modes, the third mode had the highest performance, while the second mode exhibited a significant performance increase over current traditional traffic modes. This increase was between 12% and 17%. The AVLS demonstrates promising performance in cities that are not saturated with vehicles, or when vehicles can be synchronized with other AVs on the road.

Figure 8: This matrix of calculations displays the performance increases between AVLS modes. These figures include data from the 'transient' portion of the simulation as well as the 'steady state' portion.

3 Experimental Results

3.1 Speed Determination

Using proportional control, an experiment was performed to determine an ideal speed with minimal error in both angle and displacement. As expected, trials depicted a 2.25 degree increase of average error per inch/second increase in speed. This is due largely to the networking delay induced by streaming images and drive commands between the server and vehicle. With a reduced speed, the car is acting on more recent drive commands than if the car were moving faster. In a full implementation, this would be resolved by an on-board computer carrying the load of image processing.

Figure 9: Speed plotted against error rate in average horizontal displacement and angle. The graph represents a linear increase in error angle for every additional inch/second in speed. 2 inches/second was chosen as the final robot speed as a compromise between smooth driving and latency.

3.2 PID Coefficient Determination

u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt}    (1)

An experiment was conducted to tune and arithmetically determine the best coefficients for a PID Controller. An over-head camera was utilized to track the car as it traversed a straight lane of six feet. In order to tune the PID parameters, each was adjusted independently, and then they were adjusted simultaneously during the next set of tests. For each set of coefficients, four trials were conducted and each resulting video was fed into MATLAB to calculate the vehicle's centroid. Since the vehicle and test-bed color are distinct shades, MATLAB can track the darkest area of each frame and calculate its two-dimensional center. The same vehicle centroid value was used for each iteration of coefficients, and was utilized to trace the vehicle across the frame.
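
The original tracking was done in MATLAB; a rough Python/OpenCV analogue is sketched below under the same assumption that the dark robot contrasts with the lighter test bed. The intensity threshold and video path are placeholders to be tuned per setup.

    import cv2
    import numpy as np

    def track_centroids(video_path, dark_thresh=60):
        """Return the (x, y) centroid of the darkest region in each frame,
        approximating the vehicle's position under the overhead camera."""
        cap = cv2.VideoCapture(video_path)
        centroids = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            ys, xs = np.nonzero(gray < dark_thresh)   # pixels darker than the test bed
            if len(xs):
                centroids.append((xs.mean(), ys.mean()))  # two-dimensional center
        cap.release()
        return centroids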

Figure 10: Four trials were conducted using PID values: P=.011, I=.0006, D=.001. The black line represents the center of the lane, while colored lines represent the vehicle's deviation during each trial.

In 3 out of 4 trials for the first parameter set, the car was able to correct its path after deviation. An average of the four trials is shown in Figure 11.

Figure 11: Four trials were conducted using PID values: P=.011, I=.0006, D=.001. The black line represents the center of the lane, while the red line represents the average deviation of these four trials.

This parameter set resulted in the AV closely following the line with a maximum displacement of around 30 pixels from the median, before returning to the center. When scaled to fit every parameter set, as in Figure 12, this set gives a clear and nearly straight trajectory in relation to the center lane.

Figure 12: The average vehicle displacement for each parameter set in relation to the center. Some parameter sets caused the car to deviate a far greater distance from the center of the lane than others. Looking at the averages of all the PID variations, an optimal set of PID values was selected.

Based on these results, an equation for the PID controller was determined. The coefficients used were based on parameter set 1. Compared to the other parameter sets, set 1 had the least deviation over time and corrected its trajectory quickly when off course.

P_n(e) = 0.011\, e_n + 0.0006 \sum_{k=0}^{n} e_k + 0.001 \frac{de_n}{dn}    (2)

After PID implementation, the robot experiences an average 2.93 degree error at the chosen speed of 2 in/sec. The result is a 73.12% decrease in error from a simple proportional controller. Additionally, the PID controller is used to counteract driving errors that result from photo transfer latency. This correction smoothing helps to prevent the vehicle from over- or under-turning based upon asynchronous images.

3.3 Network Communication

The server and the Autonomous Vehicle must have a pipeline through which to relay real-time driving information. The server sends drive commands based on processed images received from the car.


3.3.1 Previous Implementation

Figure 13: The initial implementation of AVLS's Network Communication System was quite lengthy. This was not an optimal process because it takes too many steps for the car to receive a single drive command.

In the previously implemented communication system, a camera mounted on the car would capture and save a single photo at a time. A Python script running on the car reads the image and transfers it over the network using the TCP protocol. Meanwhile, the server actively receives packets and writes them to a file. Single images are transferred 1 KB (1,024 bytes) at a time. Once an entire image has been received, computer vision is used to identify important features. The processed drive command is relayed back to the car, which moves accordingly. This is a very lengthy and convoluted process because the camera needs to be initialized for each image.
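
A hedged sketch of the car-side transfer in this original pipeline is shown below; the server address and file name are placeholders, and the real script may differ in framing details.

    import socket

    SERVER = ("192.168.1.10", 5000)   # hypothetical server address

    def send_image(path):
        """Send one saved camera image to the server in 1 KB TCP chunks."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.connect(SERVER)
            with open(path, "rb") as img:
                while True:
                    chunk = img.read(1024)     # 1 KB per send, as described above
                    if not chunk:
                        break
                    sock.sendall(chunk)
        # Closing the connection signals end-of-image; the server writes the
        # received bytes to a file and runs the computer vision step on it.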

The implementation in Figure 13 has many points of failure that each induce latency. An updated version of the process using an MJPEG stream is shown in Figure 14.

3.3.2 Current Implementation

Figure 14: The current implementation of AVLS's Network Communication System. This process requires fewer steps compared to the original implementation.

In the current implementation, a camera mounted on the car starts an MJPEG stream while a computer vision application on the server processes and analyzes the frames in real time. The server calculates angle and horizontal displacement, which is sent over the network to the car's PID controller.

Using the MJPEG stream significantly reduces network transfer time. Images are read from a network stream as opposed to being written to a file. This helps to avoid memory leaks and decreases the time devoted to disk I/O.

The frame rate of the MJPEG stream was adjusted over time to properly utilize the frame buffer within the computer vision application. Initially, processing exhibited a 5 second delay during initialization in order to clear the buffer of previously collected frames. Trials for frame rate determination are discussed in the table below.

Image read configurations
Frame Rate   Skipped    Lag
24 FPS       0 Frames   5 seconds
24 FPS       1 Frame    4.5 seconds
24 FPS       2 Frames   4.2 seconds
24 FPS       3 Frames   3.9 seconds
24 FPS       4 Frames   3.4 seconds
24 FPS       5 Frames   2.6 seconds
12 FPS       0 Frames   3.7 seconds
12 FPS       1 Frame    3.1 seconds
12 FPS       2 Frames   2.5 seconds
12 FPS       3 Frames   1.5 seconds
12 FPS       4 Frames   1.4 seconds
12 FPS       5 Frames   1.4 seconds
6 FPS        0 Frames   3.3 seconds
6 FPS        1 Frame    2.2 seconds
6 FPS        2 Frames   1.7 seconds
6 FPS        3 Frames   1.3 seconds

Figure 15: Latency was measured using an approximation of the average time between an action and its occurrence in the video feed. The test examines a hand passing over the video feed.

A frame rate of 12 frames per second, with 3 frames being discarded at each loop, was chosen as the optimal solution. Frames are skipped in order to avoid overloading the computer vision buffer. Any additional skipped frames would help reduce lag, but at a cost to driving accuracy.
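
A minimal sketch of this read loop with OpenCV is shown below; the stream URL is a placeholder, and the lane-detection step is only indicated by a comment.

    import cv2

    STREAM_URL = "http://192.168.1.20:8080/?action=stream"  # placeholder MJPEG URL
    SKIP_FRAMES = 3   # discard 3 frames per loop at 12 FPS, per the table above

    cap = cv2.VideoCapture(STREAM_URL)
    while cap.isOpened():
        for _ in range(SKIP_FRAMES):
            cap.grab()             # pull stale frames off the buffer without decoding
        ok, frame = cap.read()
        if not ok:
            break
        # ... lane detection: compute displacement and angle from `frame` ...
    cap.release()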


Figure 16: These are example commands called by the computer vision application to transfer the displacement and angle to the car. The computer vision application uses a Python script, passing two arguments (displacement, angle), to transfer driving commands to the vehicle's PID controller.
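
A hedged sketch of such an invocation follows; the script name and the way the values reach the PID loop are assumptions, as the report only specifies that a Python script receives two arguments (displacement, angle).

    import subprocess

    def send_drive_command(displacement, angle):
        """Server side: hand the computed drive command to the transfer script."""
        subprocess.run(
            ["python", "send_command.py", str(displacement), str(angle)],  # hypothetical script
            check=True,
        )

    # Inside send_command.py (car side), the two arguments would be parsed and
    # forwarded to the PID controller that sets the wheel-speed ratio.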

During testing, two network protocols were examined. Both Transmission Control Protocol and User Datagram Protocol had benefits and drawbacks for our specific implementation.

Transmission Control Protocol (TCP)
• Benefits: Error Checking.
• Drawbacks: Increased Latency.

User Datagram Protocol (UDP)
• Benefits: Fast Packet Transmission.
• Drawbacks: Potential Packet Loss.

Latency The table below compares the average transfer time between network protocols: TCP and UDP. TCP takes approximately 50 times as long as UDP to send packets. Though the sample size is small, our preliminary testing indicates UDP as a sufficient solution for AVLS network communication. AVLS sends a data packet containing an angle and a displacement value. A test packet of 15 characters was used to conduct the test. UDP took .1369 microseconds and TCP took 7.05 microseconds to send the string message of 15 characters. This experiment was performed with packet sizes emulating driving commands, but will be expanded to encompass image transfer for a broader view on network communication.

Average Transfer Time
Network Protocol   Average Time (s)
TCP                7.0532E-06
UDP                1.36971E-05

Figure 17: TCP v. UDP Latency (in seconds). This chart shows the average time it takes to send a data packet over the TCP/UDP network protocol.

Error The TCP network protocol checks for dropped packets via verification from packet recipients. UDP does not have this fail-safe in place and will occasionally lose packets. Packet loss is highly detrimental in a real-time reasoning system in which missing information could result in damage to components.
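
The sketch below shows how a short "displacement,angle" drive packet could be sent and roughly timed over each protocol; the addresses, packet contents and timing method are illustrative, not the exact benchmark used for the table above.

    import socket
    import time

    CAR_ADDR = ("192.168.1.30", 6000)    # hypothetical car address
    packet = b"12.50,-3.70"              # "displacement,angle" drive command

    # UDP: a single datagram, fast but with no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.perf_counter()
    udp.sendto(packet, CAR_ADDR)
    print("UDP send:", time.perf_counter() - start, "s")
    udp.close()

    # TCP: connection setup and acknowledgements add latency, but a lost
    # drive command is retransmitted instead of silently dropped.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(CAR_ADDR)
    start = time.perf_counter()
    tcp.sendall(packet)
    print("TCP send:", time.perf_counter() - start, "s")
    tcp.close()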

4 Discussion

4.1 Challenges

The most significant problem with a remote computer vision guiding controller has been the noticeable latency between capturing frames from the robot and asserting drive commands. When tested in isolation, Python-based MJPEG streaming over HTTP results in a noticeable but workable latency of around 200 ms on average. The primary contribution to this latency is line detection processing from the camera. This latency is compensated using a tuned PID controller.

The initial selection of a robotic base proved to be error prone. Due to manufacturing complications, a misalignment existed between the front wheels. This resulted in significant deviation from a directed course. Despite attempts to programmatically compensate for this drift, experimentation indicated that deviation varied greatly between multiple necessary driving operations. A new, more precise, robotic base was chosen.

The simulation currently relies on data contributed by individual vehicles within the system and does not implement any learning algorithms to optimize routes. The current method, referred to as evolutionary optimization, utilizes random paths for all the AVs in order to best understand the testbed and optimize traffic flow. A challenge has been to employ system-wide learning algorithms that unite the AVLS robot with the software simulation.

The construction of a test environment has been a developmental bottleneck that kept us from properly testing network communication, driving speed and PID coefficients. A uniform environment in which to perform repeatable tests is crucial to determining success. The challenge was selecting appropriate yet cost-efficient materials and a location for this test environment. Black electrical tape lanes proved to be extremely reflective and would interfere with lane detection. Making the shift to a matte paper tape significantly improved the lane detection rate.

4.2 Future Work

Currently, all Tier 1 & 2 goals have been mostly achieved and an autonomous vehicle has been implemented. Tier 3 has been partially implemented with the AVLS simulation. The vehicle is able to maintain its trajectory while navigating through a lane. It is also capable of detecting stop signs and QR codes at intersections.

One aspect of these tiers is navigational mapping. The goal is to have each autonomous vehicle learn the map of the city as it drives from intersection to intersection. The car contributes individual data to enhance a centralized understanding of the world. A QR code placed at each intersection will be used to uniquely identify vehicle location. Therefore, when a car approaches an intersection, the server will recognize the QR code and record the intersection as a node accessible from the vehicle's last location. All vehicles will recognize this path and continuously build the map of the city as they drive. Increasing the number of cars reduces the amount of time needed to generate the map.
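
A sketch of the server-side bookkeeping this implies is shown below; the data structures, the use of an odometer reading for edge distance, and the function name are assumptions for illustration.

    # Server-side sketch: grow the global map as vehicles report scanned QR codes.
    global_map = {}     # {intersection_id: {neighbor_id: distance}}
    last_seen = {}      # {vehicle_id: (intersection_id, odometer_reading)}

    def register_visit(vehicle_id, qr_code, odometer):
        """Record the intersection identified by qr_code and connect it to the
        vehicle's previous intersection with the distance driven between them."""
        global_map.setdefault(qr_code, {})
        if vehicle_id in last_seen:
            prev_node, prev_odometer = last_seen[vehicle_id]
            global_map[prev_node][qr_code] = odometer - prev_odometer
        last_seen[vehicle_id] = (qr_code, odometer)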

4.3 Potential AVLS Extensions

• Situation Costs: Heuristics can be included in navigation and routing based on a network notification system to simulate school zones, flood areas, and closed roads.

• Ultrasonic Sensor: A front and optional rear ultrasonic sensor can be mounted on a single vehicle for additional collision control.

• Multiple Lane Expansion: Additional cameras can be mounted onto the vehicle, and the control system may be updated to navigate multiple lanes, further increasing the complexity of navigation.

5 Cost Analysis

5.1 Engineered Implementation

Cost of one vehicle
Item Name                    Quantity   Cost
DFRobot 4WD Robot*           1          $56.00
Raspberry Pi 2               1          $35.00
Pi WiFi Adapter              1          $10.00
Pi Camera                    1          $20.00
Rechargeable Battery Pack    1          $20.00
Total Cost                              $141.00

Figure 18: Variable Costs of AVLS. The above total cost is associated with the cost of one Autonomous Vehicle. Due to a cost of $141 per AV, sufficient funding would be required to test the system on a larger scale.

* DFRobot 4WD Arduino-Compatible Platform w/Encoders

Cost of other materials
Item Name                 Quantity   Cost
16 Sq. Ft Foam Padding    2          $40.00
Rolls of Scotch Tape      2          $6.00
Wi-Fi Router              1          $10.00
Total Cost                           $56.00

Figure 19: Fixed Costs of AVLS. These costs are shared across vehicles, since all the vehicles drive on a single testbed and connect to the same router.

Total Cost The total cost of this project is split into two categories. The cost of a vehicle is $141.00. The other material costs are shared: all cars drive on the same test track and connect to the same router, therefore the cost of the shared materials is $56.00. The total cost of this project is $197.00:

\text{TotalCost} = 141 \cdot Q_{\text{car}} + 56 = 141 \cdot 1 + 56 = \$197.00    (3)


5.2 Commercial Implementation

In a commercial AVLS development scenario, it is not necessary to construct a test bed environment. The car will house its own computer vision processor and will not need to offload real-time calculations. Real cars drive at greater speeds, so it is essential to have a higher quality camera with improved frame rate and resolution. The computer vision should take advantage of a Graphics Processing Unit for higher frame throughput. Poor-resolution images or skipped frames can lead to incorrect commands, which can be dangerous to the passengers. Multiple infrared sensors are also necessary in order to detect objects around the car, so the car can make the appropriate movement to avoid them.

The commercial implementation would also require a higher bandwidth communication network in order to relay map and location information to a centralized server.

Estimated cost
Item Name            Quantity   Cost
High speed camera    1          $250.00
NVIDIA GeForce 750   1          $100.00
Raspberry Pi 2       1          $35.00
Router               1          $200.00
Infrared Sensor      8          $800.00
Library licensing    1          Unknown
Total                           $1,385 + Unknown

Figure 20: This represents estimates for the cost of a real world implementation. These estimates do not include the price of the car itself.

For safety purposes it is recommended to house 2 infrared sensors on each side of the car as well as 2 on the back and front. This is useful for detecting objects next to the car that may be in the blind spot of a human driver. The sensors in the back and front can aid in the avoidance of front and rear collisions.

A graphics card like the NVIDIA GeForce 750 is a GPU used mainly in consumer desktops. It has more processing power than the typical Raspberry Pi and helps reduce latency during processing. The prices shown above are subject to change as the quality or quantity of the items is modified. For example, adding multiple cameras to provide a greater range of view can increase the price depending on the camera quality.


6 Current Trends

Traditional automatic vehicle theory relies heavily on probabilistic and stochastic analysis [5]. Although the scope of most research deviates into advanced control systems, the basis for vehicle design relies on optical and gyroscopic sensors much like the AVLS.

Companies like Google and Tesla are attempting to create a mass market standard for the autonomous vehicle platform. Their vision is to replace the human element in driving within our lifetime. Many automotive manufacturers have joined in to bring a small level of autonomy to their current commercial lineup. New vehicle models often feature self-parking systems and guidance auto pilots to keep the car in its lane. This trend began with automatic braking and has advanced to commercial vehicles being summoned from a remote location for use.

Kettering University One existing solution for an autonomous vehicle was developed by students at Kettering University.

The authors focus on how autonomous cars would interact with each other while driving. This alludes to a future where autonomous vehicles are commonplace on the roads and humans are unnecessary in the travel equation. When all vehicles are autonomous, an individual unit will know its particular position when compared to other autonomous cars. Autonomous vehicles will be able to relay this information to each other in order to optimize traffic and avoid collision [3]. This is similar to the goal of AVLS. In the future, stop signs or street lights may not be required, as an autonomous vehicle will relay its information to other autonomous vehicles and navigate accordingly.

In order to reduce the on-board processing done by the car, the students developed an external Master Station. This master-slave relationship reduces latency, decreases the number of computations done on the vehicle microprocessor, and allows the implementation to be extended to multiple cars [3]. AVLS also uses an external server to reduce the load on the Raspberry Pi. However, AVLS in its current stage has only one master, while the students at Kettering University elected for a one-to-one correlation between vehicles and offline processing units. This may result in an enhanced level of security and guaranteed, prioritized processing time [3].

AVLS uses a single camera to capture the vehicle's view; the server processes this information and returns an angle and displacement value. The students' implementation employs a wide variety of sensors, including a gyroscope, accelerometer, compass and speed encoder, to determine the angular and linear position of the car. An IR range sensor is also added so the car can avoid obstacles it encounters. IR works more efficiently at small distances, which is practical for chaotic city driving [3]. The Master Station has a myRIO real-time FPGA board which takes in local coordinates and formulates the position of the car in global coordinates [3]. Using this information and a position sensor, the car knows its relative position in relation to the global coordinate plane. Unlike AVLS, this implementation relies more on sensor data rather than the camera feed to drive the car autonomously.

In a large-scale setting the wireless communication must have an efficient range. AVLS uses a local router to which the car and server connect. Similarly, the students used a Digi XBee wireless module, which gives a range of 300 feet [3]. Having a greater range allows for a larger test environment. This implementation also uses a different protocol to transfer data between the server and car. AVLS currently uses the UDP network protocol and will in the future use the TCP protocol since it is more reliable. The students used a serial UART protocol with RTS/CTS handshake for wireless communication because it reduces the number of frame collisions that occur in the hidden node problem [3]. The hidden node problem occurs when a particular node can see the host node but not the nodes around it. This way a car can communicate locality to other cars in the system. This can be used to eliminate the need for stop signs or street lights.

Google Self Driving Car On a larger scale, Google is using similar methods to develop an autonomous car system to transport passengers to select destinations. This approach is a reevaluation of conventional transportation systems with an emphasis on automaticity. With a fleet of autonomous vehicles, individuals will not require personal vehicles and the number of cars on the road will significantly decrease. Unlike AVLS, which relies on a camera for computer vision, Google uses a single 64-beam laser to generate a three-dimensional map of the car's surroundings. This rendering is cross-referenced with Google's proprietary high resolution world maps to increase accuracy of the image. The downside of this approach is that the autonomous vehicles, while highly accurate, require their environment to be pre-mapped by another fleet of Google mapping cars. The vehicle is equipped with four radars for long range object detection, a windshield-mounted camera for traffic lights, a GPS, wheel encoders and an inertial measurement unit. The culmination of these sensors provides the vehicle with a contextual understanding of its environment and allows it to detect anomalies even at far distances. Google has been testing these Self Driving Cars in Bay Area cities, with a supervisory human driver on board. Although these tests have been mostly successful, Google cars have been experiencing accidents as a result of being unable to predict the actions of human drivers. In a world of entirely autonomous vehicles that communicate their intentions, this would not be an issue. [2]

Dedicated Short Range Communication Much like AVLS, DaimlerChrysler has taken a new approach to autonomous travel through vehicle-vehicle or vehicle-infrastructure communication. DSRC piggybacks on IEEE 802.11a at 5.8 GHz with custom protocol modifications. Signals are intended to be broadcast from each vehicle and received by other nearby cars. This provides the vehicle with an "extended information horizon" that allows the driver to be aware of situations that may not yet be visible. Preliminary DSRC implementations examine human-operated vehicles relaying critical safety information to surrounding cars, in an attempt to reduce accident fatalities. One proposed example is "Traffic Signal Violation Warning", which would notify an operator if their vehicle is expected to enter an intersection on a red light. This direct application of DSRC would address nearly 2,300 yearly fatalities in the US. One concern expressed with DSRC is the possibility of overwhelming a driver with information, ultimately distracting them from the road. The research group has focused on the security and scalability of their design. When receiving over-the-air information from other vehicles that dictates your car's behavior, it is important to verify that the data is coming from a legitimate source. A primary concern of this method is the hijacking of autonomous vehicles or the phishing of drivers remotely through spoofed packets. Though the primary directive of DSRC is to increase the safety of navigating road ways, DSRC could be used for traffic advisory, electronic tolling, and updates to digital maps. [1]

Cloud Computing for Robots Cloud computing is becoming a cornerstone in a variety of industries, from consumer applications to large-scale mathematical processing. Companies like Amazon and Google offer other businesses virtual computer services hosted at off-site locations. In addition to data security and redundancy, these shares of computational power allow small and large companies alike to exceed their own local processing ability at a cheaper cost. This concept can be applied to the case of artificial intelligence processing. In order for artificial intelligence to become an integrated part of a consumer's daily life, access to powerful computers must be cost effective and scalable to any degree. Companies use cloud computing to reduce the financial burden of purchasing expensive computers that will be obsolete in a number of years. This solution simultaneously addresses the issue of off-site and redundant backups of information. Businesses are only required to pay a monthly or yearly fee to access hosted virtual machines for their employees [4].

As electronics get smaller, they tend to rely less on local processing power and more on offloading calculations. This trend also applies to mass market robotics. The addition of powerful computers and graphics cards can heavily inflate the cost and weight of a robot, making them hard to manufacture on a large scale. Cloud computing allows robots to become smaller while maintaining access to increasingly powerful computing resources. Much like AVLS, cloud computing enables robots to instantaneously access information from other units while contributing to a collective of information [4].

Cloud computing does have some drawbacks. Slow communication speeds will hinder robotic functionality, meaning that robots require redundant high speed communication mechanisms. If a robot were to lose connection to the host processor, safety features must ensure graceful failure, or local computation must temporarily dictate actions [4].

The architecture for cloud robotics is broken up into two levels: machine-to-cloud (M2C) and machine-to-machine (M2M). M2C allows robots to utilize the processing power of cloud-based computers for computation. M2M enables robots to work synchronously in a collaborative effort, thus creating an ad-hoc cloud. This encourages rapid information sharing and results in the increased performance of many computational units [4]. RoboEarth, an implementation of cloud computation, aims to bring these features to a wide variety of robots. RoboEarth features a plug-and-play architecture which allows for new features or nodes to be added without redesigning the system [4]. RoboEarth uses the DAvinCi framework, a software utility that provides access to parallel and scalable computing. A large environment of service robots can take advantage of the parallelism and scalability to increase efficiency and means of communication between them. RoboEarth uses the Robot Operating System (ROS) to facilitate communication with the DAvinCi server and the other robots. A Hadoop Distributed File System (HDFS) connected to the DAvinCi server is used to process complex computations requested by a robot. The DAvinCi server acts as the master node and handles processing and computation when robots do not have enough resources. [4]

Businesses have used cloud computing to scale their infrastructure quickly and efficiently. In order for robots to become a part of our daily lives, they need to take advantage of cloud computing in order to maintain a cheap and efficient system.

References

[1] Vehicle communication. IEEE Pervasive Computing, October-December 2006, ISSN 1536-1268.

[2] Erico Guizzo. How Google's self-driving car works. IEEE Spectrum Online, October 18, 2011.

[3] Dwarkesh Iyengar and Diane L. Peters. Development of a miniaturized autonomous vehicle: Modification of a 1:18 scale RC car for autonomous operation. In ASME 2015 Dynamic Systems and Control Conference, pages V003T50A008. American Society of Mechanical Engineers, 2015.

[4] D. Lorencik and Peter Sincak. Cloud robotics: Current trends and possible use as a service. In Applied Machine Intelligence and Informatics (SAMI), 2013 IEEE 11th International Symposium on, pages 85-88. IEEE, 2013.

[5] Cem Unsal, Pushkin Kachroo, and John S. Bay. Multiple stochastic learning automata for vehicle path control in an automated highway system. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 29(1):120-128, 1999.
