
Adaptive Traction Control Sensor Network

Masters Project Report

Christopher Heistand

Computer Science Department

University of Colorado at Colorado Springs

Apr 7, 2015

Approved by:

Dr. Edward Chow

(Advisor)

Dr. Jugal Kalita

Dr. Xiaobo Zhou


Abstract – Adaptive Traction Control Sensor Network aims to be the building block for an acceleration-based traction control system in a car. This paper presents the design, implementation and initial testing of the application. Performance of the ATCSN was characterized using multiple live track events. Software is available on request.

Keywords – Traction Control, 9-axis, Android, Convex Hull, Acceleration Control

1. Introduction

Traction limitations have been a topic in race cars since the invention of the modern tire. Breakthroughs in suspension geometry, tire technology and driver technique have pushed the limits of traction to incredible levels. As these advancements continue, measurement devices (accelerometers) have become cheaper and more readily available. Gravity and Gravity meters (G & G meters, or traction maps) have tried to capture these traction limits manually, but the limit, and the distance to that limit, have always been difficult to calculate, requiring hours of skidpad testing combined with hardcoded calculations.

With the introduction of cheap, commercially available accelerometers, acceleration mapping becomes available for the average track enthusiast. G & G meters are used as a way to display the gathered acceleration data in meaningful form (Figure 1): a plot of front-to-back acceleration (Y-axis) against lateral acceleration (X-axis). The vector formed by the lateral and forward Gs can be thought of as the total force on the tire itself. Traction maps emerged from G & G data to calculate an outer envelope of how close drivers are to losing control. Figure 1 shows a perfect-circle traction map of a tire able to handle the same acceleration vector in any direction.

Adaptive Traction Control Sensor Network (ATCSN) is a suite aimed at automatically finding a car's traction limits. ATCSN's second objective is to investigate whether acceleration alone can be used to determine if a car is slipping.

2. Problem Statement

2.1. Traction

A vehicle's traction limit is reached when the friction coefficient of the tire is overwhelmed by the forces applied in any direction, causing the tire to slip. The traction limit, or limit of grip, is the point at which the Gs applied to the vehicle cause it to go into a slide. At a certain point, the lateral acceleration in a corner (X-axis, Figure 1) or backwards acceleration in a braking situation (Y-axis, Figure 1) overwhelms the stickiness of the tire and the car continues on its current trajectory regardless of other inputs. In theory, the maximum amount of grip a tire has is based on the road conditions, tire compound and acceleration loading. This complex set of parameters is equivalent to the tire's maximum acceleration achieved in any single direction, excluding gravity. This can be shown with a series of vignettes about a car turning left.

As a vehicle navigates a corner, its mass accelerates in the direction of the turn. If the tires can only withstand .5 G of acceleration, the vehicle can only turn at .5 G without losing control.

Figure 2: Left Turn (axes: lateral vs. forward acceleration, showing a .5 G vector)

Figure 1: G&G Meter with Traction Map


If the vehicle brakes in the middle of the .5 G turn, it begins accelerating backwards (Y-axis) while it must also maintain its lateral (X-axis) acceleration. This causes the tires to lose traction if the total acceleration vector is greater than .5 G. Once traction is lost, the vehicle continues in its current direction of momentum.
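The check described above, that the combined lateral and braking load must stay under the grip limit, amounts to comparing the magnitude of the XY acceleration vector against that limit. A minimal Java sketch (the class and method names, and the fixed limit parameter, are illustrative; ATCSN itself learns the limit rather than hardcoding it):

```java
// Minimal sketch of the total-G traction check described above.
// Names are illustrative, not from the ATCSN source.
public class TractionCheck {
    /** Magnitude of the combined lateral (x) and forward/backward (y) Gs. */
    public static double totalG(double lateralG, double forwardG) {
        return Math.hypot(lateralG, forwardG);
    }

    /** True when the combined load exceeds the tire's grip limit. */
    public static boolean exceedsLimit(double lateralG, double forwardG,
                                       double limitG) {
        return totalG(lateralG, forwardG) > limitG;
    }
}
```

A car holding a steady .5 G turn is exactly at a .5 G limit; any braking G added on top pushes the combined vector past it.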

2.2. Hypothesis

There is a maximum amount of acceleration that a tire is able to handle before slipping, and there exists an algorithm using acceleration data that can learn the friction limit and predict how close the vehicle is to losing traction. Building this algorithm is the first step in building an acceleration-based traction control system. The majority of modern vehicles have built-in traction control systems, but they use differential wheel spin as the indicator of lost traction. However, there are scenarios where wheel spin alone is not an adequate measure and the only sign of traction loss is the acceleration change. Using acceleration to find how close a vehicle is to losing traction will allow preventative measures to be applied before the tires begin to slide.

Once a traction map is built and how close a car is to sliding is known, the follow-on step is to understand when the car is sliding. This is a far more involved problem given the input, because a sliding car looks like a car on the edge of traction and there are no distinguishing acceleration signs. The second objective of this project aims to find out if an algorithm exists that can tell loss of traction from acceleration alone.

Two algorithms are investigated to build the traction map. The original algorithm explored uses a self-organizing map/clustering algorithm that attempts to learn a moving circular maximum. The second approach is Sorted Wrap, a convex hull algorithm that pulls from several current known solutions. Both are evaluated at the track to demonstrate effectiveness or lack thereof.

3. Background and Approach

3.1. Background

ATCSN is the continuation of several projects. It is built on a Factory Five Racing (FFR) kit car, investigates Jaguar-Landrover's (JLR) Tizen platform, utilizes an Android platform for processing and aims to validate a previous project (Track-T) that uses a neural network to learn acceleration limits.

FFR designed a kit car around a Subaru WRX donor. It transformed the all-wheel-drive rally car into an ultralight, mid-engine, rear-wheel-drive sports car (Figure 4). After a year of work, the car is complete and street legal, ready for its next phase as an electronics test bed.

Jaguar-Landrover (JLR) and Samsung have joined forces to mold Tizen, Automotive Grade Linux, into a standard operating system for the automotive community at large. Tizen's goal is to be an operating system that is modular, flexible and app based, updatable on the fly and current enough to handle the extreme speed and security requirements of automobiles. Currently there is no standard, open or closed-source, in infotainment. Tizen is aiming to be the first.

Besides pouring resources into the open-source Tizen project, JLR is getting schools involved in the process. UCCS received two flyaway kits with JLR reference architecture that is “car ready”.

OmegaSensor was an option for a remote Bluetooth Low Energy (BLE) accelerometer. First designed as a senior design project at UCCS, OmegaSensor provides a BLE-agnostic sensor bus that can wirelessly pass data from anywhere on the car. This allows multiple 9-axis sensors, including acceleration, gyroscope and magnetometer, at specific points in the car to pass data over BLE 4.0 to a centralized control unit. The OmegaSensor team will continue to provide hardware and software reachback as the hardware is investigated.

Figure 4: FFR 818 kit car

Figure 3: Left Turn While Braking (axes: forward vs. lateral acceleration, showing a .5 G vector)

The Android OS has solidified its sensor API in the past several years, which has lent itself to being a fantastic testbed OS. When combined with Nexus 7 hardware, it satisfies all the requirements, including sensory input, computation power and display capability.

A previous application, Track-T, had been written to demo a Self Organizing Map capable of learning expanding acceleration limits. ATCSN will build on this previous project in hopes of validating that acceleration alone can be used to find grip limits.

3.2. Approach

ATCSN is built in a three-phase approach. Phase one includes evaluating possible development environments to host the traction application. As a minimum set, the environment must have access to an accelerometer, a powerful processor and a display function. This eliminates hardware solutions such as Arduino or Teensy development boards and narrows the options down to using a Tizen Linux platform or an Android tablet.

Phase two is data collection and validation that good acceleration data is received and captured. This requires a track day to calibrate the fusion algorithm and test several competing acceleration filters. Scenario capture via video is also fine-tuned during the calibration track event.

Phase three is testing the outbound limit algorithms to validate whether the traction limits themselves are being learned correctly and predictably. The final evaluation via video capture and application analysis is done on the second track day.

4. Development Environment

Tizen was considered the prime candidate to run ATCSN on. After a several-week investigation into Tizen and the Linux BlueZ stack, both proved difficult to integrate with outside programs. There were several key limitations in both hardware and software that caused a shift to an Android platform.

Tizen is being developed by Samsung and Intel as an open-source project to run as the next automotive OS. It is being used primarily for display function and not for development; applications are built on outside systems in the Tizen software development kit and imported in. To build native applications or change the environment requires a rebuild of the Linux kernel and a reflash to the hard drive. This is not a trivial task because Tizen uses a Samsung proprietary bootloader. As for building native applications on Tizen itself, while it does have most of the support needed (gcc, g++, Qt5 and Boost), it runs the Wayland window server, an alternative to X Windows, which has been the standard for decades. Tizen's Wayland setup does not have support for any text editors that work (even vi and nano crash), and a reinstall of X is infeasible without a kernel rebuild.

Tizen applications that are semi-open-source, built by Samsung/JLR but open to the public, work and produce beautiful display demos. These demos, however, do not constitute a fully working development system. The lack of support and documentation when compared to a distribution like Ubuntu, Fedora or even Slax is apparent, and using Tizen as a development platform causes more problems than it solves.

BlueZ 5 and Bluetooth Low Energy devices proved to be the second largest hurdle when investigating the development environment. BLE is a recently adopted standard (Bluetooth 4.0) and hardware is just entering the market. Older 3.0 antennas generally do not have the correct firmware or radio capability to use the 4.0 low energy protocol. Bluetooth 4.0 dongles do exist, but they are mainly Windows based and have specific firmware loads that do not work well with Linux and its BT stack. Pre-2012 Dell laptops did not have a BT 4.0 compatible antenna, a major concern because the development environment is hosted on one. Trying to develop on Tizen, which does have the correct antenna, proved far more tedious than expected, as explained above.

Utilizing the BlueZ 5 stack for Linux is a difficult task. BlueZ 4 comes standard on most Linux distributions (Ubuntu, RedHat, anything running Gnome), and it does not support BLE integration. BlueZ 5 does support BLE connectivity; however, it changes many of the BlueZ API calls and breaks several things on upgrade. The BlueZ 5 stack is generally considered a problematic module by the community. Fedora 21 does run BlueZ 5, but the hardware limitation remained.

Both the inability to develop on Tizen and the Bluetooth incompatibility caused a design change to use a Nexus 7 Android platform as a complete sensor and processor integrated unit. It has a 9-axis sensor, decent processing capability and a touch screen graphical interface that is easily manipulated.

5. Scope

This project has three distinct parts, each building on the previous. The first piece is sensor integration with Android's built-in 9-axis and GPS sensors. A traction control center then takes incoming data and builds the traction map. Finally, a visual display must be built to present the transformed data as human-readable graphics. Behind, and connected to, all three parts is a database scheme to support both capture and playback.


5.1. Sensor Integration

Two sensors are used that reside onboard the Android platform itself. The 9-axis sensor provides XYZ acceleration, XYZ rotation, and an XYZ compass heading. All 9-axis information is smoothed and translated to produce the best possible acceleration vector. The onboard GPS provides speed and current location. Both sensors are exposed to the Android native application that houses the Traction Control Center.

A fusion algorithm is used to take XYZ acceleration data and XYZ gyroscope rotational data and produce an always forward, down, left acceleration vector. This is important for dealing with body roll on the car, as the Z gravity vector can show up in X and Y.

Lastly, a filter is required to take the forward, down, left translated acceleration data and smooth it into usable data.

5.2. Traction Control Center

The traction control center (TCC) contains the actual processing algorithms for the ATCSN. It ingests stabilized acceleration data from the sensor manager and then uses an algorithm to form a coherent traction map. The TCC algorithm is unsupervised, capable of handling over 50 pieces of incoming data a second (20 ms polling time), and persists data across reboots. The output of the traction control center is a 2D polygon, an array of XY coordinates, representing the current traction map. This output is exposed to the front end display.

5.3. Display

The display for ATCSN handles both the gauge cluster user interface for the car and the graphical portion of the traction control center.

The electronic gauge cluster has a total acceleration readout and a dual X/Y acceleration graph (G & G meter). The traction map is displayed on one side with an indicator of current acceleration. The indicator has a persistent mode that can be toggled to display a history of acceleration.

5.4. Backend Database

The database requirement for ATCSN was realized before the first track day for data capture and was introduced after the original proposal. A database system is required to store and retrieve incoming/outgoing data. Snapshots of the database are used for data analysis on a PC device as well as for future playback capability on the Android device itself.

5.5. Playback requirement

Testing in the real environment, pushing a car to its traction limit, is not a simple or cheap thing to do safely. Playback became a new requirement to answer the difficulty of real testing. The data is saved in a format that allows playback on the Android device to directly reproduce what it was doing on the track in real time. Playback also includes a scanning function that allows the application to line up the Android playback with the GoPro footage. This is especially helpful when looking and listening for points of slip after the fact rather than during the lap itself.

Playback uses the same filters, displays and settings as the live application, but instead of using sensors and dumping data to the database, it runs sensor-free and pulls data from the database. This produces an exact replica, because the data is picked up in the same pipeline as the live version.
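One way to realize such a shared pipeline is to hide the data origin behind a common source interface, so the filters and displays never know whether samples come from the live sensors or from the database. The sketch below is hypothetical; the interface and class names are mine, not from the ATCSN source:

```java
// Hypothetical sketch of the live/playback split: both modes feed the
// same downstream pipeline and differ only in where samples originate.
interface SampleSource {
    float[] next();   // one XYZ acceleration sample, or null when exhausted
}

// Playback pulls previously recorded rows instead of polling sensors.
class PlaybackSource implements SampleSource {
    private final float[][] rows;   // rows previously dumped to the database
    private int i = 0;

    PlaybackSource(float[][] rows) { this.rows = rows; }

    public float[] next() {
        return i < rows.length ? rows[i++] : null;
    }
}
```

With this split, the fusion, filter and display code consumes a SampleSource and behaves identically in both modes.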

6. Data Capture

Processing power was the key watch item when moving from a Tizen x86 architecture to an Android ARM architecture. Testing and characterization were required to see how quickly acceleration data could be polled, processed and stored. This was done by changing the ms poll time, stress testing the application and running statistics on the resulting database.

XYZ acceleration floats were saved along with the milliseconds since epoch for each poll. The application was run for 1 minute for the trial testing and 10 minutes for the extended test. Once finished, the SQLite database was pulled to the PC to analyze. Time between events was used to identify the overall average and standard deviation. A close inspection also showed how many large pauses there were while the processing caught up.

Total Time | Poll ms (Thread) | Avg ms (Actual) | Standard Dev | # Long Pauses
1 Min      | 5                | 10.217          | 17           | 2
1 Min      | 8                | 10.64           | 16.14        | 4
1 Min      | 10               | 11.07           | 20.15        | 4
1 Min      | 20               | 19.99           | 16.14        | 1
10 Min     | 10               | 10.37           | 18.1         | medium
10 Min     | 12               | 12.01           | 14.2         | low


The polling limit is right around ~12 ms before poll intervals begin to elongate greatly. This is a combination of the SQLite storing function and the fusion/filtering/limit algorithms that all run with every point of data. Although polling can support, albeit not smoothly, speeds of up to 4 ms, storing information has a fairly set limit of 12 ms. While testing on the track, the polling time was moved to 15 ms, as it still produced very fluid graphs and provided additional processing time for more intensive filters (such as Z damp).
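The average, standard deviation and long-pause counts reported above can be derived from the stored timestamps along these lines. This is an illustrative sketch, not the analysis script actually used; the pause threshold is an assumed parameter:

```java
// Sketch of the timing analysis described above (illustrative): given
// millisecond timestamps pulled from the database, compute the mean and
// standard deviation of inter-sample gaps and count long pauses.
public class PollStats {
    /** Returns {mean, stddev, longPauseCount} for gaps between timestamps. */
    public static double[] analyze(long[] tsMs, double pauseThresholdMs) {
        int n = tsMs.length - 1;
        double[] gaps = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) {
            gaps[i] = tsMs[i + 1] - tsMs[i];
            sum += gaps[i];
        }
        double mean = sum / n;
        double var = 0;
        int pauses = 0;
        for (double g : gaps) {
            var += (g - mean) * (g - mean);
            if (g > pauseThresholdMs) pauses++;   // "long pause" while processing caught up
        }
        return new double[]{mean, Math.sqrt(var / n), pauses};
    }
}
```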

7. Data Fusion

7.1. Data Fusion Requirement

9- and 10-axis sensors are becoming the standard for hobby electronics, but getting the data to be useful can prove difficult. Each sensor (accelerometer, gyroscope and compass) is useful in individual applications if one is trying to find an acceleration, rotation or heading in its individual capacity, but additional post-processing is required to get acceleration in a static coordinate system where the hardware can move inside that space. This project requires a front, left, down acceleration vector regardless of the orientation of the device.

7.2. Algorithm

This front, left, down orientation is achieved by combining the accelerometer, gyroscope and compass data with several basic assumptions. Gravity produces a measurable acceleration vector of 9.81 m/s² on the -Z axis. The second assumption is that the device is “pointed” forward with the screen perpendicular to the front, allowing a passenger to read it. At rest, these two assumptions provide a forward and down direction. The compass can be used to find the orientation of north/east to determine if the device is flipped upside down. The north vector is then pointed to the front of the car, producing a front, left, down coordinate space.

With a correct starting vector, the gyroscope can be used to handle rotation of the device (from body roll or turning) to translate the original vector appropriately. Gyroscopes are both far faster at detecting change than the accelerometer and compass and are not influenced by outside forces such as magnets or lateral Gs. The key limitation of cheap gyroscopes is their inherent drift over time. The way to solve this is to slowly re-baseline the movements with the accelerometer and compass. The fusion algorithm used in ATCSN has a variable rate to re-baseline the gyro.

Android provides several calls as part of its SensorManager class that were used to build the fusion algorithm. Setup of the accelerometer, gyroscope and magnetometer is done by registering with the service:

mSensorManager.registerListener(sensor,
    mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
    SensorManager.SENSOR_DELAY_FASTEST);

mSensorManager.registerListener(sensor,
    mSensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE),
    SensorManager.SENSOR_DELAY_FASTEST);

mSensorManager.registerListener(sensor,
    mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
    SensorManager.SENSOR_DELAY_FASTEST);

From here, a timer is set up to poll the SensorManager every N ms, receive a new set of data and process it. Orientation is then calculated from both the acceleration/magnetometer data and the gyro data. While there is a simple Android call to calculate orientation from an acceleration vector, it is more convoluted to get an orientation vector from the gyroscope. The gyro vector must be calculated on a per-step basis. Fusion of both orientation vectors happens in this step as well.

previousOrientation = accelerationStabilizedOrientation();
coefficient = .995;

orientation onTimerTick(gyroData, accelData) {
    accelOrientation = getOrientationFromAccel(accelData);

    delta = getDeltaRotation(gyroData);
    gyroOrientation = previousOrientation + delta;

    fusedOrientation = (coefficient * gyroOrientation)
                     + ((1 - coefficient) * accelOrientation);

    previousOrientation = fusedOrientation;  // carry forward for the next tick
    return fusedOrientation;
}

The fusion logic is borrowed from a previous project, Table, that rotates a static 3D table display in virtual space that can be viewed from any angle. The coefficient provides the tradespace between using the accelerometer and magnetometer completely for orientation, or letting the gyro be used for rapid rotation changes.

7.3. Metrics

Hard metrics were difficult to test to with the data fusion algorithm. It is fairly easy to tell that the axes are skewed or the acceleration values are incorrect, but relatively difficult to know that the data is right. Repetition and theoretical limits were used to inform the decision to lock down the algorithm.

The goal of a forward, left, down oriented acceleration vector first requires good orientation. This was achieved by holding the device and shifting it linearly in each direction. The orientation was then changed, and the same movements were performed. This test proved the gyro/acceleration orientation was able to shift based on real world orientation when handheld.

Orientation during G loads was the next problem to tackle. In the case of ATCSN, Gs are felt for upward of 6 seconds during a long turn, causing the application to believe gravity has shifted to a different vector. This causes significant Z axis thrashing and spikes in the acceleration data. Dramatically slowing down the rate at which the gyro re-baselines produces far better results. Testing on a flat, smooth track showed the Z axis remaining straight up and down, as well as acceleration in the directions expected. Correlation with the playback and GoPro video was used to determine the validity.

Lastly, data verification was required to see how good the data was after orientation. The theoretical limit of the car should be ~1.0 G. The Android 9-axis sensor is relatively noisy when compared to an expensive dedicated one, so as long as the range was close, within 10%, it was deemed acceptable. Through tight turns the acceleration was reading 1.0-1.1 G, which looked correct.

8. Data Filtering

8.1. Data Filtering Requirement

Acceleration data is inherently noisy. Filters are required to take the noisy data, smooth it out and produce usable points. Several filters were looked at as part of the development and testing process.

8.2. Algorithm Exploration

Four filters were built, tested and promptly removed. A lowpass filter was considered, as it is commonly used on acceleration data to remove large spikes. The data received from the track reaches greater than 1 G. A lowpass filter capable of handling the data correctly either requires an upper threshold large enough that nothing is suppressed, or a lower one that smooths but dampens the upper end. Lowpass filters are especially useful for handling acceleration data that is integrated to produce velocity, but as the acceleration data is the final product here, they become far less useful. A highpass filter clips the lower values and does nothing to smooth the spikes in the data. A bandpass filter combines both previous filters and produces poor data. An averaging filter produces good, but sparse, results. A rolling average filter is a far more effective filter when implemented correctly.

Z axis noise was prevalent throughout testing, causing a Z dampening filter to be theorized. The noise was thought to originate from bumps, but was later realized to be a false baseline as the fusion algorithm learned a false orientation. The Z dampening filter relies on several key assumptions: clean X (lateral) and Y (forward) data is the end goal, which means Z data can be thrown out; Z should not change in a corner; and a change in Z is generally noise from a bump. Using these three pieces of information, a filter can be built that subtracts the amplitude of Z from the amplitude of the X and Y components, producing “bumpless” data. The filter is easier to understand with Java code:

public float[] processData(float[] accel) {
    boolean zPositive = accel[z] > 0;

    // Pull the X component toward zero by the Z amplitude.
    if (accel[x] > 0) {
        if (zPositive) {
            accel[x] = accel[x] - accel[z];
        } else {
            accel[x] = accel[x] + accel[z];
        }
    } else {
        if (zPositive) {
            accel[x] = accel[x] + accel[z];
        } else {
            accel[x] = accel[x] - accel[z];
        }
    }

    // Pull the Y component toward zero by the Z amplitude.
    if (accel[y] > 0) {
        if (zPositive) {
            accel[y] = accel[y] - accel[z];
        } else {
            accel[y] = accel[y] + accel[z];
        }
    } else {
        if (zPositive) {
            accel[y] = accel[y] + accel[z];
        } else {
            accel[y] = accel[y] - accel[z];
        }
    }

    output[x] = accel[x];
    output[y] = accel[y];
    output[z] = accel[z];
    return output;
}

This filter was tested prior to fixing the fusion algorithm, but did not produce the expected results. Once the fusion algorithm was fixed, the Z spikes no longer appeared and the entire premise for the filter was negated. In a future effort it would be interesting to see if the Z dampening filter is more effective on a bumpy road such as a pot-holed street.

Finally, a simple rolling average filter was built to take an average of the last n points. With processing requirements in mind, n was originally set to 4 points. After discussion with several DSP engineers and track enthusiasts, the number of points was upped to 30 to account for the time spent in a corner (several seconds) vs. the ms polling time (15 ms). Java code for the rolling filter is below:

public float[] processData(float[] accel) {
    if (currentInput == numInputs) {
        currentInput = 0;
    }

    xAverage -= x[currentInput];
    xAverage += accel[x];
    yAverage -= y[currentInput];
    yAverage += accel[y];
    zAverage -= z[currentInput];
    zAverage += accel[z];

    x[currentInput] = accel[x];
    y[currentInput] = accel[y];
    z[currentInput] = accel[z];

    currentInput++;

    float output[] = new float[3];
    output[0] = xAverage / numInputs;
    output[1] = yAverage / numInputs;
    output[2] = zAverage / numInputs;

    return output;
}

Once both the filter and fusion pieces were correctly implemented, recognizable data appeared on the screen with fantastic clarity and smoothness.

8.3. Metrics

The filtering algorithm metrics were based purely on the smoothness of the acceleration output while still attaining the maximum acceleration value. The key is to clip noise but not data. This was judged by looking at the raw value stream vs. the filtered value stream. Each filter was checked by using the playback capability, loading a new filter, and assessing by eye its ability to be lossless and smooth.

9. Edge Detection

Given the acceleration dataset available, the Traction Control Center problem can be boiled down to a convex hull problem: finding the smallest polygon that wraps all points in an envelope. There are several algorithms that do this already, but they are not optimized for a stream of incoming data. This problem becomes bounded as a progressive convex hull problem where additional points are added as time continues.

Quick, iterative hull building is a very important aspect of the algorithms being built. The hull must be recalculated at each step and the timestep is very small (15 ms). This is running on a mobile device with limited processing capability, so a full recompute, like the gift wrap algorithm or the quick-hull algorithm, would be wasteful.

The first approach uses a Self Organizing Map. The second borrowed inspiration from sorted lists and the Graham scan algorithm to produce a self-coined Sorted Wrap.

9.1. Max Self Organizing Map

Ingesting circular or continuous input is generally a difficult problem for learning algorithms, especially neural networks. The algorithm had to work on the XY plane, handle circular input, and provide a flexible shell. Several options were considered, with a Self Organizing Map (SOM) proving the best starting place.

The initial map begins as a circle of 12 centroids centered around (0,0). As XY acceleration coordinates are fed in, the map is trained to move the closest centroids in the border toward the acceleration input. This concept was borrowed from using a SOM to answer the traveling salesman problem [4][5]. This paradigm uses the SOM as an outward clustering algorithm.

Because the limit of grip is the maximum force a tire can handle before it begins sliding, the G force reported by an accelerometer at the edge of grip is the maximum G force reportable, slipping or not. Using this knowledge, a max SOM can be built that only trains on acceleration data that is “outside” the currently understood limit of grip.

Figure 5: XY accel point (red triangle) inside the G polygon (left) and outside the G polygon (right)

Selective training is done by using a modified Point in Polygon equation. Resting acceleration, the center of the polygon, is always known: (0,0). A line is built from (0,0) to the acceleration point (X_a, Y_a). It is then checked for an intersection with each line between every centroid n (X_n, Y_n) and centroid n+1 (X_{n+1}, Y_{n+1}). If there is an odd number of intersections, the point (X_a, Y_a) is outside of the polygon. The pseudocode and equations used to find the intersection are as follows:

for all centroids {
    Centroid c = current centroid;   // (X_n, Y_n)
    Centroid d = next centroid;      // (X_{n+1}, Y_{n+1})

    float g = -(X_n*Y_{n+1} - X_{n+1}*Y_n)
              / (X_a*(Y_n - Y_{n+1}) - Y_a*(X_n - X_{n+1}));
    float k = (X_a*Y_n - X_n*Y_a)
              / (X_a*(Y_n - Y_{n+1}) - Y_a*(X_n - X_{n+1}));

    if (0 <= g && g <= 1 && 0 <= k && k <= 1) {
        train SOM on (X_a, Y_a);
    }
}

Floats g and k represent the intersection parameters of the two line segments. If both g and k are within 0 to 1, the two line segments cross within their own lengths, and there exists an intersection.
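The same test can be written as runnable Java. This is an illustrative translation of the equations above; the class and method names, and the handling of the degenerate parallel (zero-denominator) case, are additions rather than part of the ATCSN source:

```java
// Illustrative translation of the intersection equations above: returns
// true when the segment (0,0)->(xa,ya) crosses the polygon edge
// (xn,yn)->(xn1,yn1), i.e. both parameters g and k fall within [0,1].
public class SegmentTest {
    public static boolean crossesEdge(float xa, float ya,
                                      float xn, float yn,
                                      float xn1, float yn1) {
        float denom = xa * (yn - yn1) - ya * (xn - xn1);
        if (denom == 0) return false;   // parallel segments: no single crossing
        float g = -(xn * yn1 - xn1 * yn) / denom;
        float k = (xa * yn - xn * ya) / denom;
        return 0 <= g && g <= 1 && 0 <= k && k <= 1;
    }
}
```

For example, the ray from (0,0) to (2,0) crosses the edge from (1,-1) to (1,1) at g = k = 0.5, flagging the point as outside that edge.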

As this max SOM only trains on points outside of the current polygon, the closest centroids are pulled outward, and as limits are found, the polygon expands to capture the data.

The Self Organizing Map is, at its heart, a clustering algorithm that only trains on data outside of its current polygon. The points being trained on must be uniformly spaced around the circle to produce a good result for spreading out the polygon. Testing during the first track test showed the algorithm getting pinned to one side during a long corner. The SOM implementation is very interesting for its ability to have a variable polygon, but in the end it is too volatile to be a viable solution.

9.2. Sorted Wrap

The second approach pulled inspiration from several published convex hull algorithms as well as insights from the data itself. Many “incremental convex hull” algorithms have been built; however, the majority use the stepped incremental approach to attain a hull given a set of static points. Algorithms such as Jarvis March [15] and Graham Scan [14] are incremental, but run into problems if the data is not incremented in order. ATCSN requires an incremental approach that is able to ingest out-of-order incremental data while keeping the processing as lightweight as possible.

The goal of Sorted Wrap is to produce a fast algorithm for adding new points and expanding the hull as time progresses. This approach has less hull flexibility than the SOM implementation and can only expand, not contract. Filtering before the Traction Control Center is crucial, because this algorithm takes acceleration points as absolute truth.

Data from the ATCSN is centered around (0,0), and that center point will always be inside the hull. Sorted Wrap relies on this assumption, although as a generalized algorithm, Sorted Wrap will work as long as a “center point” is used that is inside the hull and not touching an edge.

Each point that is added has both polar and Cartesian coordinates calculated and stored. The list of points is sorted by radian angle. The center point for all angle measurements is (0,0), which makes converting from (x,y) acceleration a simple atan2(y, x) (i.e., tan⁻¹) call.
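The dual representation can be sketched as below. This is illustrative only; the class and field names are assumptions, not the project's source.

```java
// A point stored with both Cartesian and polar forms, with the angle
// measured from the fixed (0,0) center via atan2(y, x).
public class PolarPoint {
    final double x, y;       // Cartesian acceleration components
    final double angle;      // radians in (-pi, pi], from atan2(y, x)
    final double radius;     // distance from the (0,0) center

    PolarPoint(double x, double y) {
        this.x = x;
        this.y = y;
        this.angle = Math.atan2(y, x);
        this.radius = Math.hypot(x, y);
    }

    public static void main(String[] args) {
        PolarPoint p = new PolarPoint(0, 1);   // straight "forward" G
        System.out.println(p.angle);           // pi/2 radians
        System.out.println(p.radius);          // 1.0
    }
}
```

Keeping both forms avoids recomputing the angle on every sorted-list comparison.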

Two helper functions were built to facilitate the traversal of the sorted list:

nextCoordinateClockwise(Coordinate a) {
    temp = nextHighestByAngle(a);
    if (temp == null)
        return firstCoordinateInList();
    else
        return temp;
}

nextCoordinateCounterClockwise(Coordinate a) {
    temp = nextLowestByAngle(a);
    if (temp == null)
        return lastCoordinateInList();
    else
        return temp;
}

These functions allow a circular traversal of the list, finding the next or previous point in line without the checked coordinate needing to be in the sorted list to begin with.

With the previous functions in mind, step one is to determine whether the new point is inside or outside the current hull. This is done by forming a line between the nextCoordinateClockwise and nextCoordinateCounterClockwise results for the point in question. That chord is checked for an intersection with the line from the center point (0,0) to the point in question. If an intersection occurs, the point is outside of the current convex hull and should be incorporated.
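The inside/outside decision can be sketched as below, assuming the two angular neighbors have already been looked up. This is an illustrative reimplementation under that assumption, not the project's code.

```java
// A candidate point p is outside the current hull if the segment from
// the center (0,0) to p crosses the chord between p's two angular
// neighbors (n1, n2) on the hull.
public class OutsideTest {
    // Parametric segment-intersection test (segments a->b and c->d).
    static boolean cross(double ax, double ay, double bx, double by,
                         double cx, double cy, double dx, double dy) {
        double d = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx);
        if (d == 0) return false;
        double g = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / d;
        double k = ((cx - ax) * (by - ay) - (cy - ay) * (bx - ax)) / d;
        return g >= 0 && g <= 1 && k >= 0 && k <= 1;
    }

    static boolean outside(double px, double py,
                           double n1x, double n1y, double n2x, double n2y) {
        // Segment (0,0)->p vs. the chord n1->n2.
        return cross(0, 0, px, py, n1x, n1y, n2x, n2y);
    }

    public static void main(String[] args) {
        // Hull neighbors at (1,0) and (0,1): (1,1) lies beyond the chord...
        System.out.println(outside(1, 1, 1, 0, 0, 1));     // true
        // ...while (0.2,0.2) stays inside it.
        System.out.println(outside(0.2, 0.2, 1, 0, 0, 1)); // false
    }
}
```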

Once found to be outside the hull, the point is "added" to the hull's exterior. The hull's exterior is a set of points sorted by angle, making insertion an O(log n) operation. The hull at this point may or may not maintain its convexity, requiring a final step that walks through nearby points and determines whether they are still needed in the hull.

Convexity is maintained in a similar fashion to Graham Scan; however, the traversal runs in both directions, forward and backward, starting at the introduced point rather than making one sweeping loop. An intersection is calculated between the line from the introduced point to the second adjacent point (Pn+2) and the line from the adjacent point (Pn+1) to (0,0). If the lines do not intersect, the adjacent point is removed from the hull and the process is repeated. If the lines do intersect, the adjacent point is still part of the hull. A pseudocode representation is below. Each n+x point is found using nextCoordinateClockwise and nextCoordinateCounterClockwise.

//Clockwise check for unneeded points
Let point n be the new node
pointInQuestion = n+1
pointPastQuestion = n+2
while (line from n to pointPastQuestion does not
       intersect line from (0,0) to pointInQuestion) {
    delete pointInQuestion;
    pointInQuestion = new n+1;
    pointPastQuestion = new n+2;
}

//Counterclockwise check for unneeded points
Let point n be the new node
pointInQuestion = n-1
pointPastQuestion = n-2
while (line from n to pointPastQuestion does not
       intersect line from (0,0) to pointInQuestion) {
    delete pointInQuestion;
    pointInQuestion = new n-1;
    pointPastQuestion = new n-2;
}

Once the unneeded points have been removed, the hull is the minimum hull for the set of points. This cleanup is also a worst case O(n) operation. In total, determining whether the point is outside the hull, inserting it into the list and re-factoring the hull is O(log n + n). The average case is expected to be far smaller. 2D convex hull algorithms have a worst case limit of O(n log n); Sorted Wrap performs within those bounds and, in practice, far better than many of the tested convex hull algorithms [13].

The focus on real time performance allows an Android device to add a point in sub-1 ms timeframes. Sorted Wrap, however, is specifically tailored to incrementing data and absorbing new points one at a time. It does not handle n new points at a time gracefully, such as starting with n points instead of 0, and in that case complexity grows to O(n²).

11. Final Design

11.1. Hardware

Android Platform: Nexus 7
– Android 5.01 Lollipop
– ARM 1.5 GHz quad-core processor
– 2 GB memory, 32 GB flash
– 7 inch touchscreen

11.2. Development Environment

Android Development: Eclipse
– Written in Java
– Utilizes Android 5.01 libraries

Android Display:
– Written in Java and OpenGL ES 2.0
– Utilizes home-built graphic plotting library

Android Database:
– SQLite native implementation

11.3. Overall Design

ATCSN is a native Android application built on top of Android 5.01 Lollipop. The application is designed to be a cheap, open source replacement for the traction capture systems currently used by racecar drivers. With additional work it could prove useful to everyday drivers, alerting them to how current conditions impact their driving.

ATCSN currently has two basic modes, live and playback. A single OpenGL ES display supports both modes with two different data pipes. A third mode, outside the scope of this project, will integrate a map display with the data produced to provide direct feedback on each curve taken.

The application itself has a straightforward data pipeline. In live mode, the 9-axis sensor is first polled for raw data. Those values are then fused and stored in the SQLite database. From this point the pipeline is the same for both live and playback, as the playback "data" is pulled from the SQLite database. The data is then filtered, sent to the Traction Control Center algorithm and displayed with OpenGL ES.

11.4. Sensor Integration

The 9-axis sensor is managed through the SensorManager API that Android provides. It supplies additional math for manipulating all 9 inputs into manipulable matrices. From those matrices, iterative math can be used to produce a true xyz acceleration map as explained in section 6.1.

Section 6.2 lays out several filtering algorithms that were tested with the incoming fused data. A rolling average filter over 30 points of incoming data was chosen for the final implementation based on track validation.
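A rolling average of the kind described can be sketched as below. This is a minimal illustration, not the project's source; class and method names are assumptions.

```java
import java.util.ArrayDeque;

// Minimal sketch of an N-point rolling average filter applied to each
// incoming fused acceleration sample (the project uses N = 30).
public class RollingAverage {
    private final ArrayDeque<Double> window = new ArrayDeque<>();
    private final int size;
    private double sum = 0;

    RollingAverage(int size) { this.size = size; }

    // Add one sample and return the current windowed mean.
    double filter(double sample) {
        window.addLast(sample);
        sum += sample;
        if (window.size() > size) sum -= window.removeFirst();
        return sum / window.size();
    }

    public static void main(String[] args) {
        RollingAverage avg = new RollingAverage(30);
        double last = 0;
        for (int i = 0; i < 60; i++) last = avg.filter(1.0); // constant signal
        System.out.println(last);                            // 1.0
    }
}
```

Maintaining a running sum keeps each update O(1) rather than re-summing 30 samples every 15 ms poll.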

Additional sensors and data pulls are implemented to facilitate future work. The GPS is available through the Android GPSManager API, providing a clean, simple way to access fine location services and also break out speed in miles per hour. Speed may play a role in the theoretical limit and must be taken into account.

OpenWeatherAPI is a website that collects and disseminates weather data in JSON format based on the requested latitude/longitude. Using the GPS location and an Internet connection, if available, weather information is pulled down and stored for data correlation.

11.5. Traction Control Center

The Traction Control Center portion of the application specifically aims to build the traction circle based on the fused and filtered acceleration data. TCC is effectively trying to solve an iterative convex hull problem with a very quick time step (15 ms). Sorted Wrap, discussed in section 10.2, is used to build the traction map.

Several components of the TCC Sorted Wrap were implemented with Java-specific architecture. A Coordinate class was built specifically to handle an xy coordinate set, implementing a comparable function based on the angle from (0,0). This facilitates the sorted list of hull coordinates. The sorted list utilizes TreeSet, a Java class that is both sorted and navigable. Using TreeSet limits insert and get functions to O(log n) and keeps the interface simple.

11.6. Display

There are two screens currently part of the ATCSN application. The startup screen is built in Android's native application builder to provide a setup interface for the live or playback function. Included is the ability to choose a specific saved file, scan through it to choose a start time, or numerically set the start time in milliseconds.

The gauge cluster display is built on OpenGL ES 2.0 and is purpose built to be a front end for both the Traction Control Center traction map output and the raw sensor data. Data is pulled directly from both the sensor managers and the Traction Control Center.

Figure 6 is a screen capture of the live application. The gauge cluster screen includes a total acceleration graph in the top left (light blue); X (red), Y (green), and Z (yellow) acceleration graphs in the bottom left; the Traction Control Center graph in the right center; and a traction meter based on current total acceleration and the current traction bar on the far right.

11.7. Database

ATCSN uses the Android native SQLite backend as its database store. This allows logs to be saved, pulled and analyzed after each testing event. The database has three tables (weather, hull and raw data) used to recreate a driving session.

The most important table for testing and playback is the raw data. A millisecond timestamp along with fused X, Y and Z coordinates is saved at every poll (15 ms). The timestamp is crucial to the playback functionality for starting at certain times and will eventually allow a bounding function for looking at curves one at a time.

Currently, on every pause of the application (back button or interruption), the database is time tagged and saved to a location on the SD card for later use. From there it can be pulled to a PC for analysis. It can also be "reloaded" through the start screen, where it is copied from the SD card into the current database pointer and then played back.
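The save-on-pause step amounts to copying the database file under a time-tagged name. The sketch below illustrates the idea with plain java.nio file operations; the paths and naming scheme are assumptions, not the project's code.

```java
import java.io.IOException;
import java.nio.file.*;

// Illustrative sketch of archiving the live SQLite database file to a
// time-tagged copy on storage for later playback or PC analysis.
public class DbArchiver {
    // Pure helper: build the time-tagged archive file name (assumed scheme).
    static String archiveName(long timestampMs) {
        return "capture-" + timestampMs + ".db";
    }

    // Copy the live database into archiveDir under a time-tagged name.
    static Path archive(Path liveDb, Path archiveDir, long timestampMs)
            throws IOException {
        Files.createDirectories(archiveDir);
        Path target = archiveDir.resolve(archiveName(timestampMs));
        return Files.copy(liveDb, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path live = Files.createTempFile("live", ".db");
        Files.write(live, new byte[] {1, 2, 3});
        Path saved = archive(live, Files.createTempDirectory("archive"), 1000L);
        System.out.println(Files.exists(saved)); // true
    }
}
```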

12. Testing Procedures

12.1. Testing Environment

Testing the ATCSN requires the Factory Five 818, a mounting system, an open track, a GoPro and a driver willing to push the car to its limits. Pueblo Motorsports Park was the track used for the first round of testing, on its 2.2 mile, 10 turn road course. The second round of testing was at Pikes Peak International Raceway on an infield autocross course. Both venues provide an open and safe track to get tires to their limit, and sometimes past it.

Most of the testing was done with two people: a driver and a navigator. The navigator was in charge of monitoring the application and validating whether the data was good. A GoPro was set up to capture the track and the Android device; however, with the sun, the Android device could not be seen on video. Glare was the main driving factor for requiring playback.

12.2. Data Capture

The test rhythm was generally two laps at a time with a break for engine cooldown, data analysis and code changes. The cycle generally ran 5 minutes on the track, 15-20 minutes off. This generated 1-3 GB videos and 1-3 MB SQLite database tables.

Figure 6: ATCSN Display

13. Results

13.1. Test one

Test one was conducted at Pueblo Motorsports Park (Figure 7) on March 28, 2015. Prior to the test, the basic functionality of the application was implemented and the database/timing characterization had been done. Several filters had been built and were easily swapped at the track.

Several large takeaways came from the track day. As explained earlier, correcting the fusion rebaseline rate and identifying the correct filter were very big steps towards getting good-looking data. It took the first 6 hours of testing to reach the final state.

Timing is everything. Linking the GoPro start with the Android start makes playback much easier in the future. Along with good timing is a requirement of tagging good data once it is gathered. Because the GoPro's and Android's clocks were out of sync, it was impossible to correlate a GoPro video with a playback sequence.

Max SOM did not effectively capture the acceleration portfolio. Once it is pulled in a single direction for an extended period of time, the entire SOM collapses toward one side, converges and cannot recover. This forced a look at a different Traction Control Center algorithm.

Overall, this test gleaned several important pieces of information and set up the success of test two.

13.2. Test two

Test two was at Pikes Peak International Raceway on April 5, 2015. This test was conducted during a Time Attack event, lowering the number of runs possible but providing a very handling-intensive course. Nine runs were done with two drivers over the course of the day.

Using the information gleaned from test one, raw data was collected after fusion (incorporation of the gyroscope) and prior to the rolling average filter. This allowed filters and the Traction Control Center algorithm to be changed and tweaked post playback to find the best results. Careful attention was paid to lining up the timing of the GoPro and the application start using timestamps and an obvious hand signal.

It appears Sorted Wrap produces a very good envelope and, once established, a good representation of how close the car is to actually slipping. A side by side comparison of GoPro video and the application screencast (Figure 8) shows a great representation of how close the car was in the slalom section of the course.

The learning function of Sorted Wrap works well, although it is mono-directional and can only grow outward. With good filtering in front of the wrapping algorithm, erroneous data gets stripped out; however, this application does not address changing conditions such as dry vs. wet.

Although the traction map is correctly learned, skid detection is a far harder problem to solve. A skid was captured by both the GoPro and the Android device, giving an accurate representation of what one looks like in data (Figure 9). It was captured going through the slalom section of the course, with the entire car spinning 360 degrees. The plot itself, except for being less smooth, looks very similar to the normal slalom plot (Figure 10) from just 3 seconds before the spinout.

Figure 7: Track Test 1

Figure 8: Track Test 2

Figure 9: Loss of Traction

Acceleration data is unable to give instantaneous feedback on whether the car is losing traction, because the peak of traction looks very similar to a sharp turn in the other direction. It is, however, able to indicate how close the car is to the limit leading up to the skid, based on the traction map it has learned.
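One simple way to express "how close to the limit" is the ratio of the current acceleration magnitude to the learned hull radius at the same heading. This is a hypothetical formula for illustration, not necessarily the exact calculation behind the traction meter:

```java
// Hypothetical closeness metric: current |acceleration| divided by the
// learned traction limit at the same heading. A value of 1.0 means the
// car is at the learned edge of traction.
public class TractionMeter {
    static double closeness(double ax, double ay, double limitRadius) {
        return Math.sqrt(ax * ax + ay * ay) / limitRadius;
    }

    public static void main(String[] args) {
        // 5 units of total acceleration against a learned limit of 10:
        System.out.println(closeness(3, 4, 10)); // 0.5
    }
}
```

In practice limitRadius would come from interpolating the hull chord at the current atan2(y, x) angle.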

14. Evaluation Criteria

This project breaks metrics into three categories: data handling, Traction Control Center capability and user interface.

a) Data Handling
– Acceleration update < 100 ms apart
– Success: 15 ms polling time

b) Traction Control Center Capability
– Unsupervised network
– Success: unsupervised algorithm (not a neural network)
– Learns traction map
– Success: able to build an accurate traction map
– 80% accurate at flagging slippage
– Unsuccessful: does not flag a skid, but does tell how close to the edge the car is

c) User Interface
– User interface on Android
– Success: usable Android interface
– Traction display
– Success: useful G&G meter with learned traction circle

15. Conclusion

This project's main goal is to investigate whether an algorithm can be built to learn a traction map and capture how close to the limit of traction a car is. ATCSN is able to learn a traction map over time and use it to estimate how close to the limit the car currently is. It is unable to tell when the car is actually slipping because of data limitations on what a slip looks like.

The fusion, filtering and edge detection algorithms all proved to work in the race track environment. The Android device was able to run through each, including storing the data for future use, in 15 ms time steps. Sorted Wrap was able to achieve the theoretical limit of convex hull algorithms, O(n log n), with an extremely small real-time processing requirement.

Future work using the database with several select queries could add a persistence function across multiple environments. Integrating the GPS speed as part of the data captured will also play a role in future development. Finally, displaying the known data on a map overlay will allow a quick overview function for racing events.

Overall, this was an extremely interesting project integrating physics, mechanics and computers to test and find a solution to acceleration based traction. It certainly pushed the car and the application to their limits.

16. References

[1] de Wit, Carlos Canudas, and Panagiotis Tsiotras. "Dynamic tire friction models for vehicle traction control." (1999).

[2] Muller, Steffen, Michael Uchanski, and Karl Hedrick. "Estimation of the maximum tire-road friction coefficient." Journal of Dynamic Systems, Measurement, and Control 125.4 (2003): 607-617.

[3] Liu, Chia-Shang, and Huei Peng. "Road friction coefficient estimation for vehicle path prediction." Vehicle System Dynamics 25.S1 (1996): 413-425.

[4] Angeniol, Bernard, Gaël de La Croix Vaubois, and Jean-Yves Le Texier. "Self-organizing feature maps and the travelling salesman problem." Neural Networks 1.4 (1988): 289-293.

[5] Jin, Hui-Dong, et al. "An efficient self-organizing map designed by genetic algorithms for the traveling salesman problem." IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 33.6 (2003): 877-888.

[6] Matuško, Jadranko, Ivan Petrović, and Nedjeljko Perić. "Neural network based tire/road friction force estimation." Engineering Applications of Artificial Intelligence 21.3 (2008): 442-456.

[7] Borrelli, Francesco, et al. "A hybrid approach to traction control." Hybrid Systems: Computation and Control. Springer Berlin Heidelberg, 2001. 162-174.

[8] Richard D and Mathew D (2014, September 14). Tires and Grip. Available: http://www.mrfizzix.com/autoracing/tiresgrip.htm

[9] Texas Instruments. (2013). Automotive Infotainment Guide. [technical document]. Available: http://www.ti.com/lit/sg/slyb139a/slyb139a.pdf

Figure 10: Slalom Section


[10] Psaltis, Demetri, Athanasios Sideris, and Alan Yamamura. "A multilayered neural network controller." IEEE Control Systems Magazine 8.2 (1988): 17-21.

[11] Daugman, John G. "Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression." IEEE Transactions on Acoustics, Speech and Signal Processing 36.7 (1988): 1169-1179.

[12] Barber, C. Bradford, David P. Dobkin, and Hannu Huhdanpaa. "The quickhull algorithm for convex hulls." ACM Transactions on Mathematical Software (TOMS) 22.4 (1996): 469-483.

[13] Allison, Donald C. S., and M. T. Noga. "Some performance tests of convex hull algorithms." BIT Numerical Mathematics 24.1 (1984): 2-13.

[14] Graham, Ronald L. "An efficient algorithm for determining the convex hull of a finite planar set." Information Processing Letters 1.4 (1972): 132-133.

[15] Jarvis, Ray A. "On the identification of the convex hull of a finite set of points in the plane." Information Processing Letters 2.1 (1973): 18-21.


Appendix A. Installation and Configuration of ATCSN

Requirements:

– Android device with Lollipop 5.01 or higher

– 9-axis sensor

– GPS

– 3G/4G or WiFi connectivity for weather (optional)

Environment:

– Eclipse or Android Studio with Android SDK

– Android 5.01 packages installed

ATCSN can be built from source, or the APK can be installed directly. Building from source produces an APK that must then be installed.

Source available on request from UCCS. Internal link:

http://walrus.uccs.edu/~gsc/pub/master/cheistan/src/InControl.tar

Download the source to a workspace of choice. Untar the package with:

tar -xvf InControl.tar

Open Eclipse or Android Studio pointing to the new InControl folder.

Clean and build to refresh the generated R file. This produces an APK.

Installing an APK via Eclipse:

– Connect Android device to PC

– “Run” android project

– Point to connected Android device

– Application will automatically load and run

Installing an APK manually:

– Connect Android device to PC

– Transfer .APK to SD card of device

– On device, open .APK in a file manager

– Install application

– Run Application

ATCSN is already configured on startup. Once a live capture has been performed, databases will begin showing up in the drop-down menu on the startup page. The application is also able to gracefully handle a lack of Internet connectivity.


Appendix B. Demonstration

Figure 11: Initial Startup Screen

Figure 12: To start in live mode, choose Live Data and then click Start App


Figure 13: ATCSN running in live capture mode. The gauge cluster screen includes a total acceleration graph in the top left (light blue); X (red), Y (green), and Z (yellow) acceleration graphs in the bottom left; the Traction Control Center graph in the right center; and a traction meter based on current total acceleration and the current traction bar on the far right.


Figure 14: To play back a previous capture, click Playback 1x and choose the database run

Figure 16: The database will load and the slider will adjust the start time. In the bottom right, the true time is on top and an adjustable field of milliseconds since the beginning is on the bottom.


Figure 17: ATCSN will then play back the exact recording. No outside inputs are used.