
[IEEE 2013 IEEE Third International Conference on Consumer Electronics - Berlin (ICCE-Berlin), IFA Fairground, Berlin, Germany, 9-11 September 2013]

A Comparison of Sensor Fusion Methods for Localization on Mobile Phones

Martin Schüssel
Institute of Communications Engineering

Ulm University

89081 Ulm, Germany

Email: [email protected]

Florian Pregizer
Institute of Communications Engineering

Ulm University

89081 Ulm, Germany

Email: [email protected]

Abstract—In this paper we propose several methods to combine heterogeneous location technologies. First, we evaluated dead reckoning and its accuracy on commercial off-the-shelf (COTS) mobile phones. We then combined this data with absolute positioning techniques to improve the accuracy and availability of the location system. Several sensor fusion frameworks were validated in a simulated environment. All frameworks show improved accuracy and availability, but the results indicate that choosing the right framework depends largely on the information that the location systems supply to the fusion framework.

I. INTRODUCTION

Localization of a user is important for ubiquitous computing and ambient intelligence applications. Outside of buildings, GPS provides a reasonably accurate position measurement. In indoor environments, where GPS is usually not available, many different systems have been proposed to determine the location of a user. While some positioning techniques only require widely available infrastructure like Wi-Fi [1,2,3], others use dedicated hardware to achieve better accuracy or availability [4,5]. In addition to the position measurements provided by these systems, modern mobile phones are equipped with several different sensors. These can be used to provide additional information about the location of the user. With this knowledge the accuracy of the positioning can be enhanced, or an estimate of the position can be provided in the case of failure or absence of absolute positioning systems.
Section II explains the inertial navigation on a COTS mobile phone, which makes use of these sensors. Section III discusses the filter methods we used for the sensor fusion. Section IV introduces the fusion frameworks, which are extensions of the filters. Section V shows the evaluation of the proposed methods.

II. INERTIAL NAVIGATION

Modern smartphones are equipped with a variety of sensors, among them accelerometers, magnetometers, gyroscopes and barometers. While it is in theory possible to achieve dead reckoning by integrating the measurements of the accelerometer, which have been rotated from the sensor coordinate system into an earth reference frame, we found this method to produce unreliable results.
To achieve better inertial navigation we implemented a step detection algorithm similar to the one described in [6] on a COTS phone. In order to detect steps we apply a low pass filter to the vertical acceleration in the earth frame. If a peak lies above a certain threshold and enough time has passed since the previously detected peak, we count that peak as a step. Figure 1 shows the processed acceleration measured for three different smartphone placements.
The direction of movement also has to be known to accomplish dead reckoning. The authors of [6] used an inertial measurement unit (IMU) and were able to produce good results, even with the sensor placed in the trouser pocket during walking. We were not able to reproduce these results with the sensor measurements from a mobile phone, most likely because phones use worse MEMS sensors and signal processing than a dedicated IMU. Using a COTS phone, we were able to achieve dead reckoning as long as the orientation of the phone with respect to the direction of movement is known.

Fig. 1. Acceleration in vertical direction after low pass filtering. The phone was placed in the trouser pocket for the green line, in the hand for the yellow line, and in the jacket pocket for the red line.
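As a sketch of the peak-based step detector described above, the following snippet low-pass filters a vertical acceleration signal and counts a step whenever a peak exceeds a threshold and enough time has passed since the previous one. The moving-average filter, the 1.5 m/s² threshold and the 0.3 s minimum inter-step interval are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

def detect_steps(accel_z, fs, threshold=1.5, min_interval=0.3):
    """Return step times (s) from vertical acceleration (gravity removed).

    A simple moving average stands in for the paper's low pass filter;
    threshold and min_interval are illustrative values.
    """
    # Low pass: moving average over roughly 0.1 s
    win = max(1, int(0.1 * fs))
    smoothed = np.convolve(accel_z, np.ones(win) / win, mode="same")

    steps = []
    last_step_t = -np.inf
    for i in range(1, len(smoothed) - 1):
        t = i / fs
        is_peak = smoothed[i] > smoothed[i - 1] and smoothed[i] >= smoothed[i + 1]
        # Count a peak as a step only if it is high enough and not too
        # close to the previously detected step
        if is_peak and smoothed[i] > threshold and t - last_step_t >= min_interval:
            steps.append(t)
            last_step_t = t
    return steps
```

The refractory interval suppresses the double peaks that a single step can produce after filtering, which is the same role the "enough time has passed" condition plays above.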

III. SENSOR FUSION

There exists a large variety of location systems: GPS, Wi-Fi-, ultrasound- or infrared-based methods, systems that work with image recognition, and others. Sometimes additional position information is available, e.g. when a QR code or an NFC tag is scanned. In indoor environments the movement of the user may also be restricted by walls.
This large diversity of location systems, with different accuracies and availabilities, raises the question of how best to combine their data. We propose several sensor fusion frameworks that can be used to solve this problem. They are based on two well-known filtering algorithms, the Kalman filter and the particle filter. All of the proposed fusion mechanisms are able to combine the data from absolute positioning sources, with varying accuracies, with the information determined by the algorithm described in Section II. The simplified overall architecture of our fusion framework is depicted in Fig. 2.

Fig. 2. Block diagram of the evaluated fusion frameworks. As an example, four different positioning techniques are pictured (GPS, Wi-Fi-based, scanning QR tags, NFC).

978-1-4799-1412-8/13/$31.00 ©2013 IEEE
2013 IEEE Third International Conference on Consumer Electronics - Berlin (ICCE-Berlin)

At the core of the framework lies a method to fuse the information. The methods examined in this paper are a Kalman filter, an extended Kalman filter, and a particle filter. A short introduction to the three methods is given here.

A. Kalman Filter

The Kalman filter is a recursive filter that estimates the state of a dynamic system from a series of noisy measurements. It assumes that the underlying probability distributions are Gaussian. In each iteration the filter first predicts the state x_k with the error covariance P_k at time step k; this is called the time update or prediction. Then a measurement update is performed, which adjusts the estimate with an actual measurement taken at that time. The equations for the time update are:

\hat{x}_k^- = A_k \hat{x}_{k-1} \qquad (1)

P_k^- = A_k P_{k-1} A_k^T + Q_k \qquad (2)

Here x is the state of the system, A is the state transition matrix, P is the error covariance and Q is the process noise covariance. The hat over the state vector is used to distinguish between the true and the estimated state. The superscript minus marks a variable as a priori, i.e. before a measurement was taken into account. Every time a measurement is available, the estimate is adjusted according to the following equations:

K_k = P_k^- H^T \left( H P_k^- H^T + R \right)^{-1} \qquad (3)

\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H \hat{x}_k^- \right) \qquad (4)

P_k = \left( I - K_k H \right) P_k^- \qquad (5)

The Kalman gain K is calculated first. It depends on the error covariance P, the measurement model H and the observation noise covariance R. Then the state estimate is updated with the actual measurement z_k, which leads to the a posteriori state estimate. The last step is to obtain the a posteriori error covariance.

In the system model that we used for the Kalman filter, the direction and step detection are used to calculate the velocity in x and y direction. The velocity is subject to noise. This model leads to the state vector

x = \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} \qquad (6)

where p_x and p_y are the x and y positions and v_x, v_y are the corresponding velocities. The state transition from the past x_{k-1} to the present becomes:

x_k = A \cdot x_{k-1} + w_{k-1} =
\begin{bmatrix}
1 & 0 & \Delta t & 0 \\
0 & 1 & 0 & \Delta t \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot x_{k-1} + w_{k-1} \qquad (7)

Here x_k and x_{k-1} are the state vectors at time k and k-1, A is the state transition matrix, and w is the process noise, which is assumed to be zero mean and normally distributed with covariance Q.
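The update equations (1)-(5), together with the constant-velocity transition matrix of Eq. (7), translate directly into code. The following is a minimal NumPy sketch, not the paper's implementation:

```python
import numpy as np

def kf_predict(x, P, A, Q):
    """Time update, Eqs. (1) and (2)."""
    x = A @ x
    P = A @ P @ A.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update, Eqs. (3)-(5)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain, Eq. (3)
    x = x + K @ (z - H @ x)                       # a posteriori state, Eq. (4)
    P = (np.eye(len(x)) - K @ H) @ P              # a posteriori covariance, Eq. (5)
    return x, P

def make_A(dt):
    """Constant-velocity state transition matrix from Eq. (7)."""
    return np.array([[1.0, 0.0, dt, 0.0],
                     [0.0, 1.0, 0.0, dt],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])
```

When an absolute position measurement arrives, H simply picks the two position components out of the four-dimensional state.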

B. Extended Kalman Filter

The Kalman filter requires a linear system model and a linear measurement model. If the system model is nonlinear, it is described by a state transition function f in the following way:

x_k = f(x_{k-1}, w_k) \qquad (8)

To use a Kalman filter with such a system, it has to be linearized around the current estimate. This is done by calculating the Jacobian of f with respect to x:

A_{[i,j]} = \frac{\partial f_{[i]}}{\partial x_{[j]}} \left( \hat{x}_{k-1}, 0 \right) \qquad (9)

This is a linearization of the state transition function at the current estimate, assuming no noise. The other matrices that the Kalman filter uses have to be approximated in a similar fashion. More information about the extended Kalman filter can be found in [7]. The system can now be described through step length, step frequency and heading. In this case the state vector becomes

x = \begin{bmatrix} p_x \\ p_y \\ l \\ f \\ \alpha \end{bmatrix} \qquad (10)

with p_x and p_y the x and y positions, l the step length, f the step frequency and \alpha the heading. The state transition is described by the following equation:

x_k = f(x_{k-1}, w_k) =
\begin{bmatrix}
p_x + (l + w_3)(f + w_4)\sin(\alpha + w_5)\,\Delta t \\
p_y + (l + w_3)(f + w_4)\cos(\alpha + w_5)\,\Delta t \\
l + w_3 \\
f + w_4 \\
\alpha + w_5
\end{bmatrix} \qquad (11)

Here the process noise consists of w_3, the noise of the step length, w_4, the noise of the step frequency, and w_5, the noise of the heading. All are assumed to be zero mean and normally distributed.
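A minimal sketch of the EKF time update for this model: the transition function of Eq. (11) is propagated with the noise terms set to zero, and the Jacobian of Eq. (9) is computed analytically (this is an illustrative NumPy implementation, not the paper's code):

```python
import numpy as np

def ekf_predict(x, P, dt, Q):
    """EKF time update for the state [px, py, l, f, alpha] of Eq. (11).

    The state is propagated through f with zero noise; the covariance is
    propagated with the Jacobian A of Eq. (9), evaluated at the estimate.
    """
    px, py, l, f, a = x
    x_pred = np.array([px + l * f * np.sin(a) * dt,
                       py + l * f * np.cos(a) * dt,
                       l, f, a])
    # Jacobian A[i, j] = df[i]/dx[j] at (x, w = 0)
    A = np.array([
        [1, 0, f * np.sin(a) * dt, l * np.sin(a) * dt,  l * f * np.cos(a) * dt],
        [0, 1, f * np.cos(a) * dt, l * np.cos(a) * dt, -l * f * np.sin(a) * dt],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1],
    ])
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred
```

Because velocity is the product l·f and the heading enters through sin/cos, a heading change does not disturb the speed estimate, which is the decoupling discussed in Section V-B.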


C. Particle Filter

The particle filter is another popular sensor fusion method. It can handle nonlinear and non-Gaussian estimation problems. The filter represents a probability distribution with particles, which are a set of random samples with weights. Our particle filter is a sampling importance resampling (SIR) particle filter. Each particle x^{[i]} carries the information about the state. The filter is only explained briefly here; a more detailed explanation can be found in [8]. The SIR particle filter executes the following steps:

• Initialization: At time t = 0, N particles are generated according to the initial pdf p(x_0). All particles have the same weight.

• Recursion:
  1) Prediction sampling: At every time step a new particle is generated from the transition pdf

     p(x_t \mid x_{t-1}^{[i]})

  2) Importance sampling: Every particle is assigned a new (unnormalized) weight according to

     w_t^{*[i]} = w_{t-1}^{[i]} \, p(z_t \mid x_t^{[i]})

  3) Normalization: The weight of each particle is normalized according to

     w_t^{[i]} = \frac{w_t^{*[i]}}{\sum_{i=1}^{M} w_t^{*[i]}}

  4) Resampling: New particles are generated from the current ones. The probability that a new particle is the same as an old one is proportional to the weight of the old particle. This means that particles with higher weight, and therefore higher probability, have a better chance to survive or even duplicate during the resampling stage.

The main advantage of particle filters is that they can track multiple hypotheses. It is also relatively easy to consider map constraints: if a wall model of a building is present, one can check whether particles move through walls by comparing the old particles to the new ones after the prediction sampling stage. If a particle moves through a wall, it is deleted.
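One SIR recursion with the four steps above can be sketched as follows. The zero-mean Gaussian motion noise (std 0.1 m) and the Gaussian position likelihood are illustrative modeling assumptions, not the models used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def sir_step(particles, weights, motion, z, meas_std):
    """One SIR recursion over (N, 2) position particles."""
    n = len(particles)
    # 1) Prediction sampling: draw from the transition pdf p(x_t | x_{t-1})
    particles = particles + motion + rng.normal(0.0, 0.1, particles.shape)
    # 2) Importance sampling: weight by the likelihood p(z_t | x_t)
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    # 3) Normalization
    weights = weights / np.sum(weights)
    # 4) Resampling: survival probability proportional to the weight
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```

A wall check would go between steps 1 and 2: any particle whose prediction step crosses a wall segment is deleted (or given zero weight) before the importance weights are computed.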

IV. FUSION FRAMEWORK

All three presented filter methods are able to combine the inertial navigation with additional position measurements. There are a couple of other factors to consider when choosing a method.

• If the position measurements are not Gaussian, particle filters are a good option to handle this problem.

• If the only available information about the position is the measurement with its standard deviation, one has to assume that it is normally distributed. In this case the Kalman filter is the optimal solution.

Fig. 3. Concept of a multiple Kalman filter approach; here the estimates of three different filters are combined to produce the end result.

• Map constraints are easy to consider in a particle filter.

• When several position measurements are available, it should be possible to detect faulty measurements.

These considerations led us to some extensions of the filtering methods introduced above.

A. Central Kalman Filter

To combine the data of several position measurements, a central filter can be used. In the case of the Kalman filter, the measurement model H might have to be adjusted every time the incoming information changes. Since the measurements do not necessarily arrive at the same time intervals, it is necessary to express all matrices of the filter in dependence of Δt and recalculate them at every time step.

B. Fusion with weighted mean

If the position measurements are only available with mean and standard deviation, there is another way to fuse the information. Assume there are N measurements available. If they are independent of each other, and the i-th measurement has the variance \sigma_i^2, the weighted mean is given by:

z_c = \frac{\sum_{i=1}^{N} \frac{z_i}{\sigma_i^2}}{\sum_{i=1}^{N} \frac{1}{\sigma_i^2}} \qquad (12)

\sigma_{z_c}^2 = \frac{1}{\sum_{i=1}^{N} \frac{1}{\sigma_i^2}} \qquad (13)

The subscript c indicates that the result is the combined measurement, with mean z_c and variance \sigma_{z_c}^2. In [9] it is shown that this method minimizes the squared error. If the noise of a position measurement in x and y direction is independent, this method can be used before delivering the measurement to the filter.
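Equations (12) and (13) translate directly into code; the following sketch assumes scalar, mutually independent measurements:

```python
import numpy as np

def weighted_mean(z, var):
    """Inverse-variance weighted combination, Eqs. (12) and (13).

    z   -- measurements z_1 ... z_N
    var -- their variances sigma_i^2
    """
    z = np.asarray(z, dtype=float)
    inv = 1.0 / np.asarray(var, dtype=float)
    zc = np.sum(z * inv) / np.sum(inv)  # Eq. (12)
    var_c = 1.0 / np.sum(inv)           # Eq. (13)
    return zc, var_c
```

Note that the combined variance is always smaller than the smallest input variance, so every additional independent measurement tightens the estimate.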

C. Multiple Kalman Filters

In [10] a method for sensor fusion was introduced that is based on multiple Kalman filters. The idea behind the concept is to use several filters in parallel and fuse their estimates afterwards. The approach is shown in Figure 3. In our scenario we used the information from the inertial sensors in every Kalman filter, while every filter was given a position measurement from a different sensor. The combined estimation is given by:

x_c = \frac{\frac{x_1}{P_1} + \frac{x_2}{P_2}}{\frac{1}{P_1} + \frac{1}{P_2}} \qquad (14)

Here x_c is the combined estimate, x_1 and x_2 are the estimates of the two filters, and P_1 and P_2 are their error covariance matrices. In [11] a similar approach was presented. The main difference is that their method is not only able to fuse the information of multiple filters, but can also detect faulty measurements and remove them from the combined estimate.
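Since P_1 and P_2 are matrices, Eq. (14) is read with matrix inverses in place of the divisions; a minimal sketch:

```python
import numpy as np

def combine_estimates(x1, P1, x2, P2):
    """Covariance-weighted combination of two filter estimates, Eq. (14)
    written with matrix inverses."""
    W1 = np.linalg.inv(P1)
    W2 = np.linalg.inv(P2)
    Pc = np.linalg.inv(W1 + W2)      # combined covariance
    xc = Pc @ (W1 @ x1 + W2 @ x2)    # combined estimate
    return xc, Pc
```

With equal covariances this reduces to the plain average of the two estimates; an estimate with larger uncertainty is weighted down accordingly.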

V. EVALUATION

In order to evaluate the performance of the system, we first took a look at the different parts by themselves: first the inertial navigation, then the filters, which compose the core of the sensor fusion, and finally the different fusion frameworks with their advantages and disadvantages.

A. Step Detection and Direction

We did several test runs with a Galaxy Nexus to test our step detection algorithm. The results are shown in Table I. It can be seen that the step detection works reliably even for different smartphone placements.

TABLE I. RESULTS OF THE STEP DETECTION

Placement   Measured steps   Counted steps   Percentage error
Hand        984              991             0.7%
Jacket      1015             998             1.7%
Trousers    1027             995             3.2%

Next, a test for the detection of the direction of movement was performed. While [6] report that they were able to determine the direction even when the smartphone was placed in the pocket, we were not able to reproduce their results. This is most likely because a dedicated inertial measurement unit was used in [6], which has better sensors and a better update rate than the sensors in our smartphone. We therefore need to know the orientation of the smartphone in relation to the direction of movement; if it is known, the compass of the smartphone can be used to determine the direction of movement. We also tested the accuracy of the heading detection over several test runs. The results are shown in Table II.

TABLE II. ROOT MEAN SQUARE (RMS) AND STANDARD DEVIATION OF THE ERROR IN THE HEADING DETECTION

Run                    σ [°]     RMS [°]
Hand, track 1, run 1   9.3745    7.7499
Hand, track 1, run 2   9.9566    7.9386
Hand, track 1, run 3   10.5275   8.5821
Hand, track 2          16.2168   12.9801

B. Comparison of the filter methods

Since we want to implement the algorithm on a smartphone, computation time was an important factor. To test the computation time we implemented the different algorithms in MATLAB and tested them with inertial data from test runs with the smartphone. The results of one test run are shown in Figure 4.

Fig. 4. A test run using only the data from the inertial sensors, comparing the three different filter methods.

For this run we used a linear Kalman filter with σ_position = 0.1 m² and σ_velocity = 0.01 m²/s²; the update time was 0.5 seconds. The particle filter was run with the same configuration and 200 particles. The extended Kalman filter was initialized with σ_freq = 0.1 Hz², σ_alpha = 20° and σ_steplength = 0.1 m². The measurement noise was set to σ_velocity = 0.2 m²/s² in both directions for the linear systems, and to σ_alpha = 10°, σ_freq = 0.1 Hz² for the extended Kalman filter.
The results show that all filters are able to follow the ground truth at least roughly, even if only the inertial sensors are used. The Kalman and the particle filter perform very similarly, since the same system model was used. The extended Kalman filter performs a little better, since it is able to follow the ground truth around corners faster than the linear systems. This is because of the decoupling of velocity and direction in the nonlinear system, which allows faster turns while still keeping the overall magnitude of the velocity.
On a desktop PC the Kalman filter took 0.055 s to compute; the extended Kalman filter, with 0.148 s, took roughly three times as long. The particle filter took 1.61 s, about ten times the computation time of the EKF. Given that the particle filter ran with a simpler model than the nonlinear EKF and did not use map constraints, we decided to focus on the Kalman filters in our further study.
We simulated an indoor positioning system by adding noise to the ground truth. The results of such a run, which now uses the inertial sensors and additional position information, are shown in Figure 5. The filter received a position update every five seconds, with a standard deviation of 2 m. It was still able to follow the ground truth closely (with a mean error of 1.8 m).

C. Comparison of the sensor fusion frameworks

In further initial tests with real data from the inertial sensors and simulated position measurements, we were able to assess the strengths and weaknesses of the different frameworks.


Fig. 5. A test run combining the data from the inertial sensors with additional position information, using an extended Kalman filter.

• Central filter: Using a central filter to fuse all of the data is the standard method. If the noise of the position in x and y direction is correlated, but can still be expressed in a covariance matrix, this approach performed best in our tests.

• Fusion with weighted mean: If several position measurements are available at a certain time step, and the noise of each direction is independent, this method performs exactly like a central filter, which shows that the weighted mean works correctly. The advantage of this method is that it can be faster, since it prevents the measurement matrix from becoming too big.

• Multiple Kalman filters: The results of the multiple Kalman filter approach were identical to the results of the previous two architectures. The main advantage of this method is the ability to recognize a filter that consistently produces wrong estimates. The method to do this is explained in [11] and might prove useful, especially for GPS measurements inside of buildings, which tend to be biased.

A first evaluation of a fusion architecture, using a Kalman filter with weighted averaging, was implemented on a mobile phone and showed promising results. A test run is pictured in Fig. 6.

VI. CONCLUSION

We were able to show that the best architecture depends on what kind of information is available. In addition, we were able to identify parameters to consider when choosing a solution.
If map information, especially walls, and detailed information about the measurement noise of the positioning are available, a particle filter provides the best solution. If no map information is available, and the positioning systems only give an estimate of the noise (like a variance or standard deviation), a Kalman filter-based approach delivered the same accuracy for less computational cost.

Fig. 6. This picture shows the combination of inertial navigation (red), GPS data (yellow) and the scanning of a QR tag for a highly accurate position (magenta). The green line indicates where GPS was merged with the inertial navigation. GPS was manually turned off outside a building (where the yellow line ends).

REFERENCES

[1] P. Bahl and V. N. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system," in Proc. IEEE INFOCOM 2000, pp. 775–784.

[2] T. King, S. Kopf, T. Haenselmann, C. Lubberger, and W. Effelsberg, "COMPASS: A probabilistic indoor positioning system," in Proc. 1st International Workshop on Wireless Network Testbeds (WiNTECH '06), New York, NY, USA, 2006, pp. 34–40.

[3] K. Chintalapudi, A. P. Iyer, and V. N. Padmanabhan, "Indoor localization without the pain," in Proc. 16th Annual International Conference on Mobile Computing and Networking (MobiCom '10), New York, NY, USA: ACM, 2010, pp. 173–184.

[4] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan, "The Cricket location-support system," in Proc. 6th Annual International Conference on Mobile Computing and Networking (MobiCom '00), ACM, 2000, pp. 32–43.

[5] R. Want, A. Hopper, V. Falcão, and J. Gibbons, "The Active Badge location system," ACM Transactions on Information Systems (TOIS), vol. 10, no. 1, pp. 91–102, 1992.

[6] U. Steinhoff and B. Schiele, "Dead reckoning from the pocket - an experimental study," in Proc. IEEE International Conference on Pervasive Computing and Communications (PerCom 2010), March 29 - April 2, 2010, p. 162.

[7] G. Welch and G. Bishop, "An introduction to the Kalman filter," University of North Carolina at Chapel Hill, Tech. Rep., 1995.

[8] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, 2005.

[9] R. Rojas, "The Kalman filter," Freie Universität Berlin, Institut für Informatik, Tech. Rep., 2002.

[10] L. Drolet, F. Michaud, and J. Côté, "Adaptable sensor fusion using multiple Kalman filters," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), pp. 1434–1439.

[11] T.-G. Lee, "Centralized Kalman filter with adaptive measurement fusion: Its application to a GPS/SDINS integration system with an additional sensor," International Journal of Control, Automation, and Systems, vol. 1, no. 4, December 2003.
