
Range-based Wireless Sensor Network Localization for Planetary Rovers

August Svensson

Space Engineering, master's level

2020

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering

Master's Thesis

Examiner: Dr Johnny Ejemalm
External Supervisors: Michael Dille and Uland Wong


Abstract

Obstacles faced in planetary surface exploration require innovation in many areas, primarily that of robotics. To be able to study interesting areas that are by current means hard to reach, such as steep slopes, ravines, caves and lava tubes, the surface vehicles of today need to be modified or augmented. One augmentation with such a goal is PHALANX (Projectile Hordes for Advanced Long-term and Networked eXploration), a prototype system being developed at the NASA Ames Research Center. PHALANX uses remote deployment of expendable sensor nodes from a lander or rover vehicle. This enables in-situ measurements in hard-to-reach areas with reduced risk to the rover. The deployed sensor nodes are equipped with capabilities to transmit data wirelessly back to the rover and to form a network with the rover and other nodes.

Knowledge of the location of deployed sensor nodes and the momentary location of the rover is greatly desired. PHALANX can be of aid in this aspect as well. With the addition of inter-node and rover-to-node range measurements, a range-based network SLAM (Simultaneous Localization and Mapping) system can be implemented for the rover to use while it is driving within the network. The resulting SLAM system in PHALANX shares characteristics with others in the SLAM literature, but with some additions that make it unique. One crucial addition is that the rover itself deploys the nodes. Another is the ability for the rover to more accurately localize deployed nodes by external sensing, such as by utilizing the rover cameras.

In this thesis, the SLAM of PHALANX is studied by means of computer simulation. The simulation software is created using real mission values and values resulting from testing of the PHALANX prototype hardware. An overview of issues that a SLAM solution has to face as present in the literature is given in the context of the PHALANX SLAM system, such as poor connectivity and highly collinear placements of nodes. The system performance and sensitivities are then investigated for the described issues, using predicted typical PHALANX application scenarios.

The results are presented as errors in the estimated positions of the sensor nodes and in the estimated position of the rover. I find that the system is sensitive to the investigated parameters to varying degrees, but that in general SLAM in PHALANX is fairly insensitive. This gives mission planners and operators greater flexibility to prioritize other aspects important to the mission at hand. The simulation software developed in this thesis work also has the potential to be expanded on as a tool for mission planners to prepare for specific mission scenarios using PHALANX.


Acknowledgements

I thank NASA and SNSA for my opportunity to participate in the NASA International Internship Project (I2).

I also want to heartily thank my mentors Michael Dille and Uland Wong at the Intelligent Robotics Group at NASA Ames for their help, advice and discussions throughout the work process.


Contents

1 Introduction

2 Formalization
  2.1 SLAM: Simultaneous Localization and Mapping
  2.2 The full SLAM problem
  2.3 The online SLAM problem
    2.3.1 Optimal estimation with the Kalman Filter
    2.3.2 The Extended Kalman Filter
    2.3.3 Alternatives to the EKF
    2.3.4 The Particle Filter
  2.4 Common issues faced in SLAM
  2.5 Issues faced in range-only SLAM
    2.5.1 Collinearity of landmarks
    2.5.2 Rover trajectory
    2.5.3 Wireless sensor networks
    2.5.4 Articulation points
    2.5.5 Anchor nodes
  2.6 SLAM in PHALANX
    2.6.1 Data association
    2.6.2 Anchorization of nodes
    2.6.3 Deployment of nodes

3 Implementing the Simulation Model
  3.1 The System Model
    3.1.1 The process model
    3.1.2 Measurements and mutual localization
    3.1.3 Process and measurement function Jacobians
    3.1.4 Missing measurements
    3.1.5 Deployment of sensor nodes
    3.1.6 Anchorization
  3.2 Evaluation metrics

4 Example scenarios

5 Results
  5.1 Baseline
  5.2 Anchorization
  5.3 Collinear node-placements
  5.4 Rover trajectory and collinearity
  5.5 General trajectory strategies
  5.6 Articulation points

6 Discussion
  6.1 Discussion of results
  6.2 Limitations of the simulation program
  6.3 Future work


1 Introduction

Planetary rovers continue to play a major role in planetary exploration. They enable a range of science instruments not feasible for orbital sensing, such as sample extraction or analysis and in situ atmospheric measurements. However, the high mission costs result in them being operated in a very risk-averse manner. Frequently, scientific targets are located beyond obstacles such as steep slopes and rock formations and are therefore avoided. Many locations, such as lunar pits [23] and lunar or Martian lava tubes [3, 6], are not only difficult for rovers to maneuver in, but also severely limit return communication links.


(a) A collapsed lava tube north of Arsia Mons on Mars, as seen from the Mars Orbiter Camera (image ID: R0800159). (b) A slope on “Bloodstone Hill”, which was beyond the steepness comfort zone of Curiosity [11]. Image from the Left Navigation Camera onboard Curiosity, taken on Sol 2797 of its mission. (Credit: NASA/Jet Propulsion Laboratory, Caltech)

In response to this, a prototype system, PHALANX (Projectile Hordes for Advanced Long-term and Networked eXploration), is under development at NASA Ames Research Center [7]. It employs ranged deployments of wireless sensor nodes: small sensor-equipped units meant to collect scientific data and wirelessly transmit it back to the rover. The nodes can be equipped with instruments that measure gas composition and temperature, capture microscopic imagery, and even provide in-flight overhead imaging. Placed nodes can collect data over long time periods to measure ephemeral and dynamic effects, otherwise not tractable considering valuable rover mission time.


Together, nodes form a network, enabling communication by relay over longer distances, around obstacles and along caves.


(a) A prototype launcher in the Roverscape at NASA Ames, and (b) a prototype node projectile with a microscope payload.

For the science data to be useful, their geographical locations need to be recorded. But without a convenient satellite positioning system (such as our earthly GPS), we have to establish these coordinates by some other means.

Localization means are limited given the size, shape and use-case of the sensor nodes. Considering the limitations, RF transceivers are a good choice. They enable easy node-to-node measurements, make for trivial data association and allow measurements out of line-of-sight to some extent, all using minimal additional hardware.

With ranging RF transceivers, it is possible to measure the distance between connected members of the formed network, be it node to node or rover to node. Estimating or calculating positions is called localization. Performing localization using distances between nodes or landmarks is called multilateration [18].

Using only a few known positions and the measured ranges, the remaining positions can thus be established by solving a system of equations. However, in reality, the availability and the accuracy of prior knowledge are often insufficient, and the range measurements are never exact. The solution to the localization problem is therefore in reality an estimate calculated by minimizing a cost function, rather than a single solution from a system of equations.
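As a concrete illustration, the following minimal sketch estimates an unknown position by minimizing range residuals with nonlinear least squares. It is written in Python purely for illustration and is not the PHALANX implementation; the anchor coordinates, the true position and the noise level are made-up values.

import numpy as np
from scipy.optimize import least_squares

# known positions (m) of three localized members of the network
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [10.0, 25.0]])
true_pos = np.array([12.0, 9.0])  # unknown in reality; used here only to simulate ranges
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0.0, 0.2, 3)

def residuals(p):
    # difference between ranges predicted from candidate position p and the measured ranges
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([15.0, 15.0])).x
print(estimate)  # close to true_pos, limited by the 0.2 m range noise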

Because the rover is part of the formed network, an opportunity also arises to aid rover localization, commonly performed by a combination of dead-reckoning and visual odometry [13].


The latter is an accurate and precise, although expensive, computation, typically performed rather infrequently, except in specific high-precision slow-driving modes. The problem of localizing a moving robot or rover while simultaneously localizing (or mapping) surrounding landmarks, features or nodes is known as simultaneous localization and mapping, or SLAM.

When it comes to range-based localization in wireless networks, performance is dependent on various qualities, such as node connectivity, measurement noise and network geometry. While measurement noise is hard to affect during the mission, the ability to add sensor nodes rather flexibly and to shape the network geometry makes this an interesting topic for mission planners and operators. Additionally, regarding the rover as a mobile node in the network, the path that it takes directly influences network geometry.

The required level of accuracy and precision varies by mission and situation. Real-time rover localization, for example, is of heightened importance in poor terrain such as lunar regolith or soft soils, where failure to detect slippage can result in mission failure if the rover becomes irrecoverably embedded [2]. Subject to scientific demand, certain measurements taken from node sensors need more exact position estimates accompanying them.

In this thesis I investigate the performance and sensitivities of such a localization system using computer simulations, with regard to peculiarities of similar systems in the literature. In section 2, I introduce the reader to range-based localization and SLAM in the context of the PHALANX system. In section 3, I then describe the models used in the implementation of the simulation software. In sections 4 and 5 I perform experiments using the implemented software and present the results. Finally, in section 6, I discuss the results, the simulation software and future work.


Notation and Abbreviations

A quick list of abbreviations and mathematical notation conventions is included here for convenience.

The general mathematical convention

All matrices are represented by capital letters. Some quantities are written as N or M, and the Jacobian operator as J, but context should make this distinction clear.

Vectors are written with a bold font. Functions are written in non-bold font, regardless of their dimension.

Abbreviations

EKF - Extended Kalman Filter

GPS - Global Positioning System

IMU - Inertial measurement unit

NASA - National Aeronautics and Space Administration

PHALANX - Projectile Hordes for Advanced Long-term and Networked eXploration

RF - Radio-frequency

RMSE - Root mean squared error

RO-SLAM - Range-only SLAM

SLAM - Simultaneous Localization and Mapping

WSN - Wireless sensor network


2 Formalization

For rovers to perform their tasks effectively, they need to be able to navigate in their environment. This amounts to two tasks: creating and maintaining an accurate map of the surroundings, and accurately estimating one's own position in relation to this map. In robotics this is referred to as simultaneous localization and mapping, or SLAM. In this section I describe SLAM generally, and more specifically to PHALANX: range-only and wireless sensor network SLAM. I introduce and describe common issues in these types of SLAM and how they relate to the PHALANX system.

2.1 SLAM: Simultaneous Localization and Mapping

The SLAM problem has a big presence in the field of robotics. It applies whenever robots must navigate in an unknown or uncertain environment. To do so effectively, they must map their surroundings while also keeping track of their own position and bearing. The information available to the robot is at a minimum typically in the form of measurements and control input. Using measurements of quantities such as speed and direction to estimate position is known as dead reckoning or deduced reckoning. This is very computationally cheap, and is a legitimate strategy when traversing areas where there are no obstacles to avoid and that require no mapping, such as wide open flat areas in between mapping objectives. However, this form of localization is very error prone, as the errors are essentially free to accumulate. By incorporating measurements of the surroundings, the error can be constrained in relation to these surroundings. A map of the surroundings thus needs to be maintained to estimate the position of a moving robot.

Figure 1: A robot is updating its position estimate using dead reckoning. The error grows (represented by the blue ellipses) as it drives and integrates its measurements.

There are many SLAM algorithms, each with their own strengths and weaknesses. At a high level, they can be roughly divided into full SLAM and online SLAM.


Many prominent algorithms in the latter category use variations and extensions of the Kalman Filter and the Particle Filter. These are known as state observers in control theory. The state of a system is a collection of quantities typically relevant to the control of the system. In SLAM the state consists of the robot pose and the map. The role of the observer in a control system is to estimate the current state as accurately and precisely as possible, given the information available through both measurements and control input.

Next I will briefly describe the full SLAM problem and some algorithms fitting this category. After that, I will describe the online SLAM problem in further detail, as the algorithm implemented later in this document falls under this category of SLAM.

2.2 The full SLAM problem

In the full (sometimes “global” or “offline”) SLAM problem, the system state, measurements and control inputs are saved in time series throughout the operation. All of these values are used to calculate the estimate of the entire path and map. This commonly amounts to minimizing a nonlinear cost function. A very simple example of such a function is

C = \sum_t \left[ (\delta_x^T W_x \delta_x)_t + (\delta_z^T W_z \delta_z)_t \right]    (1)

where the superscripted T denotes the transpose, and where δ_{x,t} and δ_{z,t} are differences between the actual pose and map (the variables to estimate by minimizing C) and constraints or prior estimates. For example, if a function f estimates the next position given a previous position and control input, then

δ_{x,t} = x_t − f(x_{t−1}, u_t)

would be the difference between the actual pose x_t at time t and the prior estimate given the prior pose x_{t−1} and the control input u_t. Similarly,

δ_{z,t} = z_t − h(x_t)

would be the difference between the actual measurements z_t and the expected measurements. W_x and W_z are weight matrices that can assign different importances to the estimates.

The cost function can be minimized by numerous numerical methods, such as the iterative Nelder-Mead method [19].
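As an illustration of this approach, the toy Python sketch below minimizes a cost of the same form as equation 1 with the Nelder-Mead method for a one-dimensional robot and a single landmark. The control inputs, measurements and weights are invented for the example; this is not the thesis software.

import numpy as np
from scipy.optimize import minimize

u = np.array([1.0, 1.0, 1.0])        # commanded steps for time steps 1..3
z = np.array([4.1, 2.9, 2.1, 0.9])   # noisy ranges to the landmark at times 0..3
w_x, w_z = 1.0, 4.0                  # scalar stand-ins for the weight matrices W_x, W_z

def cost(theta):
    # theta = [x_1, x_2, x_3, l]; the initial pose x_0 is fixed at 0
    x = np.concatenate(([0.0], theta[:3]))
    landmark = theta[3]
    c_process = np.sum(w_x * (x[1:] - (x[:-1] + u)) ** 2)       # the delta_x terms
    c_measure = np.sum(w_z * (np.abs(landmark - x) - z) ** 2)   # the delta_z terms
    return c_process + c_measure

result = minimize(cost, x0=np.array([1.0, 2.0, 3.0, 4.0]), method="Nelder-Mead")
print(result.x)  # estimated path x_1..x_3 and landmark position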

The GraphSLAM algorithm is another example that solves the full SLAM problem [21]. During robot operation, it populates a graph with soft constraints in the form of measurement and control data across time. When calculating an estimate, it uses this graph to reconstruct the map and the path.


2.3 The online SLAM problem

The online SLAM algorithms maintain a momentary, current estimate of the robot pose and the map. This is in contrast to those under the full SLAM category, where historical values are saved. Online SLAM is therefore iterative and updated continually. Where full SLAM algorithms integrate all past values for each new estimate, in online SLAM the integration is in a sense ongoing throughout their operation.

The simulation used in this thesis uses the Extended Kalman Filter, which is an online SLAM algorithm. It is relatively simple to implement and convenient to use. Additionally, a continually updated estimate is desired in the case of interplanetary rovers, as momentary changes in pose are significant information for ensuring mission safety.

I will now first introduce the Kalman Filter and then briefly the general principle of the Particle Filter.

2.3.1 Optimal estimation with the Kalman Filter

In the late 1950s to early 1960s, Rudolf E. Kalman and others developed what has become known as the Kalman Filter. The Kalman Filter is an optimal filter provided that certain assumptions can be made about the system. Consider a system that can be described by the equations

x_t = A_t x_{t−1} + B_t u_t + v_t    (2)

z_t = C_t x_t + w_t    (3)

where x_t and u_t are the state and control input vectors respectively, z_t is the measurement vector and v_t and w_t are vectors of zero-mean Gaussian (normally distributed) noise. The state vector contains the “state” of the entire system, which in the case of SLAM typically includes the position and bearing of the robot, as well as the positions of landmarks in the environment. The first equation is called the process equation and v_t is therefore called the process noise. The A matrix describes the system dynamics: how the state evolves in the absence of control input. The B matrix simply models how the control input affects the state. The second equation is called the measurement equation and so w_t is the measurement noise. The C matrix models how the state relates to the measurements, which are observations of the state. Given that the system can be modeled with these equations, one can make use of the Kalman Filter.

The Kalman Filter maintains a covariance matrix P_t, which is important for understanding the filter. It can be thought of as a measure of the certainty or uncertainty of each state variable. It is a square matrix with a number of columns and rows equal to the number of state variables in x_t. The diagonal entries of P_t are the variances of the state variables and the off-diagonal entries are the covariances of each pair of state variables.


A large variance means that the corresponding state variable is very uncertain and, conversely, a small variance means that the variable is certain in its estimate.

The optimality of the filter follows from theory assuming that equations 2 and 3 match the system exactly and that the noise variances are known and the noises mutually uncorrelated. I will omit the derivation and proof of optimality for the Kalman Filter in this document. (For the interested reader these are readily found online and in most robotics and control theory textbooks due to the filter's popularity. Some good resources include Welch and Bishop [24], Thrun, Burgard, and Fox [22], and Maybeck [16].) However, below I describe the steps and the equations that make up the algorithm when used in a control loop.

0. Initialize the state x_0 and its covariance matrix P_0.

1. Apply control (e.g. move forward) and add the modeled result to the estimated state according to the process equation:

\hat{x}_t = A_t x_{t−1} + B_t u_t    (4)

where the circumflex (ˆ) in \hat{x}_t implies that it is an estimate. This estimate is called the prior estimate.

2. Update the covariance matrix corresponding to the above transformation. This amounts to applying the equation:

P_t = A_t P_{t−1} A_t^T + Q_t    (5)

where A_t is the state-transition matrix and Q_t is the covariance matrix corresponding to the process noise v_t in equation 2.

3. Take measurements and calculate expected measurements using the measurement equation and the prior state estimate:

\hat{z}_t = C_t \hat{x}_t    (6)

Here again the circumflex in \hat{z}_t implies that it is an estimate. That is, given the prior estimate \hat{x}_t and the measurement model, \hat{z}_t is the measurement vector that is expected. The actual measurement vector z_t obtained from the sensors will differ slightly from the expected one.

4. Calculate the Kalman gain matrix:

K_t = P_t C_t^T (C_t P_t C_t^T + R_t)^{−1}    (7)

where C_t is the observation matrix and R_t is the covariance matrix of the measurement noise w_t in equation 3.


5. Calculate the posterior estimate using the Kalman gain, the prior estimate and the actual and expected measurement vectors:

x^*_t = \hat{x}_t + K_t (z_t − \hat{z}_t)    (8)

The asterisk on x^*_t is used to denote that it is the posterior estimate.

6. Calculate the posterior covariance matrix using the equation:

P^*_t = (I − K_t C_t) P_t    (9)

That summarizes the Kalman Filter used in a control loop. Steps 1 and 2 are repeated for every time step.¹ Steps 3 through 6 are repeated every time step for which there are measurements available to be used. The loop can be summarized neatly as in figure 2.

Figure 2: Summary of the main processes of the Kalman Filter
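The loop above can be condensed into a few lines of linear algebra. The following NumPy sketch of one filter cycle is only illustrative; the matrices and the 1D position-velocity example at the end are placeholders, not values from PHALANX.

import numpy as np

def kalman_step(x, P, u, z, A, B, C, Q, R):
    # steps 1-2: prior estimate and prior covariance
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    # step 3: expected measurement
    z_expected = C @ x_prior
    # step 4: Kalman gain
    K = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
    # steps 5-6: posterior estimate and posterior covariance
    x_post = x_prior + K @ (z - z_expected)
    P_post = (np.eye(len(x)) - K @ C) @ P_prior
    return x_post, P_post

# example: 1D position-velocity state, acceleration input, position-only measurement
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.04]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, u=np.array([0.1]), z=np.array([0.05]), A=A, B=B, C=C, Q=Q, R=R)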

One big limitation of the Kalman Filter stems from the fact that most systems in SLAM are not linear. They can therefore not be described by equations 2 and 3. To use the Kalman Filter with nonlinear systems, one can look to the Extended Kalman Filter (EKF), as we shall do next.

2.3.2 The Extended Kalman Filter

As mentioned, the Kalman Filter in its original form does not apply to nonlinear systems. This is a problem because most SLAM problems are of this kind.

¹ As a side note, in SLAM systems where the state doesn't change in the absence of control input u_t, the state-transition matrix A_t is equal to the identity matrix. It therefore suffices to repeat these steps every time step that also has a non-zero u_t.


Consider for example a bearing measurement from a robot r to a landmark i:

θ_{ri} = \arctan\frac{y_i − y_r}{x_i − x_r} − θ_r    (10)

or a range measurement:

d_{ri} = \|\mathbf{x}_r − \mathbf{x}_i\|    (11)

These types of measurement are very commonly used in SLAM.

If equations 2 and 3 are rewritten in a more general form, the Extended Kalman Filter can be used:

x_t = f(x_{t−1}, u_t) + v_t    (12)

z_t = h(x_t) + w_t    (13)

where, as in the original equations, v_t and w_t are Gaussian, zero-mean noise. f and h are the state-transition and measurement functions.

The EKF follows the same steps as outlined in the Kalman Filter section previously, but A_t, B_t and C_t are replaced by linearizations of f and h around \hat{x}_t. To perform these linearizations we use the Jacobian:

J_f(x) = \frac{\partial f}{\partial x} =
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\
\vdots & \vdots & & \vdots \\
\frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{pmatrix}    (14)

The linearizations are updated each time they are to be used. The Jacobian of f around \hat{x}_t is written as F_t below, and similarly that of h as H_t. We can then adjust the algorithm as below.

0. Initialize x_0 and P_0

1. Calculate prior estimate

\hat{x}_t = f(x_{t−1}, u_t)    (15)

2. Calculate prior covariance

P_t = F_t P_{t−1} F_t^T + Q_t    (16)

3. Measurements and expected measurements

\hat{z}_t = h(\hat{x}_t)    (17)


4. Calculate the Kalman gain

K_t = P_t H_t^T (H_t P_t H_t^T + R_t)^{−1}    (18)

5. Calculate the posterior estimate

x^*_t = \hat{x}_t + K_t (z_t − \hat{z}_t)    (19)

6. Calculate the posterior covariance matrix

P^*_t = (I − K_t H_t) P_t    (20)

The EKF process is shown in figure 3.

Figure 3: Summary of the main processes of the Extended Kalman Filter

Crucial to SLAM is the addition of features as new ones are discovered. In EKF SLAM, this amounts to augmenting the state vector x and the covariance matrix P. This can be as simple as appending the Cartesian coordinates of the feature to x and the corresponding rows and columns to P.
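A sketch of such an augmentation is shown below. The initial variance is an arbitrary placeholder, and for simplicity the cross-covariances between the new feature and the rest of the state are left at zero; section 3.1.5 describes the deployment-specific initialization actually used in the simulation.

import numpy as np

def augment_state(x, P, feature_xy, initial_var=25.0):
    # append the feature's Cartesian coordinates to the state vector
    x_aug = np.concatenate([x, feature_xy])
    # grow the covariance matrix by two rows and columns
    n = len(x)
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P                        # existing covariances are kept
    P_aug[n:, n:] = initial_var * np.eye(2)  # uncertain new feature, uncorrelated for now
    return x_aug, P_aug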

Clearly, a linearization around a point is only an approximation of the function near that point. The EKF is therefore not an optimal filter, as the model no longer describes the system exactly, which was a required condition for optimality in the original Kalman Filter.


2.3.3 Alternatives to the EKF

As mentioned, the EKF performance relies on the quality of the linearizations it uses to approximate the system. The linearizations are poor if the system is highly nonlinear. In those cases, the linearization diverges quickly from the original modeling function. The noise present in the input, whether process noise or measurement noise, determines how far from the linearization point the function is actually evaluated. This can result in big modeling errors.

The Unscented Kalman Filter [22] is, like the EKF, a way to apply the Kalman Filter to nonlinear systems. Instead of linearizing with Jacobians, the nonlinear functions are evaluated at several points surrounding the point of linearization. From these values a new Gaussian mean and covariance are constructed for the function. As this method does not use derivatives, it can be applied to systems whose functions aren't differentiable.

The Kalman Filter assumes normal distributions for the noise and state variables, which aren't always good representations of the true distributions. This is the case in range-only SLAM, as we shall see in a later section. Djugash, Singh, and Grocholsky [9] solve this by maintaining a multi-hypothesis EKF. The multi-hypothesis filter works analogously to a particle filter, a category of algorithms which I will briefly describe next.

2.3.4 The Particle Filter

One drawback of the Kalman Filter is that it assumes normal distributions of noise and state. Normal distributions are unimodal, so Kalman Filters are very vulnerable to measurements that produce solutions with multimodal distributions.

The particle filter is a category of algorithms that can be applied to SLAM in various forms. It uses particles: N-dimensional points where each point is a possible state vector, and that collectively represent the state probability distribution. In this way, the particle filter can represent any distribution, including multimodal ones.


Figure 4: A 1-dimensional example of a bimodal distribution that is approximated using a unimodal distribution, vs. using particles

There are many extensions of the particle filter, but the general algorithm can be roughly outlined as follows.

1. Produce M particles estimating the state vector and assign them appropriate weights (also known as importance factors); a high weight represents a higher probability. If completely uncertain, a uniform distribution is achieved by assigning equal weights.

2. Apply the process update (f in this document) to each particle, but include a sample of the noise model. This is the prior of the particle filter.

3. Get the measurement (z). Update the particle weights by calculating the likelihood of the measurements given each particle, i.e. calculate the probability of the noise z − h(x) under your noise model.

4. Use the new weights to calculate the weighted average of the particles. This is the state estimate of the particle filter.

5. Back to step 2.

Sampling the noise model in the second step will make the particles “spread out” to account for the uncertainty of the true state value.
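A minimal Python sketch of steps 1 through 5 is given below for a one-dimensional robot observing its position directly. The motion and noise models are invented for the example, and the resampling step used by most practical particle filters is omitted.

import numpy as np

rng = np.random.default_rng(0)
M = 1000
particles = rng.uniform(-10.0, 10.0, size=M)  # step 1: particles spread over the state space
weights = np.full(M, 1.0 / M)                 # equal weights represent a uniform belief

def particle_step(particles, weights, u, z, sigma_u=0.1, sigma_z=0.2):
    # step 2: process update with a sample of the process noise for each particle
    particles = particles + u + rng.normal(0.0, sigma_u, size=particles.size)
    # step 3: reweight by the likelihood of the measurement given each particle
    residual = z - particles
    weights = weights * np.exp(-0.5 * (residual / sigma_z) ** 2)
    weights = weights / np.sum(weights)
    # step 4: weighted average of the particles as the state estimate
    estimate = np.sum(weights * particles)
    return particles, weights, estimate

particles, weights, estimate = particle_step(particles, weights, u=1.0, z=1.1)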

2.4 Common issues faced in SLAM

In most SLAM problems, one has to deal with relative positioning. As the robot moves through and observes an unknown environment, given that the measurements are relative to the robot, the resulting map will be relative as well. While the errors may be small local to the robot, their accumulation throughout the robot's journey can produce a significantly warped map in comparison with ground truth.


Another issue to consider arises in feature-based SLAM. A feature, such as a beacon or a landmark, needs to be identified correctly. Is it a newly discovered feature to be added to the map, or is it a repeated measurement that should update the location of a previously discovered one? This is an important issue, as a few misidentifications can quickly lead to confusion, such as mapping several positions that in reality belong to the same feature. It is easy to understand how the error that arises from such duplicates propagates in the map of a system using measurements such as the bearing between two features.

The choice of SLAM method and technology is often secondary to other mission requirements and design limits. There are therefore many subcategories of SLAM, using different instruments and types of measurements, such as cameras for visual SLAM methods [20], or lidar SLAM [5]. Each such subcategory encounters unique problems to solve and mitigate. The PHALANX system deals with range-based measurements in wireless sensor networks, and so that is what we shall look at in the following section.

2.5 Issues faced in range-only SLAM

Range-only SLAM (RO-SLAM) is a subcategory of SLAM where all measurements are distances to features. Range measurements can be obtained by many different sensors and techniques. Time-of-arrival techniques [14, 12] are tractable methods when using RF transmitters.

Figure 5: Range-only measurements from a robot to several landmarks.

On its own with no prior knowledge, a single measurement would not be able to yield a resulting position estimate, even with zero measurement error. The possible positions given a perfect measurement to a landmark lie on a circle with the measured distance as its radius, as illustrated in figure 6.


Figure 6: A single measurement (blue dashed line) from the robot to the feature (black triangle) produces a circle of possible locations that said feature can be in, relative to the robot.

Typically, this circle would be constrained to a point if an additional bearing measurement were available. In RO-SLAM, however, this is clearly not possible and more involved methods must be used. One way of dealing with this is to iteratively update each feature with multiple measurements from different points in the robot's path. Using the combined data of the robot's odometry and of the multiple measurements, the positions of the features are constrained.

With additional landmarks and associated measurements, the number of possible solutions decreases. The use of circles to visualize the range-only localization problem is a very useful one, as we shall see. Consider adding one additional landmark, shown in figure 7. This time, we center the measurement circles on the landmarks. The circles representing the set of possible positions have only two common points, their intersections. Adding a third landmark and measurement further constrains the solution set to one point (in two-dimensional space).

A more realistic illustration takes into account the noise present in the measurement. In the case of a zero-mean, unimodal error distribution, this is then represented by an annulus rather than a circle, see figure 8. The annular area is a confidence interval that to some degree of certainty contains the true distance.


Figure 7: Measurements to two landmarks are available. The resulting possible robot positions are marked by green crosses.

Figure 8: The true measurement (dashed blue) forms a circle (dashed black), and the noisy measurement with an error σ shapes the circle into an annular area (dashed red).

The visualization of a range measurement as an annulus can provide further insight into the limitations of RO-SLAM. We will see that the relative positions of landmarks and robot become more significant. Specifically, we are talking about the collinearity of the positions of the involved landmarks and robot.

2.5.1 Collinearity of landmarks

To illustrate the issue of collinearity in landmarks, we consider again a robot measuring ranges to two landmarks. First, in figure 9, we place two landmarks to either side of the ranging robot.


Figure 9: A robot is right between two landmarks, forming almost a perfect line. The overlapping region of the annuli, the area dashed red, is large.

Next, for comparison, in figure 10 we move one node to the side, so that the collinearity is decreased.

Figure 10: The robot is no longer forming a line together with the landmarks. The overlapping region of the annuli, dashed in blue, is smaller.

These illustrations provide an intuition of why collinearity is bad in range-based localization. This is analogous to “dilution of precision”, a term commonly seen in the area of GPS technology. The overlapping area grows (is diluted) for collinear positions. Remember that these annuli are confidence areas, i.e. they contain the real measurement value with some specific probability. For the same probability, a larger area implies a less certain solution. The smaller the overlapping area of these annuli, therefore, the better the localization result is likely to be.


2.5.2 Rover trajectory

The path of a robot in relation to landmarks can also be significant for the localization performance. Consider a robot moving in a path tangential to the circle of measurement from a landmark, shown in figure 11.

Figure 11: A robot is driving tangentially to the circle of the range measurement.

The information gain from measurements along the robot trajectory is zero, as the range to the landmark doesn't change. As a consequence, the position uncertainty will grow large in this direction, as there is no information to correct the errors that accumulate in the robot's odometric estimate. Now consider adding another landmark to this scenario, as seen in figure 12.

Figure 12: The robot is driving tangentially to the circle of the range measurement of one landmark, but not for the other.

Again, there is the issue of collinearity. But in addition to the overlapping areas of the annuli, there is the issue of the confidence area of the robot's position. This is the previously mentioned error bound that grows as the robot drives, due to process noise. For many process noise models, this area can be modeled fairly well as a growing ellipse centered on the moving robot.


The difference in noise magnitudes between directional (bearing) and along-path translation (speed) shapes the ellipse.


Figure 13: The direction in which the robot drives relative to the landmark affects the size of the overlapping area (in dashed red). In (a) the robot drives tangentially to the measurement circle, and in (b) it drives towards the landmark.

Typically, the along-path noise is greater and so the error and the ellipse grow faster in this direction. In such cases, driving along the line of the measurement (i.e. perpendicularly to the circle) produces a smaller overlapping area, illustrated in figure 13. Again, the smaller area implies a more certain position estimate.

2.5.3 Wireless sensor networks

For RO-SLAM in wireless sensor networks (WSNs), distance measurements between the landmarks, then referred to as nodes, are also available. Range measurements between nodes are necessary for solving the multilateration problem.²

² If the term multilateration is unfamiliar to the reader, it is similar to the more colloquially well-known triangulation, only here it is distances that are used rather than angles.


Figure 14: Range-only measurements between nodes in a wireless sensor network

However, without prior knowledge of any node locations, the multilateration problem cannot be solved. Even with some known locations, if the network connectivity is low, the equation is likely underconstrained and the resulting solution set large.

A typical erroneous result of an underconstrained equation is a mirror image of the actual solution in some subnet of nodes. It is fairly straightforward to intuit: measured ranges between a group of nodes would be the same if said group was flipped about some axis. Lacking additional information, the erroneous solution and the correct one are equally probable. The most obvious flip ambiguity is that of the whole network, as shown in figure 15.


Figure 15: In this illustration of a flipped network, both sets of node positions are valid given the measured distances.

But flip ambiguity can also occur for subnets or single nodes, as is shown in figure 16.



Figure 16: The position of the single node “E” within the network cannot be disambiguated, since both positions are valid given the exact same measured distances.

A general solution strategy to mitigate flip ambiguities of single nodes and subnets is to increase network rigidity, i.e. increasing the number of connections for each node. Take the network in figure 16 again, for example. An additional measurement from node B to node E would be enough to resolve the ambiguity in node E's position. The nodes are then more constrained in relation to each other by the additional distance measurements in the equation.

2.5.4 Articulation points

When the number of connections in a network is low, as in a sparse network, it is likely that some nodes are what are known in graph theory as articulation points (or cut vertices). In short, an articulation point is a vertex in a graph which, if removed, disconnects the graph.

Figure 17: Five nodes form a small network, with one articulation point in the middle, connecting the two nodes on the lower left to the two nodes in the upper right. Available range measurements are marked by the dashed lines.


An example of an articulation point in a network is shown in figure 17. If the node in the center is removed, the network is disconnected. While a single-point failure is of concern for communication, the main concern in the context of localization is the resulting poor constraints on the relative node positions. The range measurements between the nodes constrain the relative positions to some extent, but the occurrence of the articulation point allows the subnets of nodes to rotate around it. Figure 18 shows this, where a subnet of two nodes moves along the marked “axis” line. Of course, the subnet is also free to flip about this line within the constraints of the measurements.

Figure 18: Subnets are free to rotate around articulation points due to the low constraints resulting from such poor connectivity.

2.5.5 Anchor nodes

While increasing node connectivity reduces the risk of flip ambiguities for individual nodes and subnets, prior knowledge of some nodes is still necessary, as the entire network would otherwise be free to rotate and flip as we saw before. It is for this reason that anchors, sometimes referred to as beacons, are important. Anchors are nodes for which positions have been established to high accuracy, often by external means. The external means vary depending on availability and can be careful hand-placement utilizing analog or laser measurements in relation to the environment, or assigning position by GPS. These methods are not available for PHALANX, operating on another planet. However, I will describe some examples of feasible methods later in this section.

2.6 SLAM in PHALANX

The PHALANX system uses range-only, wireless sensor network SLAM. Recent work done on similar RO-WSN SLAM problems includes Menegatti et al. [17] and the extensive work done by Djugash et al. [10, 9, 8], where a mobile robot drives through an environment populated with sensor nodes.


In the PHALANX system, there are some additional distinctions that are important: crucially, the deployment of nodes and the possible process of anchorization of nodes.

Rovers operating today and in the past have implemented SLAM successfully using onboard equipment. The SLAM possible with PHALANX can serve to augment the current systems, offloading them while operating around deployed nodes. It would also enable the rover to navigate in the dark, for example in caves or craters, which poses difficulties for visually based SLAM. Conversely, existing SLAM systems can aid PHALANX SLAM, for example by decreasing process noise and correcting the rover pose estimate.

2.6.1 Data association

The issue of data association is trivial in wireless sensor networks, as all features are controlled and digital signatures can be sent with each measurement and communication. In each range measurement it is then clear which nodes are involved. This is one of the foremost arguments for using RF transceivers, as data association can be very complex for other forms of measurement.

2.6.2 Anchorization of nodes

Anchor nodes in PHALANX cannot be established beforehand, as all nodes are deployed from the rover. They must therefore be generated after their deployment. This becomes a unique issue, as most means by which anchors would be established are unavailable, such as manual placement and measurements or usage of GPS. However, I will suggest some tractable methods of achieving this, given circumstances and tools typical of a rover mission.

One way of anchorizing nodes involves visual observations, using the rover cameras. Upon seeing a node with the high-resolution rover cameras, a high-precision bearing measurement is in effect obtained. Several such measurements to distant landmarks in the environment can be used for triangulation of the rover, after which the bearing measurement and the range measurement can provide a precise and accurate node location estimate.

Another way of anchorizing nodes is to do so early in a node cluster. In that way, one can crucially avoid adding excessive process noise to the anchor placement. Lacking other alternatives, a node can be dropped right at the rover's position, making for a more precise deployment than the usual launch. If the rover includes a high-precision range finder such as a laser, the rover can perform a precise drive in a straight line to a second location where it can drop another anchor node. If such a high-precision range finder is not available, the original range-finding measurements can be used. The error of these measurements in the prototype units typically lies below 20 cm, according to the testing that has been done at the time of this writing.


2.6.3 Deployment of nodes

Deploying nodes remotely in a precise manner is difficult, largely owing to the unpredictability of the node landings. Still, the actual landing position can be roughly estimated as a sample of a distribution centered around the deployment target position. This initial information gain is highly valuable in range-only SLAM, where the solution set is large. In the case of EKF SLAM, the landing distribution is approximated as a Gaussian distribution, with the launch target as its mean. It is especially useful when the solutions are far from each other, as then even a large and uncertain landing area will be sufficient for that disambiguation. This is visualized in figure 19.

Figure 19: A rover deploys a sensor node that has two possible solutions (blue triangles) based on measurements to the rover and another previously deployed node (red triangle). The green area is the estimated landing zone based on the deployment target location.

The issue of collinearity enters here again, though: for two nodes measuring to a third node, where all three nodes are placed collinearly, the possible solutions lie nearer to each other. The solutions are then harder to disambiguate to the correct position. This is shown in figure 20.

Figure 20: Here is the same scenario as in figure 19, but here the possible solutions lie closer to each other.


3 Implementing the Simulation Model

In this section I will describe the model that is used in my simulations. The experiments are done using 2D simulations. Each simulation samples the involved noise distributions with its unique seed. For one scenario setup and configuration, many simulations are run. From all the simulations, time-series data are saved and processed to produce the result statistics. The use of the simulation program to generate results is illustrated in the diagram in figure 21.

Figure 21: A diagram describing how results are generated in an experiment, using the simulation software.

The simulations assume a coordinate system relative to the start position of the rover. This means that all the results are relative to the error of the initial rover pose estimate. For instance, if the initial rover pose estimate is assumed perfect, then the ground truth of the simulation corresponds to global “absolute” coordinates.

The system will be mostly modeled by an Extended Kalman Filter (EKF) as described in section 2. Two state vectors will be kept separately: the ground truth state vector that is updated with errors sampled from the noise models, and the state vector that is maintained by the EKF. The difference between the two state vectors gives rise to the error statistics.


3.1 The System Model

To begin with, recall the general non-linear state equations 12 and 13 from section 2, which we write again here:

x_k = f(x_{k−1}, u_k) + v_k    (21)

z_k = h(x_k) + w_k    (22)

where the subscript k denotes the discrete time instant. The system state vector x consists of the rover pose, i.e. the 2-dimensional coordinates and its bearing, and the 2-dimensional coordinates of all the sensor nodes. For a network of N sensor nodes, the state vector will therefore be a vector with 2N + 3 elements. The elements of the input vector u_k = [u_{1,k}, u_{2,k}] are simply speed and bearing rate, respectively. Since this is a range-based SLAM problem, the measurement vector z consists of measured ranges to all the other nodes including the rover, and is therefore of size N.

There are two state vectors, one belonging to the EKF and the other being the ground truth that is to be estimated, which is “unknown” to the rover. Associated with the former, there is the covariance matrix P_k that is also maintained by the EKF. This is a square matrix with 2N + 3 rows and columns, where the diagonal entries are the state variable variances and the off-diagonal entries are the covariances between the state variables.

3.1.1 The process model

In our case, the process function f is very simple, as the only change in the state vector due to our process is the rover motion; the sensor nodes are not moving. The function is

f(x_{k−1}, u_k, t_k) =
\begin{pmatrix}
x_{r,k−1} + τ_k u_{1,k} \cos θ_{r,k−1} \\
y_{r,k−1} + τ_k u_{1,k} \sin θ_{r,k−1} \\
θ_{r,k−1} + τ_k u_{2,k} \\
x_{1,k−1} \\
y_{1,k−1} \\
\vdots \\
x_{N,k−1} \\
y_{N,k−1}
\end{pmatrix}    (23)

where x_r, y_r and θ_r are the rover coordinates and bearing, and τ_k is the time step.

The ground truth state vector is updated with process noise added through the control vector u_k.
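A direct Python rendering of equation 23 could look as follows, assuming the state ordering [x_r, y_r, θ_r, x_1, y_1, ..., x_N, y_N] and u = (speed, bearing rate); this is a sketch rather than the thesis code.

import numpy as np

def process_function(x, u, tau):
    # only the rover pose changes; the sensor node coordinates are static
    x_next = x.copy()
    speed, bearing_rate = u
    x_next[0] += tau * speed * np.cos(x[2])  # rover x
    x_next[1] += tau * speed * np.sin(x[2])  # rover y
    x_next[2] += tau * bearing_rate          # rover bearing
    return x_next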


The wheel odometry error is modeled after a conservative rule-of-thumb figure³ for what rovers can achieve: that is, a 5 % error on the distance driven. This is implemented via the speed control noise in the process update. The speed input noise is governed by two sample processes. First, in each individual simulation, a mean error is sampled once; this error remains for the whole simulation. The mean error is a ratio with a distribution mean of 1.0 that is multiplied with the speed input value to simulate a consistent model error. The second is sampled at each process update, with a mean around the input speed after the previous mean-error sample.

The bearing rate input noise is a simplified model of an inertial measurement unit (IMU). Error figures of IMUs used in rover applications are around 1 ° h⁻¹ to 2 ° h⁻¹. This is modeled as a Gaussian distribution with a time-dependent, increasing variance.

3.1.2 Measurements and mutual localization

The measurement function h is a bit more complicated in a network where nodes can all measure to other nodes. The result is several measurement vectors and functions. I will write the kth measurement vector and measurement prediction function for node i as z_{i,k} and h_{i,k} respectively.

h_{i,k}(x_k) =
\begin{pmatrix}
\|\mathbf{x}_{i,k} − \mathbf{x}_{r,k}\| \\
\|\mathbf{x}_{i,k} − \mathbf{x}_{1,k}\| \\
\vdots \\
\|\mathbf{x}_{i,k} − \mathbf{x}_{j,k}\| \\
\vdots \\
\|\mathbf{x}_{i,k} − \mathbf{x}_{N,k}\|
\end{pmatrix}, \quad j \in \{r, 1, 2, \ldots, N\},\ j \neq i    (24)

Like before, the r subscript denotes rover quantities. The usage of multiple measurement vectors will be detailed later in this section when describing the implementation of the EKF.

The ranging noise is modeled as samples from a Gaussian distribution centered around the ground truth value. Due to the workings of RF ranging, the error doesn't change with distance for signals with good identification. Therefore the standard deviation is an absolute value. Results from the testing of prototype units yield an error figure of about 0.20 m, which is therefore used in the simulations as the standard deviation of the Gaussian distribution. For two nodes i and j, the ranging measurement becomes

z_{ij,k} \sim N\left(\|\mathbf{x}_{i,k} − \mathbf{x}_{j,k}\|,\ (0.20\ \mathrm{m})^2\right)    (25)

³ This rule of thumb is based on results from rover missions, obtained through private conversation with mentor Michael Dille.
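For illustration, the predicted ranges of equation 24 and a simulated noisy measurement per equation 25 could be produced as in the Python sketch below; the helper names and the array layout are assumptions made for the example, not the thesis implementation.

import numpy as np

rng = np.random.default_rng()

def predicted_ranges(positions, i):
    # positions: (N+1) x 2 array of 2D coordinates, with the rover in row 0;
    # returns the predicted ranges from member i to every other member (eq. 24)
    diffs = np.delete(positions, i, axis=0) - positions[i]
    return np.linalg.norm(diffs, axis=1)

def simulate_range(xi, xj, sigma=0.20):
    # one noisy range measurement between two members (eq. 25)
    return np.linalg.norm(xi - xj) + rng.normal(0.0, sigma)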


3.1.3 Process and measurement function Jacobians

The EKF linearizes the process and measurement functions by calculating their Jacobian matrices (see equation 14). First we look at the process function, f from equation 23.

F_k = \frac{\partial f}{\partial x} = I_{2N+3} +
\begin{pmatrix}
0 & 0 & −τ_k u_{1,k} \sin θ_{r,k−1} & 0 & \cdots & 0 \\
0 & 0 & τ_k u_{1,k} \cos θ_{r,k−1} & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0
\end{pmatrix}    (26)

This equation is used in calculating the prior estimate covariance matrix

P_k = F_k P_{k−1} F_k^T + Q_k    (27)

where Q_k is the process noise covariance matrix. However, currently we have the noise variances in terms of speed and bearing rate, u_k:

Q'_k = \begin{pmatrix} σ_{u_1}^2 & 0 \\ 0 & σ_{u_2}^2 \end{pmatrix}    (28)

We can solve this by calculating the Jacobian of f with respect to u:

G_k = \frac{\partial f}{\partial u} =
\begin{pmatrix}
τ_k \cos θ_{r,k−1} & 0 \\
τ_k \sin θ_{r,k−1} & 0 \\
0 & τ_k \\
0 & 0 \\
\vdots & \vdots \\
0 & 0
\end{pmatrix}    (29)

and then we get Q_k as

Q_k = G_k Q'_k G_k^T    (30)
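Numerically, equations 26, 29 and 30 could be assembled as in the following sketch, where N is the number of deployed nodes and the state ordering matches equation 23; the function name and arguments are chosen for the example only.

import numpy as np

def process_jacobians(theta_r, u1, tau, N, var_speed, var_bearing_rate):
    n = 2 * N + 3
    F = np.eye(n)                            # identity plus the two rover terms (eq. 26)
    F[0, 2] = -tau * u1 * np.sin(theta_r)
    F[1, 2] = tau * u1 * np.cos(theta_r)
    G = np.zeros((n, 2))                     # Jacobian with respect to the input (eq. 29)
    G[0, 0] = tau * np.cos(theta_r)
    G[1, 0] = tau * np.sin(theta_r)
    G[2, 1] = tau
    Q_prime = np.diag([var_speed, var_bearing_rate])
    Q = G @ Q_prime @ G.T                    # projected process noise covariance (eq. 30)
    return F, Q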

Next we look at the measurement function h from equation 24. As we saw previously, measurements between multiple pairs of nodes result in multiple measurement vectors. Therefore, there will be multiple linearizations. First we look at measurements taken from the rover to other nodes,

h_{r,k}(x_k) =
\begin{pmatrix}
\|\mathbf{x}_{r,k} − \mathbf{x}_{1,k}\| \\
\vdots \\
\|\mathbf{x}_{r,k} − \mathbf{x}_{i,k}\| \\
\vdots \\
\|\mathbf{x}_{r,k} − \mathbf{x}_{N,k}\|
\end{pmatrix}, \quad i = 1, \ldots, N


For convenience we write Δx_{ij} = x_{i,k} − x_{j,k}, Δy_{ij} = y_{i,k} − y_{j,k} and the distance d_{ij} = \|\mathbf{x}_{i,k} − \mathbf{x}_{j,k}\|. The Jacobian of h_{r,k} with respect to x is

H_{r,k} = \frac{\partial h_{r,k}}{\partial x_k} =
\begin{pmatrix}
\frac{Δx_{r1}}{d_{r1}} & \frac{Δy_{r1}}{d_{r1}} & 0 & −\frac{Δx_{r1}}{d_{r1}} & −\frac{Δy_{r1}}{d_{r1}} & 0 & 0 & \cdots & 0 & 0 \\
\frac{Δx_{r2}}{d_{r2}} & \frac{Δy_{r2}}{d_{r2}} & 0 & 0 & 0 & −\frac{Δx_{r2}}{d_{r2}} & −\frac{Δy_{r2}}{d_{r2}} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\frac{Δx_{rN}}{d_{rN}} & \frac{Δy_{rN}}{d_{rN}} & 0 & 0 & 0 & 0 & 0 & \cdots & −\frac{Δx_{rN}}{d_{rN}} & −\frac{Δy_{rN}}{d_{rN}}
\end{pmatrix}    (31)

The matrix is large but there is a clear pattern. Note that the third column, corresponding to the bearing state variable, is filled with zeros, as bearing does not enter the measurement function. For other nodes, a few columns are swapped.

H_{i,k} = \frac{\partial h_{i,k}}{\partial x_k} =
\begin{pmatrix}
−\frac{Δx_{ir}}{d_{ir}} & −\frac{Δy_{ir}}{d_{ir}} & 0 & 0 & 0 & \cdots & \frac{Δx_{ir}}{d_{ir}} & \frac{Δy_{ir}}{d_{ir}} & \cdots & 0 & 0 \\
0 & 0 & 0 & −\frac{Δx_{i1}}{d_{i1}} & −\frac{Δy_{i1}}{d_{i1}} & \cdots & \frac{Δx_{i1}}{d_{i1}} & \frac{Δy_{i1}}{d_{i1}} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & \frac{Δx_{iN}}{d_{iN}} & \frac{Δy_{iN}}{d_{iN}} & \cdots & −\frac{Δx_{iN}}{d_{iN}} & −\frac{Δy_{iN}}{d_{iN}}
\end{pmatrix}    (32)
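Each row of these Jacobians follows the same pattern, which makes them straightforward to build programmatically. The sketch below constructs one row for a range measurement between two members of the network, under the assumed state ordering [x_r, y_r, θ_r, x_1, y_1, ...]; it is illustrative, not the thesis implementation.

import numpy as np

def range_jacobian_row(x_state, idx_a, idx_b):
    # idx_a, idx_b: state indices of the x-coordinates of the two members
    # (0 for the rover, 3 + 2*(i - 1) for node i)
    row = np.zeros(len(x_state))
    delta = x_state[idx_a:idx_a + 2] - x_state[idx_b:idx_b + 2]
    d = np.linalg.norm(delta)
    row[idx_a:idx_a + 2] = delta / d    # derivative w.r.t. the first member's coordinates
    row[idx_b:idx_b + 2] = -delta / d   # derivative w.r.t. the second member's coordinates
    return row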

3.1.4 Missing measurements

In reality, not all nodes in the network may have range measurements available, due to an obscured line of sight, being out of range, or other connectivity issues. In such cases, the measurement functions and therefore the resulting Jacobians must be adjusted before calculating the Kalman gain K.

Implementation becomes easier by replacing columns in H_k in accordance with the missing measurement values in z_k. This can be done efficiently by matrix multiplication. First we create the measurement matrix without the rows for which the measurements are missing. If node 6 is measuring to nodes 1, 3 and 4 in a network of 7 nodes, for example, this matrix becomes

H^*_{6,k} =
\begin{pmatrix}
\frac{Δx_{6,1}}{d_{6,1}} & \frac{Δy_{6,1}}{d_{6,1}} & −\frac{Δx_{6,1}}{d_{6,1}} & −\frac{Δy_{6,1}}{d_{6,1}} & 0 & 0 & 0 & 0 \\
\frac{Δx_{6,3}}{d_{6,3}} & \frac{Δy_{6,3}}{d_{6,3}} & 0 & 0 & −\frac{Δx_{6,3}}{d_{6,3}} & −\frac{Δy_{6,3}}{d_{6,3}} & 0 & 0 \\
\frac{Δx_{6,4}}{d_{6,4}} & \frac{Δy_{6,4}}{d_{6,4}} & 0 & 0 & 0 & 0 & −\frac{Δx_{6,4}}{d_{6,4}} & −\frac{Δy_{6,4}}{d_{6,4}}
\end{pmatrix}


We then construct a matrix to adjust its shape and reorder the columns

M =
\begin{pmatrix}
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & I_2 & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & I_2 & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & I_2 & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & I_2 & \mathbf{0} & \mathbf{0} & \mathbf{0}
\end{pmatrix}

where each bold-font 0 is a 2-by-2 zero matrix and I_2 is the 2-by-2 identity matrix. We also have to remember the bearing state variable, and because bearing is not involved in the range measurements, we add a column of zeros to M. The resulting Jacobian H_k is then

H_{i,k} = H^*_{i,k} M    (33)

which has the shape of H_{i,k} in equation 32, but with the proper entries replaced by zeros, corresponding to the missing measurements.
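An equivalent way to handle missing measurements in code is simply to keep only the rows of the full Jacobian and the measurement vectors for which measurements actually arrived. The sketch below is an alternative to the column-reordering matrix described above and is an assumption made for illustration rather than the thesis implementation.

import numpy as np

def prune_missing(H_full, z_pred_full, z_meas_full, available):
    # 'available' is a boolean array with one entry per potential measurement;
    # rows for missing measurements are dropped before computing the Kalman gain
    idx = np.flatnonzero(available)
    return H_full[idx, :], z_pred_full[idx], z_meas_full[idx]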

3.1.5 Deployment of sensor nodes

The continual deployment of sensor nodes is a crucial component of PHALANX. As a new node is deployed, it has a target location and an actual location where it landed. Upon deployment, both the ground truth and the EKF state vector are augmented with the new node's two-dimensional coordinates added at their ends. Since the target is known to the rover, its coordinates can be used for the initial estimate. The ground truth differs from the target to simulate the imperfections of the launch, and is constructed from

\mathbf{x}_i = \mathbf{x}_r + ρ \begin{pmatrix} \cos(θ_r + β) \\ \sin(θ_r + β) \end{pmatrix}    (34)

where ρ and β are the launch range and azimuth polar coordinates sampled from Gaussian distributions, visualized in figure 22.

ρ \sim N(ρ_{target}, σ_ρ^2)    (35)

β \sim N(β_{target}, σ_β^2)    (36)

The initial estimate is

\mathbf{x}_i = \mathbf{x}_r + ρ_{target} \begin{pmatrix} \cos(θ_r + β_{target}) \\ \sin(θ_r + β_{target}) \end{pmatrix}    (37)

and the resulting initial covariance matrix becomes

P_i = P_r + J_l P_l J_l^T    (38)

where P_l is the covariance matrix

P_l = \begin{pmatrix} σ_ρ^2 & 0 \\ 0 & σ_β^2 \end{pmatrix}    (39)


Figure 22: Visualization of the remote deployment of a sensor node in the simulation, with azimuthal and radial noise.

and J_l is the Jacobian of the second term in equation 37 with respect to ρ_{target} and β_{target}:

J_l = \begin{pmatrix} \cos(θ_r + β_{target}) & −ρ_{target} \sin(θ_r + β_{target}) \\ \sin(θ_r + β_{target}) & ρ_{target} \cos(θ_r + β_{target}) \end{pmatrix}    (40)
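Putting equations 34 through 40 together, one deployment step could be sketched as below. The launch noise values σ_ρ and σ_β are illustrative placeholders, not PHALANX figures, and the argument names are chosen for the example.

import numpy as np

rng = np.random.default_rng()

def deploy_node(rover_truth, rover_est, theta_est, P_r, rho_target, beta_target,
                sigma_rho=1.0, sigma_beta=np.deg2rad(5.0)):
    # rover_truth: true rover pose (x, y, theta); rover_est: estimated rover position (x, y)
    rho = rng.normal(rho_target, sigma_rho)       # eqs. 35-36: sampled launch polar coordinates
    beta = rng.normal(beta_target, sigma_beta)
    truth = rover_truth[:2] + rho * np.array([np.cos(rover_truth[2] + beta),
                                              np.sin(rover_truth[2] + beta)])        # eq. 34
    estimate = rover_est + rho_target * np.array([np.cos(theta_est + beta_target),
                                                  np.sin(theta_est + beta_target)])  # eq. 37
    J_l = np.array([[np.cos(theta_est + beta_target), -rho_target * np.sin(theta_est + beta_target)],
                    [np.sin(theta_est + beta_target),  rho_target * np.cos(theta_est + beta_target)]])
    P_i = P_r + J_l @ np.diag([sigma_rho ** 2, sigma_beta ** 2]) @ J_l.T              # eqs. 38-40
    return truth, estimate, P_i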

3.1.6 Anchorization

As described in section 2, anchorization is the process where the position estimate of a node is improved as well as possible using external means. After this process, the node is called an anchor. In the simulation model, the position estimate of an anchorized node is sampled from a Gaussian distribution with the ground truth as the mean. Rover cameras make for high-precision bearing measurements. The Mars Science Laboratory (the Curiosity rover), for example, is equipped with cameras with a 5.1° field of view and 1200 pixels on the horizontal. This results in a resolution of about 0.004° per pixel. Using the ranging measurements, the error is approximately 20 cm, based on prototype testing. The standard deviation used in sampling the anchorization estimate is therefore conservatively set to σ_x = 0.20 m:

\[
x_{\text{anchor}} \sim \mathcal{N}\!\left(x_{\text{truth}},\; \begin{pmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_x^2 \end{pmatrix}\right)
\]
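
As a small sketch (Python/NumPy, hypothetical naming), the anchorization step in the simulation then amounts to:

```python
import numpy as np

rng = np.random.default_rng()

def anchorize(x_truth, sigma_x=0.20):
    """Sample the anchorized position estimate: Gaussian around the ground
    truth with an isotropic standard deviation sigma_x (0.20 m above)."""
    # In the filter, the node's covariance block would presumably also be
    # reset to sigma_x**2 * np.eye(2) (an assumption, not stated explicitly).
    return np.asarray(x_truth) + rng.normal(0.0, sigma_x, size=2)
```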

3.2 Evaluation metrics

In this section I will define the metrics that are used in the experiments: RMSE and a collinearity measure.

The root mean squared error (RMSE) is simple to understand, especially intuitive in the case of physical positions, and simple to implement. The definition of RMSE is spelled out in its name: the square root of the mean of the squares of the errors,

\[
\text{RMSE} = \sqrt{\frac{\sum_i^N e_i^2}{N}} \qquad (41)
\]

where ei is one error out of N in total. In this document, RMSE will be applied to the rover position and the sensor node positions. The error is the distance between the estimated position and the ground truth position. In the case of the rover position, the errors are evaluated at evenly spaced time points throughout each simulation, because rover localization is important for its entire drive. For sensor nodes, the errors are only evaluated at the final time. While accurate node position estimates, i.e. an accurate map, are important for good localization during the drive, the accuracy of the final map is more relevant when considering mission-level goals. For node RMSE, the errors of anchor nodes will not be considered, as these are by definition localized as accurately as possible, often by means external to PHALANX.
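
As a small sketch (Python/NumPy, hypothetical naming), equation 41 applied to position errors amounts to:

```python
import numpy as np

def position_rmse(estimates, truths):
    """RMSE per equation (41), where each error e_i is the Euclidean distance
    between an estimated position and its ground-truth position."""
    errors = np.linalg.norm(np.asarray(estimates) - np.asarray(truths), axis=1)
    return np.sqrt(np.mean(errors ** 2))

# Rover RMSE would be computed over evenly spaced time samples of the drive;
# node RMSE over the final estimates of the non-anchor nodes only.
```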

When applied to large sets of simulations, the distribution made up of all RMSE values is telling of the “robustness” of the simulated scenario configuration. For example, a large variance, where the percentiles are widely spaced from the median, suggests that the configuration is unreliable and prone to failure.

Collinearity for a set of positions describes how well a straight line approximates the positions. The measure of collinearity should therefore reflect this. Positions that are approximated well by a line have a higher collinearity value. There are two extremes of collinearity: the most collinear, where positions lie on a single line, and the least collinear, where positions lie evenly distributed on a circle.

I will now define the collinearity measure I use, which follows the above requirements. For a set of N positions x, calculate the covariance matrix

\[
Q = \frac{1}{N-1} \sum_i^N (x_i - \bar{x})(x_i - \bar{x})^T
\]

where x̄ is the mean of x and xi is the coordinate vector of the ith position. The covariance matrix Q is a square matrix, 2-by-2 for two-dimensional positions. We then calculate the eigenvalues λi and eigenvectors vi of this matrix. The eigenvectors are new orthogonal axes, or components, and the corresponding eigenvalues are the “strengths” of the components. The vectors with the largest eigenvalues are called the principal components of the sample set. To get our collinearity measure, I will use a ratio of the largest and smallest eigenvalues, call them λ+ and λ− respectively. We define the collinearity measure as follows:

\[
C = \frac{2\lambda_+}{\lambda_+ + \lambda_-} - 1 \qquad (42)
\]


From the above definition, we note that:

• it evaluates to 0 when the positions are least collinear (λ+ = λ−);

• it approaches 1 as the positions approach perfect collinearity (λ+ ≫ λ−);

• and since a covariance matrix is positive semidefinite, its eigenvalues are non-negative and λ+ ≥ λ− ≥ 0 holds true, thus C ∈ [0, 1].
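
A compact sketch of this measure (Python/NumPy, my own naming; the degenerate case where all positions coincide is not handled) is:

```python
import numpy as np

def collinearity(positions):
    """Collinearity C of a set of 2D positions, equation (42): 0 for the least
    collinear sets and approaching 1 as the positions approach a line."""
    X = np.asarray(positions, dtype=float)
    Q = np.cov(X, rowvar=False)              # 2-by-2 sample covariance, 1/(N-1)
    lam = np.linalg.eigvalsh(Q)              # ascending; non-negative since Q is PSD
    lam_minus, lam_plus = lam[0], lam[-1]
    return 2.0 * lam_plus / (lam_plus + lam_minus) - 1.0

# e.g. collinearity([[0, 0], [1, 0.1], [2, -0.1], [3, 0]]) is close to 1,
# while four points evenly spaced on a circle give exactly 0.
```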

4 Example scenarios

Before we head into the experiments and the results, we should take a moment to think about what the typical use-case scenario of PHALANX might look like. There are some constraints that will shape missions. For a start, the maximum communication range between nodes in the network plays a big role. In this document, this range is set to 30 m based on prototype units. Another factor constraining network size is the number of nodes that will be carried by the rover. Mass and volume are in short supply in space missions, and the final size and mass of a sensor node unit is yet to be known. Using a rough estimate, the simulations and case studies in the next section will use a maximum of around 15 nodes. Depending on mission goals, several smaller networks at different locations might be of better use than a single large network; as such, there is no lower bound on the number of nodes per scenario.

As for the placement of individual sensor nodes, they will often not be located on the planned rover trajectory, since a major function they fulfill is providing access to places that the rover has difficulty reaching. Based on an existing prototype launcher, the node deployment range is set to have an upper bound of 40 m.


Figure 23: Two examples of scenario configurations are displayed. The node placements are shown as blue crosses, and the rover path as the dashed black line. In (a) the distance measurement ranges are also shown as dashed blue circles.

Figure 23 shows two example scenarios, with the planned node placements and the planned rover trajectory.

5 Results

In this section we use the simulation program to isolate and investigate phenomena that are significant for performance, by varying node placement, number of anchors, rover trajectory and connectivity. It may be helpful to review the diagram in figure 21 in section 3 to understand how experiments translate into simulations. Each simulation configuration is run on the order of 1000 times.

To begin with, we run an experiment with a few likely application scenarios without considering the concepts introduced in section 2. Next, we investigate those concepts in turn. The experiments are designed so that, as far as possible, only the parameter under investigation is varied.

5.1 Baseline

To appreciate the results in this document, it is helpful to run a broad experiment before delving into specifics. We define a few different paths and node placements without anchor nodes. The paths and node placements are designed to resemble typical application scenarios. The nodes are therefore deployed off the rover path, to locations where they will be able to communicate with the rover for at least a small portion of the drive and where they are not disconnected from the rest of the node network. The nodes are also placed so that there are no articulation points in the network graph.

The number of nodes used is varied from zero (thus simulating a dead-reckoning drive of the path) to 15. The full paths are 100 m to 200 m long. For each number of nodes, the paths are adjusted in length so that the rover is always in communication range of at least one node.

Figure 24: RMSE for rover position estimate

Figure 25: RMSE for final sensor position estimates


As can be seen in figures 24 and 25, the errors are quite large, especially considering that the mean error of the ranging measurements is 20 cm. Given our previous discussion of anchor nodes, this is to be expected.

Next we take a look at anchorizing varying numbers of nodes for the same paths and node placements.

5.2 Anchorization

In the previous section we saw results for scenarios where we didn't anchorize any nodes. This serves as a good baseline to compare against scenarios where we do anchorize some nodes. We therefore run the exact same scenarios, but anchorize an increasing number of nodes as the rover comes within range. We plot a few of the different node counts against a varying number of anchorized nodes.

Figure 26: RMSE for rover position estimate, split into many plots for ease of viewing

From the resulting figures 26 and 27, we can observe that the errors are significantly lower when anchors are used. We can also see that there is a sharp decrease in error, after which diminishing returns set in quickly.

Figure 27: RMSE for final sensor position estimates, split up for ease of viewing

If we plot the same data, but with the number of nodes on the horizontal axis, we can see in figure 28 that the error is increasing slightly as the node count goes up. The same trend is seen in the plots where no anchors were used. This might seem strange, but it is to be expected. While node density plays a part in decreasing the error, the anchor-to-node (A/N) ratio is more significant. That ratio is of course decreasing as nodes are added and the number of anchors remains constant. This example hints at the difficulty of isolating and varying single parameters influencing performance.

Figure 28: RMSE for sensor position estimates, for 1, 2 and 3 anchors, plotted against number of nodes

If we again reorganize the data, but keep the A/N ratio constant at a few different values, as displayed in figure 29, we can see that increasing node density does in fact produce a downward trend in the error, however slight.


Figure 29: RMSE for constant anchor-to-node ratios for (a) rover estimate and (b) final node position estimates

So far, we have disregarded the effects of node placement. In the next experiment we look at node placement collinearity.

5.3 Collinear node-placements

As discussed back in section 2, collinearity is generally a point of concern in range-only localization. While complete node collinearity is unlikely to occur in practice, it is interesting to investigate to what degree it impacts localization and, conversely, to what degree a conscious decision to avoid collinear node placement is beneficial.


We define several pairs of node placements and path trajectories. We group up the different pairs according to their degree of node collinearity. All scenarios are within a 100 m by 100 m area. In each node placement, two nodes are anchor nodes.

Figure 30: RMSE for rover position estimate for node placements of varying collinearity

Figure 31: RMSE for final node position estimates for node placements of varying collinearity

The results are presented in figures 30 and 31. We see that for varying collinearity, the RMSE is largely unaffected, with the exception of when the nodes are completely collinear.


In this experiment, only sensor node collinearity is investigated, i.e. the rover is not taken into account. However, the rover takes part in the network localization process and is in effect a mobile node. Next we will therefore look at the impact of rover trajectories on these results.

5.4 Rover trajectory and collinearity

Because the rover is a mobile node in the network, the path of the rover is part of the network geometry. Looking at the simulations in the collinear node-placement experiments, we want to account for the rover position. We do this by computing the mean of the momentary collinearities of all the placed nodes, including the rover, throughout the rover drive.
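
As a rough sketch (Python/NumPy, hypothetical naming, and assuming for simplicity that all nodes are already deployed), this amounts to augmenting the node set with the rover position at each time step and averaging the resulting collinearity values:

```python
import numpy as np

def collinearity(positions):
    """Equation (42) applied to a set of 2D positions (see section 3.2)."""
    Q = np.cov(np.asarray(positions, dtype=float), rowvar=False)
    lam = np.linalg.eigvalsh(Q)
    return 2.0 * lam[-1] / (lam[-1] + lam[0]) - 1.0

def mean_momentary_collinearity(node_positions, rover_path):
    """Mean over the drive of the collinearity of all placed nodes with the
    momentary rover position included as an extra point."""
    node_positions = np.asarray(node_positions, dtype=float)
    values = [collinearity(np.vstack([node_positions, rover_xy]))
              for rover_xy in np.asarray(rover_path, dtype=float)]
    return float(np.mean(values))
```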

Figure 32: RMSE for rover position estimate for node placements of varying mean momentary node collinearity


Figure 33: RMSE for final node position estimates for node placements of varying mean momentary node collinearity

The results are displayed in figures 32 and 33. We see that the results are slightly more sensitive above a collinearity of 0.7.

5.5 General trajectory strategies

Next, again because the rover path is part of the network geometry, and because of the various reasons outlined in section 2, the rover trajectory can be expected to affect the localization results. To investigate this, we define 8 node placements, and for these node placements we construct different sets of trajectories for the rover to follow. All trajectories in a set are made following the same general strategy for each node placement. As an example, figure 34 shows trajectories of two strategies for the same node placement. The strategies are numbered and summarized here:

# 0-1: Greedy, shortest path to range to each node;

# 2-3: decrease local collinearity between the rover and the ranging nodes;

# 4-9: avoid driving tangentially to the measurement circle of the ranging node;

# 10-11: minimize local dilution of precision.

The sets of trajectories sharing a strategy in the above list are very similar, but slightly offset from each other to test reproducibility.


Figure 34: Two different strategies for the same node placement: (a) is the simplest, greedy path strategy and (b) reduces the collinearity of measurement members.

The results are shown in figures 35 and 36.

Figure 35: Final sensor location estimate RMSE for different rover trajectory strategies


Figure 36: Rover location estimate RMSE for different rover trajectory strategies

The results suggest that none of the high-level heuristics, or strategies, used in planning the path lead to consistently better performance. Trajectory sets made using the same strategy produce varying results, mostly in the form of a higher 75th percentile error, i.e. larger tails on the error distributions. This suggests that the results are sensitive to the rover path, but that better paths are more likely to be found using more sophisticated numerical methods than with high-level heuristics.

5.6 Articulation points

As mentioned back in section 2, articulation points in a network are features that can impact performance negatively. To investigate their impact, we create three groups of scenarios. The first group contains scenarios with no articulation points. The second group is a copy of the first group, only here, one node in each scenario is moved slightly to produce exactly one articulation point. In the third group, similarly, nodes are moved slightly to produce a total of two articulation points. By keeping the groups very similar to one another, differences in the result are mostly due to the presence of the articulation points. The node quantity varies from 3 to 12.
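
To make concrete what is being counted, the sketch below (Python with the networkx package, my own naming, using the 30 m communication range from section 4; not the thesis code) builds the connectivity graph implied by a node placement and counts its articulation points:

```python
import itertools

import networkx as nx
import numpy as np

def count_articulation_points(node_positions, comm_range=30.0):
    """Connect nodes that are within communication range of each other and
    count the articulation points (cut vertices) of the resulting graph."""
    positions = np.asarray(node_positions, dtype=float)
    G = nx.Graph()
    G.add_nodes_from(range(len(positions)))
    for i, j in itertools.combinations(range(len(positions)), 2):
        if np.linalg.norm(positions[i] - positions[j]) <= comm_range:
            G.add_edge(i, j)
    return sum(1 for _ in nx.articulation_points(G))
```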

As seen in figure 37, the presence of articulation points doesn't impact the median error much. What is interesting, however, is the increase in the variance of the results, as seen in the percentiles.


Figure 37: RMSE for final node position estimates for node placements with different numbers of articulation points

6 Discussion

In this section I will conclude this thesis by discussing the results and the limitations of the simulation program. Finally, I will describe future work for SLAM in PHALANX.

6.1 Discussion of results

The simulation results suggest that SLAM in the PHALANX system is quite insensitive to the investigated parameters.

The parameter that proved most impactful is the number of anchor nodes used in a network. Although the true parameter might be the anchor-to-node ratio, the quantity of nodes deployed in typical PHALANX scenarios is relatively small. This is because all nodes that are to be used in a full rover mission need to be transported to the destination, either along with the rover or potentially separately. In either case this places harsh limits on the total number of nodes. From this, we can surmise that for any number of nodes used in a scenario, anchorizing one or two nodes decreases the root-mean-squared localization error significantly for both the sensors and the rover itself. Beyond two anchor nodes, diminishing returns set in quickly. From a mission resources perspective, the fewer nodes to anchorize, the better. Anchorizing a node by external sensing, such as by using the rover cameras, takes valuable mission time, whether done manually by mission operators or even automatically. This time could otherwise be spent on scientific exploration or engineering analysis.


For other parameters, such as node connectivity (presence of articulation points) and node collinearity, the system is much less sensitive. This is very desirable, as it leaves room for mission operators and planners to prioritize other aspects of the rover mission more flexibly. However, it is important to consider the tails of the error distributions, i.e. the outer percentiles. Narrow, low error distributions are desired, but the variance increases with collinearity as well as with the number of articulation points. This owes to outlier events producing harsher results in more sensitive scenarios. As an example, in a network with poor node connectivity, a node deployment that misses the target location by a wide margin is more likely to disconnect the network. In a network with better node connectivity, the same event matters less, since many nodes are likely to be redundantly connected.

The fact that in PHALANX the nodes are deployed by the rover itself is likely the major contributor to the insensitivity of the system. Many of the issues highlighted in this document are important for range-only and network-based SLAM systems. Typically in network-based systems, the positions of most nodes are completely unknown. In PHALANX, a good initial guess is supplied to the estimator, because we can assume that a launched node will land at least in the proximity of its target. This aspect is crucial, because in range-based localization, flip ambiguities are common and rather hard to solve. The candidate solutions in such ambiguities are well separated from each other, so that even an approximate initial guess is often sufficient for a unimodal estimator such as the Extended Kalman Filter to converge on the correct position.

Concerning the high-level rover strategies that were tested, the results were inconclusive, as no strategy consistently outperformed the others. The appeal of high-level strategies is that, if such a heuristic were found, it would be easy for a mission planner to apply. The results do suggest that there is variation, however, and therefore there is value in finding an optimal path. Since it might not be found with these kinds of high-level strategies, more intricate numerical methods are likely necessary; their development was not pursued in this work.

6.2 Limitations of the simulation program

There are a few limitations in the simulation program that need to be discussed. The simulations assume two-dimensional space. There are several reasons for this choice, the first being scope of work. The estimator on its own lends itself to easy extension: simply add the additional coordinate in the state vector for each node and the rover. The map and movement models are more complicated to extend: consider, for example, the implications of slight slopes for node landings. To compensate for this aspect, I use the noise models to absorb such effects. Additionally, in many cases, elevation changes for a rover are small relative to the horizontal scale, so that the projection from the three-dimensional map to the two-dimensional one is justified. For other peculiar map features, it is difficult to produce meaningful general results regarding the system: simulating a rare, distinctive scenario gives results for that kind of scenario, but those results are likely not as applicable to other distinctive scenarios. These kinds of case studies are interesting and useful to simulate, particularly in preparation for a known target environment, but they do not yield results as generalizable as simulations of less distinctive maps.

The noise in the range measurements is currently modeled as Gaussian, as described in section 3.1.2. In reality, RF range measurements are susceptible to multipath error, where the receiver erroneously identifies indirect signals as direct signals and thus registers a longer distance. This has not been taken into account in the simulations, due to prioritizing more important aspects. The current prototype units use ultra-wideband frequencies that are less prone to multipath effects [25].

6.3 Future work

As for future work for SLAM in PHALANX, there are many interesting paths to explore.

Most important in the near future is to conduct field testing of the system. This will be necessary to support or refute the simulation results and will give valuable information for improving and expanding the simulation program. An example is to implement more accurate noise models for the distance measurements of the RF transceivers; there is much research in the literature on modeling UWB distance measurement error [1, 4, 15].

Another aspect to investigate concerns the estimator. The estimator used in this thesis is the Extended Kalman Filter. There are many other algorithms available, and which one is best for PHALANX is not clear, both in terms of computational resources and in terms of localization performance. Simulation software such as that implemented in this work can be used to test such candidates.

As suggested in the discussion of the results, the development of a path planning tool might be valuable or even necessary to find routes that are less error-prone.

It is of particular interest to be able to predict results in specific scenarios, as mentioned in the section about limitations. This requires that the simulations implement three-dimensional space for target environments where such maps can be at least approximated, for instance using planetary surface satellite imagery and elevation maps. This would add great value not only for showing the capabilities of PHALANX SLAM, but also for its future application. It could provide mission operators and planners with a valuable tool to prepare for specific planned and predicted scenarios. If a certain placement of nodes and a certain rover trajectory causes particular sensitivity to small errors, it might be prudent to make some adjustments, such as adding another node to increase connectivity or planning a different rover path. A sophisticated tool could even suggest such adjustments, pointing out configurations for which the predicted performance and sensitivity are good.

Another interesting aspect is to include global batch optimizations run intermittently throughout the scenario. That is, to keep a history of estimates and measurements and, at some points, spend the extra resources to improve the estimates obtained from the running online estimator. This could be done periodically, but another strategy could be to perform these optimizations when encountering frequent measurement outliers or other such irregularities.


References

[1] B. Alavi and K. Pahlavan. “Modeling of the TOA-Based Distance Measurement Error Using UWB Indoor Radio Measurements”. In: IEEE Communications Letters 10 (May 2006), pp. 275–277. doi: 10.1109/LCOMM.2006.1613745.

[2] J. L. Callas. Mars Exploration Rover Spirit end of mission report. Tech. rep. Pasadena, California: Jet Propulsion Laboratory, National Aeronautics and Space Administration, 2015.

[3] L. Chappaz et al. “Evidence of large empty lava tubes on the Moon using GRAIL gravity”. In: Geophysical Research Letters. Vol. 44. 2017.

[4] Y. K. Cho, J. H. Youn, and D. Martinez. “Error modeling for an untethered ultra-wideband system for construction indoor asset tracking”. In: Automation in Construction 19.1 (2010), pp. 43–54.

[5] D. M. Cole and P. M. Newman. “Using laser range data for 3D SLAM in outdoor environments”. In: IEEE International Conference on Robotics and Automation. IEEE. 2006, pp. 1556–1563.

[6] G. E. Cushing et al. “THEMIS observes possible cave skylights on Mars”. In: Geophysical Research Letters. Vol. 34. 2007.

[7] M. Dille et al. “PHALANX: Expendable projectile sensor networks for planetary exploration”. In: IEEE Aerospace Conference. 2020.

[8] J. Djugash and S. Singh. “Motion-aided network SLAM with range”. In: 31.5 (2012), pp. 604–625.

[9] J. Djugash, S. Singh, and B. Grocholsky. “Decentralized Mapping of Robot-Aided Sensor Networks”. In: 2008.

[10] J. Djugash et al. “Range-Only SLAM for Robots Operating Cooperatively with Sensor Networks”. In: IEEE Journal of Robotics and Automation. 2006.

[11] Abigail Fraeman. Sols 2799-2800: We’ve Gone About as Far (Up ’Bloodstone Hill’) as We Can Go. https://mars.nasa.gov/msl/mission-updates/8697/sols-2799-2800-weve-gone-about-as-far-up-bloodstone-hill-as-we-can-go/. June 2020.

[12] S. Gezici et al. “Localization via Ultra-Wideband Radios”. In: IEEE Signal Processing Magazine. Vol. 22. 4. 2005.

[13] J. P. Grotzinger et al. “Mars Science Laboratory Mission and Science Investigation”. In: Space Science Reviews. Vol. 170. 2012.

[14] M. Kwak and J. Chong. “A new double two-way ranging algorithm for ranging system”. In: IEEE International Conference on Network Infrastructure and Digital Content. Sept. 2010.

[15] A. Ledergerber and R. D’Andrea. “Ultra-wideband range measurement model with Gaussian processes”. In: IEEE Conference on Control Technology and Applications (CCTA). IEEE. 2017, pp. 1929–1934.


[16] Peter S. Maybeck. Stochastic models, estimation, and control. Vol. 1. Chapter 1. Academic Press, 1979.

[17] E. Menegatti et al. “Range-only SLAM with a mobile robot and a wireless sensor networks”. In: IEEE International Conference on Robotics and Automation. IEEE. 2009, pp. 8–14.

[18] W. S. Murphy Jr. and W. Hereman. Determination of a position in three dimensions using trilateration and approximate distances. Tech. rep. Golden, Colorado: Department of Mathematical and Computer Sciences, Colorado School of Mines, Nov. 1999.

[19] S. Singer and J. Nelder. “Nelder-Mead algorithm”. In: Scholarpedia 4.7 (2009). revision #91557, p. 2928. doi: 10.4249/scholarpedia.2928.

[20] T. Taketomi, H. Uchiyama, and S. Ikeda. “Visual SLAM algorithms: a survey from 2010 to 2016”. In: IPSJ Transactions on Computer Vision and Applications 9.16 (2017). doi: 10.1186/s41074-017-0027-2.

[21] S. Thrun and M. Montemerlo. “The graph SLAM algorithm with applications to large-scale mapping of urban structures”. In: The International Journal of Robotics Research. 2006.

[22] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. The MIT Press, 2006.

[23] R. V. Wagner and M. S. Robinson. “Distribution, formation mechanisms, and significance of lunar pits”. In: Icarus. Vol. 237. 2014.

[24] G. Welch and G. Bishop. An Introduction to the Kalman Filter. Tech. rep. Department of Computer Science, University of North Carolina at Chapel Hill, 1997.

[25] M. Z. Win and R. A. Scholtz. “On the robustness of ultra-wide bandwidth signals in dense multipath environments”. In: IEEE Communications Letters 2.2 (1998), pp. 51–53.
