
Localization and tracking using an heterogeneous sensor network

JOSÉ ARAÚJO

Master's Degree Project
Stockholm, Sweden, July 2008


Contents

1 Introduction
  1.1 Motivation
  1.2 Problem Formulation
  1.3 Contributions
  1.4 Outline

2 Localization and tracking using an heterogeneous sensor network
  2.1 Overview of techniques and methods to determine location
    2.1.1 Techniques
    2.1.2 Methods
  2.2 Dynamical Models
    2.2.1 Model 1
    2.2.2 Model 2
  2.3 Kalman Filter
  2.4 Proposed Filter
  2.5 Scheduling
    2.5.1 Offline scheduler
    2.5.2 Covariance-Based scheduler

3 Experimental set-up
  3.1 Hardware
    3.1.1 Wireless Sensor Network Testbed
    3.1.2 Ultrasound sensor
    3.1.3 Vision based system
    3.1.4 Mobile agent
    3.1.5 Fusion Center
  3.2 Software
    3.2.1 Ultrasound system
    3.2.2 Vision based system
    3.2.3 Fusion Center

4 Experimental validation
  4.1 Ultrasound System
    4.1.1 Straight Line
    4.1.2 Localization
  4.2 Vision Based System
    4.2.1 Localization
  4.3 Fusion center

5 Conclusions and Future Work

6 Appendix


List of Figures

1.1 The switched sensor problem that is considered. How should one switch between two heterogeneous sensors to get a good estimate x̂.

2.1 The trilateration method for 2D accurate measurement
2.2 Mobile agent moving according to model 1 with Gaussian white process noise with zero mean and variance of 10 cm/step. Starting position at (0,0).
2.3 Mobile agent moving according to model 2 with Gaussian white process noise with zero mean and variance of 1 cm²/step. Starting position at (0,0) with zero velocity.
2.4 Linear Gaussian state space model.
2.5 The function p∗average(N) for two different delays d using model 1.
2.6 The function P∗(k, N) for different periods N when delay d = 7 using model 1.
2.7 The performance cost VT as a function of period N for two different delays d using model 1.
2.8 The function p∗average(N) for two different delays d using model 2.
2.9 The function P∗(k, N) for different periods N when delay d = 7 using model 2.
2.10 The performance cost VT as a function of period N for two different delays d using model 2.
2.11 Tree search example for covariance-based switching with maxD = 2.
2.12 Flow diagram of the covariance-based switching algorithm.
2.13 The function paverage(k, maxD) and the usage of the high-quality sensor for the covariance-based scheduling when delay d = 3 and model 1.
2.14 The function paverage(k, maxD) and the usage of the high-quality sensor for the covariance-based scheduling when delay d = 7 and model 1.
2.15 The function paverage(k, maxD) and the usage of the high-quality sensor for the covariance-based scheduling when delay d = 3 and model 2.
2.16 The performance function V∗(k, maxD) and the usage of the high-quality sensor for the covariance-based scheduling when delay d = 3 and model 2.
2.17 The function paverage(k, maxD) and the usage of the high-quality sensor for the covariance-based scheduling when delay d = 7 and model 2.
2.18 The performance function V∗(k, maxD) and the usage of the high-quality sensor for the covariance-based scheduling when delay d = 7 and model 2.

3.1 Overview of the experimental set-up and operation based on the wireless sensor network testbed.
3.2 Standard KTH wireless sensor network testbed power supply
3.3 Testbed deployment on the 6th floor of the Q building at KTH.
3.4 Picture shows network 1 of the testbed in the corridor of the 6th floor of the Q building at KTH.
3.5 Picture shows the wireless node Tmote Sky in casing with ultrasound sensor.
3.6 Ultrasound transmitter cluster top view and side view.
3.7 Logitech fusion web-camera.
3.8 Mobile agent. RC electric car controlled by a human operator.
3.9 Interaction between ultrasound receiver and transmitter modules.
3.10 Transmitter flow diagram
3.11 Receiver flow diagram
3.12 Image processing flow diagram
3.13 Acquired image with mobile agent. Image taken in the corridor of the 6th floor of the Q building at KTH.
3.14 Cut image view. Binary image.
3.15 Mobile agent detection. Star shows the detected centroid of the mobile agent.
3.16 Vision based localization system with non-linearities.
3.17 Data flow and treatment including the sensors and processing unit.
3.18 Flow diagram of the algorithm implemented on the fusion center processing unit.
3.19 Graphical user interface created for the user to be able to watch the position of the robot with spatial references.

4.1 Straight line average measurements. Real distance (cm) vs. measured distance (cm)
4.2 Straight line average error - linear interpolation with 14 distance points. Error (cm) vs. real distance (cm)
4.3 The average error (cm) in four different positions for four given receiver nodes.
4.4 Ultrasound system performance when no outlier rejection method is applied. Error and position values for real position X = 50.
4.5 Ultrasound system performance when an outlier rejection method is applied. Error and position values for real position X = 50.
4.6 Ultrasound system performance when an outlier rejection method and the model 1 estimator are applied. Error and position values for real position X = 50.
4.7 Ultrasound system performance when an outlier rejection method and the model 2 estimator are applied. Error and position values for real position X = 50.
4.8 Localization performance of the vision based system for a steady mobile agent at (50,200). Outlier rejection method and model 1 estimation performed.
4.9 Localization performance of the vision based system for a steady mobile agent at (50,200). Outlier rejection method and model 2 estimation performed for W = 0.003.
4.10 Localization performance of the vision based system for a steady mobile agent at (50,200). Outlier rejection method and model 2 estimation performed for W = 0.09.
4.11 Estimated and raw position quadratic errors for the offline scheduler with N and the covariance-based scheduler, for model 1 and model 2.
4.12 Tested mobile agent trajectory for optimal high-quality sensor switching N = 6. Real position, estimated position and position given by raw sensor measurements over the X coordinate when performing 29 measurements.

6.1 Receiver circuit
6.2 Transmitter circuit
6.3 Wireless sensor network testbeds - a survey
6.4 System breakdown structure
6.5 Floor plan - KTH Q 6th (SSS)


List of Tables

2.1 Maximum values of process noise W for a given delay d for each model.
2.2 Optimal sensor scheduling approach for model 1 and 2 considering different communication cost λ.

4.1 Optimal periodic high-quality sensor switching N∗ of VE for model 1 and 2 considering different communication cost λ.
4.2 Optimal periodic high-quality sensor switching N∗ of VT for model 1 and 2 considering different communication cost λ.


Abstract

Taking resource limitations into account in the design of wireless sensor networks is important in many emerging applications. The need to minimize the communication and power consumption of individual nodes and other units poses interesting challenges for estimation and control strategies. This document describes the design, implementation, obtained results and conclusions of a cooperative localization and tracking system based on two types of sensors. Practical investigations are made to reach the optimal sensor scheduling, based on offline and covariance-based scheduler approaches. One sensor, based on ultrasound, gives low-quality measurements; the other is a web-camera with high precision but delayed results. The ultrasound sensor is connected to wireless sensor nodes which are part of the KTH Wireless Sensor Network Testbed, and the web-camera is connected to a data processing unit placed in the same area. An overview of localization techniques and solutions is presented. The design, development, and implementation of the KTH Wireless Sensor Network Testbed are also discussed. The software implemented on the system is fully detailed, as well as the necessary hardware. A presentation of the filtering methods used to perform localization and tracking is put forward. Analysis and conclusions of all the different approaches used are discussed. Guidelines for future work are also proposed.


Acknowledgments

First, I would like to thank my supervisors Karl H. Johansson and Joao Borges de Sousa for giving me the opportunity to develop this work at KTH. I am truly grateful for all your guidance and advice, which allowed me to accomplish this thesis, and also for letting me learn on my own.

Henrik Sandberg, you deserve many thanks for all your support, your comments on the thesis and your availability to answer my numerous doubts every day. Without your support I would not have been able to perform all this work. I also have to thank Maben Rabi for answering all the everyday, out-of-office-hours and weekend questions I posed, which helped me move further.

I would also like to thank my friend, roommate and partner, Bernardo Maciel, for the once-in-a-lifetime experience of living and working together this last year. His help and advice are greatly acknowledged.

I would also like to thank my co-supervisors Erik Henriksson and Pan Gun Park for their unconditional support whenever I needed anything. Thanks to Magnus Lindhe for all your support and advice in developing the ultrasound system. I have to thank Prof. Ana Mendonca, Patric Jensfelt and Sobhan Naderi for helping me with the image processing issues. Chithrupa Ramesh, thank you very much for all the discussions we had that helped me perform a better job. I would like to thank my special colleagues PG Di Marco, Cesare Carreti and Stefan Gisler for their help, contributions and advice for this thesis.

I also have to note the tremendous help that my ERASMUS colleagues gave me during this year; my life will never be the same without you.

Tome Costa, Tiago Nunes, Nuno Medeiros, Jose Melo, Jose Barbosa andVitor Torres, you know that what I am now, I owe it to you also.

Last but not least, I would like to thank my parents, Henrique and Arnaldina, and my brother Luis for all your support.


Chapter 1

Introduction

1.1 Motivation

The resource limitations of wireless sensor networks can be seen as an important issue in the design of emerging applications [1], [2]. The need to minimize the communication and power consumption of individual nodes and other units poses interesting challenges for estimation and control strategies [3]. In this work we consider the novel networked estimation problem formulated in [8], in which two types of sensors with different resource demands share the same or different networks.

The problem of localization and tracking of a mobile agent using observations from two types of sensors can then be seen as the motivation for this work. The sensors communicate their data to a central node that performs the processing. The first type of sensor used is an ultrasound sensor with low-quality measurements, small processing delay and a light communication cost. The second type is a camera with high-quality measurements, but large processing delay and high communication cost. Scheduling both sensors so as to obtain the best possible estimate of the mobile agent's position while at the same time reducing communication costs and power consumption poses a significant challenge. In this work, design trade-offs are therefore made between estimation performance, processing delay and communication cost for a sensor scheduling problem with heterogeneous sensors. We show how optimal sensor schedules, periodic or not, can be found by means of a search over a finite set. As noted in [8], sensor selection problems have been studied extensively, e.g., [4]. The approach in [8] is novel in that it incorporates communication cost in the cost criterion together with processing delays. See [5] for another recently studied problem. The motivation for this thesis comes, then, from the necessity to perform the experimental


validation and further development of the work presented in [8], and also from the need to build an experimental set-up at KTH for testing all the theories developed under the topic of Wireless Sensor Networks.

1.2 Problem Formulation

The problems studied in this thesis are:

• Perform localization of a mobile agent in indoor environments using heterogeneous sensors within a wireless sensor network.

• Perform the scheduling of heterogeneous sensors.

• Design, development and implementation of a wireless sensor network testbed.

As can be seen, when heterogeneous sensors are used, their scheduling poses interesting challenges for the design of estimation, control and power management strategies. Design trade-offs between estimation performance, processing delay and communication cost have to be taken into account. The purpose of this thesis is thus to propose an estimator that improves the accuracy of the measurements taken by the sensors, and to develop proper scheduling models for the sensors. It is also the intention of the author to show how the proposed solutions perform in a real experimental environment, a wireless sensor network testbed.

This situation is illustrated in Fig. 1.1.

1.3 Contributions

The main contributions of this thesis are:

• Design, development, implementation and experimental validation of a wireless sensor network testbed at KTH, in joint work with another MSc student, Bernardo Maciel.

• Implementation and experimental validation of the paper "Estimation over heterogeneous sensor networks" by Henrik Sandberg et al., submitted to the 47th IEEE Conference on Decision and Control [8].

• Design, development, implementation and experimental validation of a covariance-based sensor scheduler (posed as a further development in [8]).


[Figure 1.1 (block diagram): white noise wk drives the plant, producing the state x(k); the coarse-but-fast measurement y1(k) and the accurate-but-delayed measurement y2(k) feed a scheduler and filter, which output the estimate x̂(k).]

Figure 1.1: The switched sensor problem that is considered. How should one switch between two heterogeneous sensors to get a good estimate x̂.

• Design, development, implementation and experimental validation of an ultrasound based localization system, in joint work with PhD student Magnus Lindhe.

• Design, development, implementation and experimental validation of a vision based localization system.

1.4 Outline

The outline of this report is as follows.

Chapter 2 presents the localization techniques and methods, dynamical models, filters and schedulers used to perform localization based on heterogeneous networks. It introduces all the proposed tools and demonstrates them with examples.

Chapter 3 illustrates the experimental set-up designed, developed and implemented in order to validate the system. This chapter presents all the hardware and software components that are part of the system: the wireless sensor network designed and used, the ultrasound system, the vision based system and the fusion center. All the components are thoroughly described.

Chapter 4 shows the experimental validation performed on the algorithms and methods developed in the previous chapters. A complete analysis with illustrative simulations is presented. The report is concluded in Chapter 5, where the conclusions about the experimental validation and the future work needed are presented.


Chapter 2

Localization and tracking using an heterogeneous sensor network

This chapter introduces the localization techniques and the system modeling approaches, presents the estimator used, and gives an overview of the scheduling approaches.

In order to perform localization, two different sensors are used. One is an ultrasound sensor that provides information through a wireless sensor network with no delay and no communication cost, and is known to be a cheap sensor. The other sensor, a web-camera, has a certain delay due to image processing time and data transmission, as well as a communication cost due to the substantial amount of energy spent on each transmission; it is therefore considered an expensive device. From now on the ultrasound sensor is denoted the low quality sensor (lq) and the camera the high quality sensor (hq). The modeling approaches to cope with this type of system are discussed next.

2.1 Overview of techniques and methods to determine location

As seen in [45] two basic approaches for determining the location of an objectcan be formulated.

• Location from landmarks. In this approach, the location system is implemented by selecting a set of landmarks or reference points with known coordinates.


These reference points can be moving, provided that their position is always known. If one has distance measurements from a sufficient number n of reference points to an object O, the location of O is easy to obtain just by solving a system of equations. An example of this technique is the localization used in [43], [45] and also in this MSc thesis, where a WSN is used to locate an object within its coverage area.

• Location from dead-reckoning. From [45], dead-reckoning is the technique that determines the position of an object with respect to some starting point using the dynamics of motion of the object. As an example, consider an object O that starts moving at a given point P along a direction Θ at a constant velocity v; its position coordinates at time t are given by (vt cos Θ, vt sin Θ). Dead-reckoning can thus be interpreted as the method by which an object detects its own position by measuring its own dynamics, without known external references or sensors. This method has the drawback of accumulating measurement errors, since various embedded sensors are used. Because of this, most location systems are implemented using landmarks or a combination of both approaches.

As said before, our approach is based on location using landmarks. The techniques and methods available to determine location using landmarks are presented next. Techniques are understood here as the types of measurements that can be made to achieve localization of objects, and methods as the solutions available to solve the resulting equations, given the measured distances and the positions of the landmarks.

2.1.1 Techniques

If a landmark system is used, the object has to be able to determine its position based on measurements relative to each reference point. The following techniques may be used for the object to determine its position in a landmark-based system.

• Distance and angle. This is one of the most used techniques for position estimation. One reason is the easy implementation and position computation; another is the low price of the sensors available to perform this technique (ultrasound transducers, microphones, etc.). Usually, distances or angles from the landmarks to the object are measured, and trilateration or triangulation methods are then used to compute the object position.


Examples of such systems are GPS [46], RADAR [47] and this MSc thesis.

• Signal signature. In this approach, the object uses the signal strength value, usually of an RF message transmitted by the landmarks, in order to determine its position in space. It is also possible to use the reverse scheme, where the object transmits an RF message and the reference nodes, knowing the signal strength, compute the object position. Several projects have been implemented using this technique, such as [50], [48] and [49].

Since the distance measurement approach will be used to measure the distance from the object to the landmarks, the techniques it relies on are discussed next.

Distance Measurement

There are two common techniques for measuring the distance to an objectgiven a reference point [45]:

• Time-of-flight. This technique measures the time t taken for some signal to travel between two points (reference point and object, or vice-versa). If the speed of the signal is v, the distance d is given by d = v × t. One example of this technique is GPS [46]. This method is used in our solution, based on the time of flight of the ultrasound signal from the transmitter to the receiver. In our case, the receiver knows the time of departure from an RF message sent at the same time as the ultrasound signal from the transmitter (synchronization). Since the speed of light is much greater than the speed of sound, the time-of-flight can be calculated this way (a small sketch of this computation follows this list).

• Time-difference-of-arrival. A TDOA-based system measures the time difference with which an emitted signal arrives at two or more given reference points. From this time difference, and knowing the speed of sound, it is possible to calculate the distances between the reference points and the object transmitting the signal.
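Below is a minimal sketch of the time-of-flight distance computation referenced in the first item above, assuming the RF synchronization message arrives effectively instantaneously (speed of light >> speed of sound). The constant value and function names are illustrative assumptions, not part of the thesis software.

```python
# Illustrative sketch: ultrasound distance from time-of-flight with RF sync.
SPEED_OF_SOUND_CM_S = 34300.0  # ~343 m/s in air at room temperature (assumed)

def tof_distance_cm(t_rf_arrival_s: float, t_us_arrival_s: float) -> float:
    """Distance d = v * t, taking the RF sync arrival as the ultrasound
    departure time, since the RF propagation delay is negligible."""
    tof = t_us_arrival_s - t_rf_arrival_s
    if tof < 0:
        raise ValueError("ultrasound cannot arrive before the RF sync message")
    return SPEED_OF_SOUND_CM_S * tof

# A 2.9 ms flight time corresponds to roughly 99.5 cm.
print(tof_distance_cm(0.0, 0.0029))
```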

2.1.2 Methods

This sub-section discusses different methods used for determining the posi-tion of an object.


• Triangulation. Measuring the angle to a given object from at least two known reference points to determine its position is known as triangulation; see [51] for more details. The use of triangulation requires the ability to know the angle between object and reference points, which can be done by using microphones as sensors, for example. In order to determine the object position in 3D, one needs three reference points.

• Trilateration. Trilateration is a method of determining the relative positions of objects using the geometry of triangles, in a similar fashion to triangulation. Trilateration uses the known locations of two or more reference points, and the measured distance between the subject and each reference point. To accurately and uniquely determine the relative location of a point in 3D using trilateration alone, generally at least four reference points are needed [52].

Since this method is the one used in this work, it deserves further discussion.

Assuming that one needs to obtain a 2D position in space, it is necessary to have at least three reference points. As explained before, the distance between each reference point and the object has to be known; it is denoted by di, where i is the reference point number. For each reference point i one has:

di = √((x0 − xi)² + (y0 − yi)² + (z0 − zi)²)   (2.1)

and assuming three reference points, one can generate a system of three equations to be solved with respect to the three variables x0, y0 and z0. Solving this system, the result is not unique: two values are obtained for the z coordinate. In our case we can assume that the robot is always in the z0 = 0 plane, which leaves three equations with only two variables, giving an exact (x0, y0) solution. One should note that for this method to hold, the three points cannot all have the same x or the same y coordinate, since in that case one of the circles will not intersect the other two and the solution cannot be obtained. Figure 2.1 illustrates this method. Notice that it is not necessary to place the points on the x and y axes as illustrated in the picture; they can take any position, with the constraint of not all lying on the same line.


[Figure 2.1 (diagram): three reference points P1, P2 and P3 in the xy-plane, with circles of radii d1, d2 and d3 intersecting at the point (x0, y0).]

Figure 2.1: The trilateration method for 2D accurate measurement

The equation system was solved with the MATLAB Symbolic Math Toolbox in the implementation of the ultrasound based system described in Chapter 3. A numerical alternative is sketched after this list.

• Multilateration. Multilateration, also known as hyperbolic position-ing, is the process of locating an object by accurately computing thetime difference of arrival (TDOA) of a signal emitted from the object tothree or more receivers. It also refers to the case of locating a receiverby measuring the TDOA of a signal transmitted from three or moresynchronised transmitters. Multilateration should not be confused withtrilateration, which uses distances or absolute measurements of time-of-flight from three or more sites [53]. To determine the position of anobject in a 3D space four reference points should be used.
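As a numerical alternative to the symbolic solution mentioned in the trilateration item above, the sketch below linearizes the squared-range equations (2.1) on the z0 = 0 plane and solves them in the least-squares sense. This is an assumed illustration; the names and the least-squares choice are not from the thesis.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Estimate (x0, y0) from >= 3 reference points on the z = 0 plane.

    Subtracting the squared-range equation of the last anchor from the
    others cancels the quadratic terms, leaving a linear system A p = b
    that is solved in the least-squares sense (tolerant to noisy ranges).
    """
    anchors = np.asarray(anchors, dtype=float)   # shape (n, 2), n >= 3
    d = np.asarray(distances, dtype=float)       # shape (n,)
    xn, yn = anchors[-1]
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + anchors[:-1, 0] ** 2 - xn ** 2
         + anchors[:-1, 1] ** 2 - yn ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Exact distances from three anchors to the point (30, 40) recover it:
P = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = np.array([30.0, 40.0])
dists = [float(np.linalg.norm(target - np.array(a))) for a in P]
print(trilaterate_2d(P, dists))  # ~ [30. 40.]
```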

One can summarize this section by putting forward the techniques and methods that are going to be used in this work:

• Landmark Based System with one object (ultrasound transmitter) and 16 reference points (ultrasound receivers).

• Distance measurement performed with a time-of-flight calculation between the transmitter and receiver. An RF signal is used to synchronize the receiver node; the time-of-flight is then the time that the ultrasound signal takes from the transmitter node to the receiver node. This is discussed further in Section 3.2.1.


• Trilateration method for computing the position given the Time-of-flight distance measurements.

2.2 Dynamical Models

In order to describe the dynamics of the mobile agent two different statespace models are proposed for evaluation.

One has to consider that, since no real robot was used in this work, only linear models were designed, which are approximations of the non-linear models of robots or radio-controlled cars seen in [43] and [44]. As a result, the position estimated with these models will not be optimal. Linear approximation models were used because the objective of this work was to follow the same approach as in [8], which was made to cope with linear systems. For non-linear systems, other considerations would have to be taken into account when designing the filter, which was not the objective of this work.

In order to model the dynamics of the mobile agent, one should first state that the sensors are used at each time step according to the scheduling defined in Figure 1.1. The sets Thq and Tlq are defined for each scheduling approach: when k ∈ Thq the high quality sensor is used, and when k ∈ Tlq the low quality sensor is used. How both sets are defined for the different scheduling approaches is shown in Section 2.5. Next, the two models are described.

2.2.1 Model 1

The first model assumes that the plant describes a random walk in position, i.e., the movement of the current step varies randomly in direction from the movement of the previous one by a Gaussian white process noise w ∈ Rᵐ with zero mean and non-zero variance. It is assumed that the plant we measure is a first-order linear plant,

x(k + 1) = Ax(k) + Bw(k), k ≥ 0, (2.2)

y1(k) = C1x(k) + v1(k), k ∈ Tlq (2.3)

y2(k) = C2x(k − d) + v2(k), k ∈ Thq (2.4)

with state vector x(k) ∈ Rⁿ and measurements y1(k), y2(k) ∈ Rᵖ with Gaussian white measurement noises v1(k), v2(k) ∈ Rᵖ. The covariance of the process noise is E w(k)w(k′)ᵀ = W δ(k − k′), and the covariances of the measurement noises are E v1(k)v1(k′)ᵀ = Σ δ(k − k′) and E v2(k)v2(k′)ᵀ = σ δ(k − k′).



Figure 2.2: Mobile agent moving according to model 1 with Gaussian white process noise with zero mean and variance of 10 cm/step. Starting position at (0,0).

It is assumed that the high-quality sensor measurement y2(k) is more accurate than y1(k), i.e., σ < Σ, but it is delayed by d samples because of a higher processing time. It is assumed that the delay of the low quality sensor can be neglected, since its processing time is lower than one time step. Note that y1(k) is not defined when k ∈ Thq and y2(k) is not defined when k ∈ Tlq.

Note that the measurements y1(k) and y2(k) can have different dimensions, i.e., p1 ≠ p2.

Also, A = B = C1 = C2 = 1, so that the model is a random walk.

Figure 2.2 shows a possible result when this model is applied to a mobile agent placed at the starting position (0,0), moving with a Gaussian white process noise with zero mean and variance equal to 10. This means that the mobile agent is expected to have a velocity variance of 10 cm per time step. One can see that it describes a pure random walk, with no connection between the direction of movement in one step and the next. A short simulation sketch is given below.
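A minimal simulation sketch of model 1, with A = B = 1 per coordinate; the seed and step count are illustrative assumptions:

```python
import numpy as np

# Random walk in position: x(k+1) = x(k) + w(k), starting at (0,0).
rng = np.random.default_rng(0)
W, steps = 10.0, 200                               # process noise variance, horizon
w = rng.normal(0.0, np.sqrt(W), size=(steps, 2))   # Gaussian white noise per axis
x = np.vstack([np.zeros(2), np.cumsum(w, axis=0)]) # trajectory over X and Y
print(x[-1])                                       # final (X, Y) position
```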

2.2.2 Model 2

The second model covers the case where, instead of a random walk in position, the mobile agent describes a random walk in velocity, i.e., its velocity in the current step varies randomly from the velocity in the previous step, while the movement direction varies only slightly. It is assumed that the plant we measure is a second-order linear plant,



Figure 2.3: Mobile agent moving according to model 2 with Gaussian white process noise with zero mean and variance of 1 cm²/step. Starting position at (0,0) with zero velocity.

(x, ν)(k + 1) = A (x, ν)(k) + B w(k), k ≥ 0,   (2.5)

y1(k) = C1 (x, ν)(k) + v1(k), k ∈ Tlq   (2.6)

y2(k) = C2 (x, ν)(k − d) + v2(k), k ∈ Thq   (2.7)

where

A = [1 h; 0 1],  B = [0; 1],  C1 = C2 = [1 0],

h is the step time, and the other variables have the same characteristics as the ones presented for the first model.

The result of applying this model to a mobile agent with initial position (0,0) and initial velocity equal to zero can be seen in Figure 2.3. The variance of the process noise is W = 1, which means that the acceleration varies with zero mean and variance 1 cm² at each step. As expected, the direction of the movement varies slowly, but there are random variations in the acceleration of the agent. A simulation sketch follows.
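A matching simulation sketch for model 2, the random walk in velocity (double integrator per coordinate); h, W and the horizon are illustrative assumptions:

```python
import numpy as np

h, W, steps = 1.0, 1.0, 200
A = np.array([[1.0, h], [0.0, 1.0]])  # per-coordinate dynamics from (2.5)
B = np.array([0.0, 1.0])              # noise enters the velocity component
rng = np.random.default_rng(0)
state = np.zeros((2, 2))              # row 0: position, row 1: velocity; columns X, Y
for _ in range(steps):
    w = rng.normal(0.0, np.sqrt(W), size=2)  # acceleration-like white noise
    state = A @ state + np.outer(B, w)       # (x, v)(k+1) = A (x, v)(k) + B w(k)
print(state[0])                              # final (X, Y) position
```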

2.3 Kalman Filter

This section gives an introduction to the Kalman filter. Assume a linear Gaussian state space model (LGSSM),


[Figure 2.4 (block diagram): the process noise wk enters through a sum and a unit delay producing xk+1 and xk; A feeds the state back; C maps xk to the output, to which the measurement noise vk is added to give yk.]

Figure 2.4: Linear Gaussian state space model.

xk+1 = A xk + B wk,  wk ∼ white N(0, Wk)   (2.8)

yk = Ck xk + vk,  vk ∼ white N(0, Vk)   (2.9)

and assume that the parameters Ak, Bk, Ck, Wk and Vk are known. Assume x0, vk, wk are mutually independent and that x0 ∼ N(x̄0, P0). The aim of the Kalman filter is to compute the optimal state estimate, i.e. E{xk | y1, ..., yk} = E{xk | Yk}, where Yk denotes all the observations up to the current time step k. Figure 2.4 shows the block diagram of the state space model, equations (2.8) and (2.9).

Denote the Kalman filter state estimates as

xk|k = E {xk|Yk} , (2.10)

xk+1|k = E {xk+1|Yk} (2.11)

the Kalman filter covariance estimates as,

Pk|k = E{(xk − xk|k)(xk − xk|k)ᵀ},   (2.12)

Pk|k−1 = E{(xk − xk|k−1)(xk − xk|k−1)ᵀ}.   (2.13)

The Kalman filter equations in prediction form are:

xk+1|k = (Ak − Kk Ck) xk|k−1 + Kk yk   (2.14)

Pk+1|k = Ak [ Pk|k−1 − Pk|k−1 Ckᵀ (Ck Pk|k−1 Ckᵀ + Vk)⁻¹ Ck Pk|k−1 ] Akᵀ + Bk Wk Bkᵀ,  P0|−1 = P0   (2.15)

Kk = Ak Pk|k−1 Ckᵀ (Ck Pk|k−1 Ckᵀ + Vk)⁻¹   (2.16)


where equation (2.14) is the new state estimate, (2.15) is the Riccati equation which propagates the error covariance matrix, and (2.16) is the Kalman gain.

The Kalman filter will be used to obtain better estimates of the mobile agent position based on the noisy measurements given by the sensors. It was chosen since it is known [12] that this filter minimizes the covariance matrix P for all k when the state space model is an LGSSM, which is true for models 1 and 2. A sketch of one filter step is given below.
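The following is a minimal sketch of one step of the filter in prediction form, equations (2.14)-(2.16); the function name and the example numbers are assumptions for illustration only.

```python
import numpy as np

def kalman_predictor_step(x_pred, P_pred, y, A, B, C, W, V):
    """One prediction-form step: x_pred = x_{k|k-1}, P_pred = P_{k|k-1};
    returns x_{k+1|k} and P_{k+1|k}."""
    S = C @ P_pred @ C.T + V                        # innovation covariance
    K = A @ P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain, eq. (2.16)
    x_next = (A - K @ C) @ x_pred + K @ y           # state update, eq. (2.14)
    P_filt = P_pred - P_pred @ C.T @ np.linalg.inv(S) @ C @ P_pred
    P_next = A @ P_filt @ A.T + B @ W @ B.T         # Riccati recursion, eq. (2.15)
    return x_next, P_next

# Example with the model-2 matrices of Section 2.2.2 (one coordinate, h = 1):
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
W, V = np.array([[0.003]]), np.array([[12.0]])      # process and lq measurement noise
x, P = np.zeros((2, 1)), np.zeros((2, 2))           # P(0) = 0
x, P = kalman_predictor_step(x, P, np.array([[1.5]]), A, B, C, W, V)
```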

2.4 Proposed Filter

In order to apply the Kalman filter to the models described in Section 2.2one has to rewrite them to accommodate the time delay d.

Introducing a new stacked state vector x̄ by

x̄(k) = [x(k); x(k − 1); ...; x(k − d)]   (2.17)

Then the model becomes

x̄(k + 1) = Ā x̄(k) + B̄ w(k),   (2.18)

y(k) = C̄(k) x̄(k) + v(k),   (2.19)

where

Ā = [A 0 ... 0 0; In 0 ... 0 0; 0 In ... 0 0; ...; 0 0 ... In 0],  B̄ = [B; 0; ...; 0]   (2.20)

C̄(k) = [C1 0 ... 0 0] for k ∈ Tlq, and C̄(k) = [0 0 ... 0 C2] for k ∈ Thq   (2.21)

E v(k)v(k + k′)ᵀ =: V(k) δ(k′)   (2.22)


V(k) = Σ for k ∈ Tlq, and V(k) = σ for k ∈ Thq.   (2.23)

The system defined above in equations (2.19) to (2.23) is a linear time-periodic system of period N; the periodicity comes from the periodic sensing. After defining the new model, one has to define the minimal possible covariance P∗(k) (∗ denotes minimal) of the estimation error, which satisfies the time-varying recursive Riccati equation of the form

P∗(k + 1 | k) = Ā [ P∗(k | k) − P∗(k | k) C̄(k)ᵀ (C̄(k) P∗(k | k) C̄(k)ᵀ + V(k))⁻¹ C̄(k) P∗(k | k) ] Āᵀ + B̄ W B̄ᵀ   (2.24)

where P ∗(k) is the covariance of the estimation error of the state x(k).

The time-varying Kalman filter that achieves the optimal accuracy P∗(k) is given by

x̂(k + 1) = (Ā − K(k) C̄(k)) x̂(k) + K(k) y(k),   (2.25)

and

K(k) = Ā P∗(k) C̄(k)ᵀ (C̄(k) P∗(k) C̄(k)ᵀ + V(k))⁻¹   (2.26)

where x̂(k + 1) is the new estimate of the stacked state and K(k) is the Kalman gain. A sketch of the augmented-model construction in code is given after the following list. There are properties of the Kalman filter [11] that are interesting for the type of problem posed, such as:

• Kalman filter is a linear, discrete-time, finite dimensional system.

• The covariance Pk|k can be precomputed if the matrix C is independent of the measurements. When using two different sensors this does not hold, since C changes depending on whether k ∈ Tlq or k ∈ Thq.

• Steady-state Kalman filter. If A, B, C, W and V are time-invariant, then under stability conditions K and Pk converge to constants. When using two sensors the matrix C can change, so this property can only sometimes be applied. If the switching is periodic, as presented in Sections 2.5.1 and 2.5.2, the average of Pk converges to a constant.

• Amongst the class of linear estimators the Kalman filter is the minimumvariance estimator.
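A minimal sketch of how the augmented matrices of equations (2.18)-(2.21) can be built for a delay of d samples; the helper name and the numpy conventions are assumptions, not the thesis implementation.

```python
import numpy as np

def augment_for_delay(A, B, C1, C2, d):
    """Stack x(k), ..., x(k - d) so the delayed measurement y2 reads the
    oldest block; A is n x n, B is n x m, C1 and C2 are p x n."""
    n = A.shape[0]
    Abar = np.zeros(((d + 1) * n, (d + 1) * n))
    Abar[:n, :n] = A                           # top block propagates x(k)
    for i in range(d):                         # shift register for past states
        Abar[(i + 1) * n:(i + 2) * n, i * n:(i + 1) * n] = np.eye(n)
    Bbar = np.zeros(((d + 1) * n, B.shape[1]))
    Bbar[:n, :] = B                            # noise only enters the newest block
    C_lq = np.hstack([C1] + [np.zeros_like(C1)] * d)  # reads x(k), eq. (2.21)
    C_hq = np.hstack([np.zeros_like(C2)] * d + [C2])  # reads x(k - d)
    return Abar, Bbar, C_lq, C_hq

# Example: model 1 (scalar A = B = C1 = C2 = 1) with delay d = 3.
Abar, Bbar, C_lq, C_hq = augment_for_delay(
    np.eye(1), np.eye(1), np.eye(1), np.eye(1), d=3)
```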


2.5 Scheduling

This section presents the scheduling problem that needs to be solved, and two possible approaches.

Since there exist two types of sensors with different characteristics, and a trade-off must be made between the communication cost of the high-quality sensor and the estimation quality, their scheduling has to be performed. To perform the scheduling, a performance criterion enabling a well-defined switching decision is necessary.

In the following subsections, two different approaches to achieving an optimal scheduling based on the trade-off mentioned above are presented: an offline approach and a covariance-based one. Their characteristics and illustrative examples are provided, and conclusions about the best approach are put forward.

As estimation quality criterion, the average trace of the covariance of the estimation error over the time interval [0, k] was chosen,

paverage(k) := (1/(k + 1)) Σᵢ₌₀ᵏ trace P(i | i)   (2.27)

The objective is then to minimize paverage(k) with a proper choice of the high-quality sensor usage, since it is a measure of how accurately one knows the state and takes into account the measurements performed by both sensors. One can now define a performance criterion VT(k, M), the sum of the average communication cost and the average error covariance,

VT(k, M) := (M/k) λ + paverage(k)   (2.28)

where M is the number of times the high-quality sensor is used on the interval [0, k] and λ is the communication cost. One should notice that even though the cost function is denoted in terms of M throughout this work, its value can be written in terms of N as M = (k + 1)/N, where N is the high-quality sensor switching period.

We would like to minimize the criterion (2.28) with respect to the high-quality sensor usage, i.e.,

min over |Thq| = M of VT(k, M)   (2.29)

for all k, where |Thq| is the number of elements in the set Thq. As explained in [8], one can compute p∗average(k) by iterating the Riccati equation (2.24); since p∗average(k) converges to a constant p∗average for large k, we will usually discuss only this limiting value.


One can compute the periodic solutions P(k) and K(k) by simply iterating equations (2.24) and (2.16), because of the global convergence property given in [8]. There are two cases of special interest: p∗average(1) and p∗average(∞). Both cases collapse into time-invariant problems, and correspond to the high-quality or the low-quality sensor, respectively, being used all the time.

One may think that p∗average is a decreasing function of M. The assumption that, the high-quality sensor being more accurate, using it more often (high M) yields a better estimate is not correct, since a time delay d is involved whenever the high-quality sensor is used. This makes the high-quality measurements d samples older than the measurements given by the low-quality sensor. If the process noise entering the system is sufficiently large (large W), i.e., the trust in the system model is low, then we can get p∗average(M = 0) < p∗average(M = ∞), which means that using the low-quality sensor all the time gives better estimates of the position.

A legitimate question is then whether it is useful to use the high-quality sensor at all, given its old measurements. This is discussed in Sections 2.5.1 and 2.5.2 for both schedulers, where it is shown that scheduling sequences with M ≠ ∞ and M ≠ 0 arise for both scheduling approaches.

2.5.1 Offline scheduler

In the offline scheduling approach, it is assumed that the sets Thq and Tlq are defined as follows:

Thq(N) = {N − 1, 2N − 1, 3N − 1, ...} = {k ≥ 0 : (k + 1) mod N = 0},
Tlq(N) = {1, 2, ..., N − 2, N, ...} = {k ≥ 0 : (k + 1) mod N ≠ 0},

where the period N ≥ 1. That is, when k ∈ Thq(N) the high quality sensor is used, and when k ∈ Tlq(N) the low quality sensor is used.

The scheduler presented in [8] is used, and the aim is an N-periodic switching in which the value of N is pre-calculated offline. Let us now define how the value of N is obtained. As seen in (2.30), at a time k the optimal sensor cycle period is given by

N∗(k) = arg min over N of ( (M/k) λ + p∗average(k, N) ),   (2.30)


where p∗average(k, N) is characterized as in Equation (2.27), but its value is now taken over a given switching period N, and where M is the sensor usage, in the offline case given by M = ⌊(k + 1)/N⌋. It is clear that 1 ≤ N∗(k) ≤ k + 1, so that (2.30) is a simple minimization problem over a finite set.

The steady-state (k → ∞) optimal period N∗ for the sensor schedule is given by

N∗ = arg min over N of ( (M/k) λ + p∗average(N) )   (2.31)
   = arg min over N of V∗T(N)   (2.32)

One can see that N∗ can easily be calculated with (2.32). One just needs to calculate p∗average(N) for a delay d with (2.27). After this, the value of V∗T(N) given by (2.28) can be computed with a proper λ over a given interval of N; the minimum of this function then gives N∗. A sketch of this computation is given below.

Next, an example illustrating the offline scheduler for both model 1 and model 2 is presented.

Examples

Assume that the parameters for models 1 and 2 are the ones presented in Sections 2.2 and 2.4, and that P∗(k) ∈ R. One can see that P∗(k) ∈ R since P∗(k) = P∗(0,0)(k), where P∗(0,0)(k) is the (1,1)-block of (2.24). It is only when k ∈ Thq(N) that more information than P∗(0,0)(k) is needed from the full covariance.

The values of the functions p∗average(k, N) and V∗(N) are given by equations (2.27) and (2.28), with P∗(k, N) = P∗(0,0)(k, N).

In order to perform the offline scheduling for both models, the relationship between the process noise W and the delay d of the high-quality sensor was determined. This was done in order to check, for d ∈ {3, 4, 5, 6, 7, 8} and W ∈ [0, 10], where the use of high-quality measurements improves the estimation, i.e., when N∗(W, d) ≠ ∞. The results are shown in Table 2.1. For each value of d, the highest value Wmax for which the high-quality measurements remain useful is identified. The sensor measurement noise parameters are Σ = 12 and σ = 1, representing the error variances of the low-quality and high-quality sensor respectively. These values were obtained from tests made with each sensor and are presented in Chapter 4.


d    Wmax Model 1    Wmax Model 2
3    1.0             0.09
4    0.6             0.03
5    0.5             0.01
6    0.3             0.006
7    0.2             0.003
8    0.1             0.002

Table 2.1: Maximum values of process noise W for a given delay d for eachmodel.

The delay values in [0, 2] are not taken into account because it is impossible (as will be seen in Chapter 4) to have a delay lower than three for the high-quality sensor used. Note that Wmax = ∞ when there is no delay, since then using the high-quality sensor is the optimal solution and the maximum process noise can be any value.

As can be seen in Table 2.1, the values of Wmax decrease with increasing delay, which was expected: the older the received information, the more slowly the vehicle dynamics must vary. For each model the process noise has a different interpretation, as seen in Section 2.2: for the first-order model 1, W represents the variation in the velocity of the agent, while for the second-order model 2, W represents the variation in its acceleration. The values shown in Table 2.1 therefore cannot be directly compared. The maximum variation of the process noise for model 1 lies in [0.2, 1] m/s and for model 2 in [0.002, 0.09] m/s² for the delays considered. The following examples refer only to delays of 3 s and 7 s. The value 3 s was chosen because the lowest processing time needed to take a picture, analyse it and obtain the robot position, which is the delay d, is 3 s, as will be seen in Chapter 4. The value 7 s was chosen to provide a comparison with a higher delay d and to evaluate how the system copes with it.

Example: Model 1

For the tests made with model 1 the parameters were adjusted as follows:

• Process noise W = 0.2, which is the highest value for the larger delay. It is assumed to be better to use a lower process noise in order to cope with the worst case. As explained before, this value is the variance of the velocity of the agent.

• Measurement noises Σ = 12 and σ = 1. The accuracy of the high-quality sensor is 12 times greater than that of the low-quality one.



Figure 2.5: The function p∗average(N) for two different delays d using model 1.

• P∗(0) = 0 if the position of the object is known exactly at the beginning of the test, or a high value to make the estimator trust the first measurement more on the first iteration. In this example P∗(0) = 0 is used, i.e., the starting position of the agent is taken as known.

Figure 2.5 shows the function p∗average. As can be seen, it is not at all the case that decreasing N always yields a more accurate average estimate. For a delay d = 3 the optimal switching period is N∗ = 1, but for d = 7 the optimal switching period is N∗ = 7. One should remember that this function only evaluates the performance in terms of estimation accuracy, not taking into account any communication cost λ. It is also seen that all the curves converge to the same value p∗average(M = 0). This is because the high-quality sensor is not used at all when N = ∞, and the low-quality sensor has no delay.

How the covariance P∗(k, N), N = 1, 7, ∞, evolves over time for delay d = 7 is shown in Figure 2.6. As expected, P∗(k, N) converges to periodic trajectories. It is also seen that it is better to use N = 7 for delay d = 7, because the covariance P∗(k, N) takes lower values. As one would expect, for d = 3 the value of P∗(k, N) as k evolves is lowest when N = 1.

In Figure 2.7 we find the optimal sensor cycle period N∗. The value of V∗(N) is shown for a communication cost λ = 0.3, a value chosen to illustrate the effect of the communication cost.


[Figure 2.6 (plot): P∗(k, N) versus k over 200 steps, with curves for N = 1, N = 7 and N = ∞.]

Figure 2.6: The function P∗(k, N) for different periods N when delay d = 7 using model 1.


Figure 2.7: The performance cost VT as a function of period N for two different delays d using model 1.

Even though the accuracy of the average estimate is better for N = 1 when d = 3, for this communication cost it is better to use a periodic switching with N = 2 instead, in order to obtain the optimal performance cost V∗. One can also notice that for a delay d = 7 it is now better not to use the high-quality sensor at all, even if the estimation accuracy is improved when using N = 7. For other results see [8].

Example: Model 2

For the tests made with model 2 the parameters were adjusted as follows:



Figure 2.8: The function p∗average(N) for two different delays d using model 2.

• Process noise W = 0.003, which is the highest value for the larger delay. It is assumed to be better to use a lower process noise in order to cope with the worst case. As explained before, this value is the variance of the acceleration of the agent.

• Measurement noises Σ = 12 and σ = 1. The accuracy of the high-quality sensor is 12 times greater than that of the low-quality one.

• P∗(0) = 0 if the position of the object is known exactly at the beginning of the test, or a high value to make the estimator trust the first measurement more on the first iteration. In this example P∗(0) = 0 is used, i.e., the starting position of the agent is taken as known.

Figure 2.8 shows the function p∗average. Again, as in the example for the first model, it is not at all the case that decreasing N always yields a more accurate average estimate. For a delay d = 3 the optimal switching period is N∗ = 1, but for d = 7 the optimal switching period is N∗ = 5. One should remember that this function only evaluates the performance in terms of estimation accuracy, not taking into account any communication cost λ. It is also seen that all the curves converge to the same value p∗average(∞). This is because the high-quality sensor is not used at all when N = ∞, and the low-quality sensor has no delay.

How the covariance P∗(k, N), N = 1, 5, ∞, evolves over time for delay d = 7 is shown in Figure 2.9.


[Figure 2.9 (plot): P∗(k, N) versus k over 200 steps, with curves for N = 1, N = 5 and N = ∞.]

Figure 2.9: The function P∗(k, N) for different periods N when delay d = 7 using model 2.

As expected, P∗(k, N) converges to periodic trajectories. It is also seen that it is better to use N = 5 for delay d = 7, because the covariance P∗(k, N) takes lower values. As one would expect, for d = 3 the value of P∗(k, N) as k evolves is lowest when N = 1.

In Figure 2.10 we find the optimal sensor cycle period N∗. The value of V∗(N) is shown for a communication cost λ = 0.3. This value was chosen to show that even though the accuracy of the average estimate is better for N = 1 when d = 3, for this communication cost it is better to use a periodic switching with N = 2 instead, in order to obtain the optimal performance cost V∗. One can also notice that for a delay d = 7 it is now better not to use the high-quality sensor at all, even if the estimation accuracy is improved when using N = 5. For other results see [8].

In the experimental validation one can then expect that, when using the offline scheduler with the parameters above, the accuracy of the estimated position is higher with the first model than with the second for the cases N = ∞ and N = 2. For N = 1, i.e., when the high-quality sensor is used all the time, the accuracy with the second model is better. All these conclusions are based on the values of p∗average for these different N values when comparing both models.

2.5.2 Covariance-Based scheduler

The first approach discussed was the offline scheduler, where the sensor switching was assumed to be periodic. In this section, we instead let the sensor switching be decided online.



Figure 2.10: The performance cost VT as a function of period N for two different delays d using model 2.

The switching is based on how much increase in accuracy one can get from using a particular sensor at that time instant. We call this the covariance-based scheduler, or covariance-based switching, as presented in [8].

The method is based on evaluating the iteration of the Riccati equation (2.24) maxD steps ahead for all possible switching combinations. To do this, one can no longer use the model based on the periodicity N as defined in Section 2.5.1; instead a switch schedule s(k) is defined. The objective of this method is first to find the value f = min P∗(0,0)(k + maxD), i.e., one performs a tree search for the lowest value of P∗(0,0) at tree depth maxD, at instant k + maxD. Figure 2.11 illustrates this.

For maxD = 2, one has to look at four different values of P∗(0,0) at k + maxD. Once the minimum value of P∗(0,0) at k + maxD is known, the sensor switching path to be taken from instant k to k + maxD is known as well. If there are equal minimum P∗(0,0) values at k + maxD, one should choose the minimum-cost path. Here only the value of P(0,0)(k) is used to calculate paverage(k), for the same reasons as in Section 2.5.1. Taking the same assumptions as in Section 2.5.1, a communication cost λ is incurred whenever the high-quality sensor is used. In the end, all the paths have distinct costs, and one chooses the minimum. This algorithm is run at each k = (num × maxD) + 1, with num = 0 for k = 1, and each time a new search is made the value of num is increased by one.


[Figure 2.11 (diagram): binary tree rooted at Pk(0,0), branching on the hq/lq choice at each step to the Pk+1(0,0) nodes and on to the four Pk+2(0,0) leaves, for maxD = 2.]

Figure 2.11: Tree search example for covariance-based switching with maxD = 2.

So, for instance, if maxD = 2, then we look ahead at k = 1, 3, 5, .... The chosen path is stored in the sets Thq and Tlq, and the sensor-switching action is based on these sets. It is assumed that the first sensor to be used is the low-quality sensor. Figure 2.12 shows the flow diagram of the discussed algorithm.

Let us now define the switch schedule s(k) as

s(k) = 1 if k ∈ Tlq, and s(k) = 2 if k ∈ Thq,   (2.33)

and the following C̄(k) and V(k) are used in the Riccati equation (2.24):

C̄(k) = [C1 0 ... 0 0] if s(k) = 1, and C̄(k) = [0 0 ... 0 C2] if s(k) = 2   (2.34)

V(k) = Σ if s(k) = 1, and V(k) = σ if s(k) = 2.   (2.35)

For a given initial covariance P∗(0), the schedule s(k) can then be computed online. A sketch of the tree search is given below.
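A minimal sketch of the depth-maxD tree search follows. It enumerates all 2^maxD switching sequences, scoring each leaf by the (0,0)-block covariance it reaches plus the accumulated communication cost, which is one reasonable reading of the tie-breaking rule described above; the names and scoring detail are assumptions, not the thesis code.

```python
from itertools import product
import numpy as np

def covariance_based_schedule(P0, Abar, Bbar, C_lq, C_hq, W, Sigma, sigma,
                              lam, maxD, n_pos=1):
    """Exhaustive search over the 2**maxD sensor sequences of Figure 2.11.
    Returns the best sequence (1 = lq, 2 = hq, as in (2.33)) and its
    covariance at depth maxD."""
    best_seq, best_score, best_P = None, np.inf, None
    for seq in product((1, 2), repeat=maxD):
        P = P0.copy()
        for s in seq:                           # iterate Riccati along the path
            C, V = (C_hq, sigma) if s == 2 else (C_lq, Sigma)
            S = C @ P @ C.T + V
            P_filt = P - P @ C.T @ np.linalg.inv(S) @ C @ P
            P = Abar @ P_filt @ Abar.T + Bbar @ W @ Bbar.T
        score = np.trace(P[:n_pos, :n_pos]) + lam * seq.count(2)
        if score < best_score:                  # keep the minimum-cost path
            best_seq, best_score, best_P = seq, score, P
    return best_seq, best_P
```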

Examples

Next, an example for each model shows the behavior of the covariance-based scheduler compared to the offline one.


Pk(0,0)

Pk+1(0,0)

Pk+2(0,0)

Pk+2(0,0)

Pk+2(0,0)

Pk+2(0,0)

Pk+1(0,0)

maxD=2

hq

hq

lqhq

lq

lq

num=0 k=1

maxD=n S(1)=1

Initialization

k=(num*(maxD))+1) ?

Calculate minimum cost path and store

in vector S(k)

Yes

No

s(k)=S(k)Take action

Follow path S(k)

Generate tree and calculate

min(P*(0,0)(k+maxD))

Figure 2.12: Flow diagram of the covariance-based switching algorithm.

The covariance-based scheduler is run for maxD ∈ [1, 10] under the same conditions assumed in the examples of the offline scheduler, and a brief discussion is made. The number of times the high-quality sensor is used and the difference in p*_average between the two schedulers are also considered.

To cope with the trade-off between communication cost and estimation quality, one needs a cost function, as discussed in Section 2.5.1. The function p_average(k) has the same interpretation as in (2.27): the average trace of the covariance matrix P(k). As in the examples of Section 2.5.1, only the value of P_{0,0}(k) is used to calculate p_average(k).

To be precise, the computation time taken for higher maxD should also be taken into account. However, up to maxD = 10 the computation time for each tree search was seen to be less than 0.1 s, so it can be neglected.

Recall that the parameters used were obtained in Section 2.5.1. It is reasonable to expect that, for ratios W/d higher than the ones established in Section 2.5.1, there are values of maxD for which the scheduling sequences use the webcam to perform localization. The boundary on W/d was not derived for this report; it is left as future work.


Figure 2.13: The function p_average(k, maxD) and the usage of the high-quality sensor for covariance-based scheduling with delay d = 3 and model 1.


Example: Model 1

This first example deals with model 1, designed in Section 2.2. For this model, the performance is compared in terms of p_average(k) and of the cost function V_T, for the process noise W and delay d considered in the examples for the offline scheduler. Figure 2.13 shows the function p_average(k) for different values of maxD when the delay is d = 3 and the process noise is W = 0.2; in this case no communication cost is considered. For each maxD, the number of times the high-quality sensor is used is shown. P(k) was computed over 200 iterations, so kmax = 200. As can be seen in Figure 2.13, the covariance-based scheduler performs better for almost all values of maxD in terms of estimation quality. Moreover, the number of times the high-quality sensor is used is lower than in the offline case (N* = 1 means M = 200). One can thus conclude that, even when adding a communication cost, the performance criterion V_T will always be smaller in the covariance-based case.

Figure 2.14 shows the case where the delay is d = 7. As seen in Section 2.5.1, the optimal switching period is N* = 7, and the function p_average(k) always takes smaller values in the offline case. Furthermore, since the optimal period N* = 7 implies M = 28, it is impossible to obtain a better performance V*_T with the covariance-based scheduler for these values of W and d.


Figure 2.14: The function p_average(k, maxD) and the usage of the high-quality sensor for covariance-based scheduling with delay d = 7 and model 1.

One should also notice that maxD = 1 and maxD = 5 give the same p*_average and the same number of high-quality sensor uses.

Example: Model 2

This example compares the offline and covariance-based schedulers when applied to model 2, described in Section 2.2. As in the first example, the parameters described in Section 2.5.1 are used.

The first case considered is d = 3 with process noise W = 0.003; the results are shown in Figure 2.15. As can be seen, the covariance-based scheduler cannot achieve better estimates than the ones given by the offline scheduler, and the lowest value of the function p_average(k) in the offline case is obtained for N = 1. Since the number of times the sensor is used in the covariance-based case is always lower than kmax, one can already foresee that, when the performance criterion includes a certain communication cost, it will be better to use the covariance-based scheduler instead of the offline one for delay d = 3. This is shown in Figure 2.16, where it can be seen that for maxD = 2, i.e., a depth-2 tree search, one gets better estimates using the web-camera than not using it; the communication cost used was λ = 0.3. Figure 2.17 illustrates the function p_average(k) when the delay is d = 7 and the process noise is W = 0.003.


Figure 2.15: The function p_average(k, maxD) and the usage of the high-quality sensor for covariance-based scheduling with delay d = 3 and model 2.


Figure 2.16: The performance function V*(k, maxD) and the usage of the high-quality sensor for covariance-based scheduling with delay d = 3 and model 2.


Figure 2.17: The function p_average(k, maxD) and the usage of the high-quality sensor for covariance-based scheduling with delay d = 7 and model 2.

As can be seen, for N* = 5 in the offline case the number of times the high-quality sensor is used is M = 40, since kmax = 200, and the estimation quality is higher with the offline scheduler. However, as in the other examples of this Section, since for maxD = 10 the value of M is 38, one can find a value of the communication cost λ that makes the performance criterion V*_T(k, maxD) lower for the covariance-based scheduler; this occurs for λ = 0.002, as shown in Figure 2.18. One can conclude that, for the first model, it is not possible to achieve better performance with the covariance-based scheduler than with the offline one when the delay is d = 7. On the other hand, for both models with delay d = 3 the system performance can be improved by using the covariance-based version instead of the offline one, and for the second model this also holds for the high delay d = 7.

Table 2.2 compares the two scheduling approaches for the two models and different communication costs. The covariance-based approach is better than the offline one for both models when the communication cost is λ = 0.3, since it gives scheduling sequences with lower high-quality sensor usage. In terms of estimation accuracy p_average, for the optimal scheduling sequences, model 1 gave lower p_average values than model 2, i.e., more accurate estimates.


Figure 2.18: The performance function V*(k, maxD) and the usage of the high-quality sensor for covariance-based scheduling with delay d = 7 and model 2.

              Model 1 (W = 0.2)      Model 2 (W = 0.003)

   λ = 0      Offline                Offline
   λ = 0.3    Covariance-based       Covariance-based

Table 2.2: Optimal sensor scheduling approach for models 1 and 2 considering different communication costs λ.


Chapter 3

Experimental set-up

This chapter discusses all the implementations made in order to experimentally validate the localization algorithms and tools described in the previous Sections. The chapter is divided into two Sections: first the hardware description, then the software. In particular, the designed and developed wireless sensor network testbed, the ultrasound localization system, the vision-based system and the mobile agent used are described. Figure 3.1 gives an overview of the operation of the network and all the hardware involved.

3.1 Hardware

3.1.1 Wireless Sensor Network Testbed

This Section presents the work developed towards the design, development and implementation of a Wireless Sensor Network testbed. First an overview of WSNs and WSN testbeds is given, followed by the state of the art, the design approach taken, its implementation and the necessary validation.

Background

This Section gives an overview of WSNs and WSN testbeds, focusing on the state of the art of WSN testbeds and showing their features, architectures, communication characteristics and the hardware and software used.


Figure 3.1: Overview of the experimental set-up and operation based on the wireless sensor network testbed.

Wireless Sensor Networks

Nowadays, feedback control is applied in systems performing a broad range of tasks, such as production lines, airplanes, cars, electric plants, satellites and health-care [14]. Since the early 1930s there have been developments in engineering towards better control of processes. These theories came to be called "classical control", with the control implemented in hardware and in continuous time. In the mid 1950s "digital control" was introduced, where computers became responsible for closing the loop (discrete-time control). After the 1980s, due to communication link improvements, "networked control" appeared, where a group of computational units is used in a cooperative and decentralized fashion to perform a certain task. In the beginning of the 21st century, novel control theories were introduced to cope with a new technology, wireless, originating "wireless control" [15]. The main characteristic of these systems is that the links between sensors and controllers, and between controllers and actuators, are wireless. Reference [17] presents the development path of WSNs since 1994, when


DARPA funded research on "Low power wireless integrated microsensors", followed by an announcement from MIT in 2003 stating that WSNs are one of the 10 technologies that will have the highest influence on the future. We are currently in 2008 and, from all the research being made in this area (IEEE Signal Processing, Robotics, Communications), one can see that a lot of work remains to be done. As the forecast presented in [14] shows, the number of wireless sensors and embedded devices in the world will be more than 1 trillion, fitting the idea that we can connect everything using wireless networks. These networks can be applied to solve and/or ease tasks that are nowadays performed by cabled solutions, and are also creating novel applications. Examples of applications, as seen in [14], [17] and [18], are:

• Wireless mining ventilation control.

• Wireless control of flotation process (Industrial monitoring and processcontrol).

• Vehicle fuel efficiency with networked sensing (automotive).

• Disaster relief support using mobile sensors.

• Surveillance with networked autonomous vehicles.

• Environmental monitoring (Terrestrial and aquatic monitoring).

• Habitat monitoring.

• Security and Defence of airports, stations, buildings, etc.

• Military - Information exchange, sniper detection, mine detection, etc.

• Domotics.

• Health care.

Among the advantages of wireless networks, one can easily see that cost (wiring and installation work) is reduced, flexibility is increased (fewer physical design limitations, more mobile equipment, faster commissioning and reconfiguration) and reliability is improved (no cable wear and tear and no connector failures). One can also point out disadvantages of this type of network, which are the current challenges of the research community. It is shown in [14] that security, reliability, lack of knowledge, cost, lack of commercialized solutions and low and slow data transmissions are the main drawbacks of this technology at the moment. Many of these features are due to


the characteristics of wireless communications: low computational capability, low energy storage, large variations in connectivity, low bandwidth, delays and packet losses, and a communication theory that is not yet well developed [15]. In [15] and [16], some solutions for using WSNs to perform control are given:

• Communication protocols suitable for control ([19], [20] and [21]).

• Control applications that compensate for communication imperfections ([22], [23] and [24]).

• Cross-layer solutions with integrated design of the application and communication layers.

In wireless control, communication and control are thus always intertwined, so solutions in both areas have to be sought in order to improve WSNs employed for this type of task.

WSN testbeds have been built so that the research community can test control algorithms, power management solutions and protocols on WSNs. These testbeds are composed of a number of wireless nodes deployed indoors or outdoors according to the needs of the tests to be performed.

A wireless sensor network testbed was developed at KTH; its design, development and implementation are discussed in the following Sections.

State of the art

Figure 6.3 shows some WSN testbeds recently developed and successfully deployed and tested, in several locations and with different features and settings. The references shown are websites and [26]. Various possibilities can be seen regarding the design of the testbeds.

Features

Most testbeds share the possibility of

• remote programming and debugging

• monitoring and real-time interaction

• logging

• network administration and management


This subset of features allows running experiments with data collection and control over events. Additionally, batch mode, scheduling, quotas and support for multiple, simultaneous users are also common and interesting characteristics. Scalability is also a concern, although the importance given to it varies: it is extremely important for the SenseNeT developers [33] but not as much for MoteLab [32].

Architecture and Communication

Typically, the testbed architecture consists of a control station connected to one or more gateways, which provide the interface with the motes. Users connect to the testbed (possibly to all its levels) via the control station.

Depending on the links used between the various levels, the testbed can have a more or less hierarchical setup. The choices of network devices and channels for communication also allow different levels of flexibility (see [35] for an example of a flexible testbed).

Starting at the top, the communication between users and the control station is normally done through an existing LAN and, possibly, through the Internet. This allows users to access the testbed by simply using a web browser to connect to the server set up in the control station.

The connection of the control station to the gateways is done with various technologies. If motes are acting as base stations, then USB (serial) is used [34, 36]. A particular and interesting case is that of the Deployment Support Network by ETH Zurich [27], where Bluetooth is used (via BTnodes [37]). Ethernet is used especially with gateways with some processing power and when such a backbone is already installed (see, for instance, [32]). If mobile nodes are used (e.g., robots), then 802.11 is used when gateways (e.g., Stargates) have such a possibility [30, 31].

Motes connect to gateways depending on which type (and how many) of the latter is deployed and on which tasks are expected to be performed over the back-channel. If a mote acting as a base station is used, then radio is the channel employed [33, 36]. Otherwise, USB is usually utilised [30, 35].

Hardware and Software

Wireless nodes

The motes usually chosen are Tmote Sky or Mica2/Z. There are other options, but they are not normally considered unless the testbed is required to support more than one kind of mote. Motes almost invariably run TinyOS. For network-wide programming, Deluge [38] is commonly used.


Gateways

Gateways are often Tmote Connect, Crossbow MIB-600 or Stargates (running Linux). Sometimes expansion cards are used to provide other possibilities for the communication links, e.g., 802.11. This is common when robots (the Acroname Garcia being a common choice) carry motes, acting as mobile nodes.

Control station

Depending on the functionality required, the control station can be more or less well equipped. Linux is widely used. Web servers use Apache or others; databases are built with MySQL; scripts in various languages are used as management or experimentation tools.

Design

This Section presents the steps taken to reach a design that fits the KTH S3 research team requirements. Systems Engineering tools were used.

System Breakdown Structure

The System Breakdown Structure (SBS) is a diagram built to organize, divide and set a hierarchy for the processes and products that comprise the system architecture. The SBS is then used to help project development and the organisation of team tasks. For the construction of this diagram, the IEEE standard presented in [26] was used. The SBS is part of the Systems Engineering Management Plan (SEMP), also described in that reference. The diagram presented in Figure 6.4 shows the SBS. The top level presents the identification of the system to be created: "Development of a WSN Testbed". The second level corresponds to the products of this system: the technological part, i.e. the WSN testbed itself (in grey), and the systems engineering processes that take place during the system life cycle. The third and last level shows all the components (sub-products) of the WSN testbed. Under each product or sub-product, all the related tasks and topics are detailed.

Requirements

This Section presents the requirements identified for the WSN testbed. The fields of interest of the KTH S3 research team are first discussed and then the


requirements are presented.

WSN Testbed fields of action and demonstrations

The fields, and the respective specific demonstrations to be performed on the testbed, are described next.

• Multi-robot Systems

– Localisation

– Coordination

• Networking

– Routing

– Power Control

– Radio Propagation Models/Measurements

• Control/Estimation over Wireless Networks

– Consensus Filters

– Control over wireless link

– Tracking (of objects/persons)

• Security

WSN Testbed Requirements

The requirements listed next are based on the state-of-the-art analysis and the fields of interest listed in the previous Sections. In order to be considered a state-of-the-art testbed, it should be capable of:

• Remote network-wide programming and debugging

• Event and (collected) data logging and trend monitoring

• Real-time interaction and batch experiments

• Network management and administration - link mapping, nodes status

• Support for multiple users with scheduling and quotas

• Easy to use - web based user interface


• Expandability - more and/or different sensors (including cameras)

• Scalability

• Source of power supply control and fault injection

• Actuation and mobile nodes - temperature, humidity and light controllers; robots; door and window actuators

• Generate different environments - meet Swedish industry/government needs

Additionally, the network should cover the corridors and possibly a few office rooms. Cost should also be minimised, as well as human effort.

An external power supply was also needed, meeting the following requirements:

• Provide external power supply to the motes.

• Possibility to use batteries and external supply at the same time andto switch between them if desired.

• Easy to deploy in any place inside a building.

• Possibility to have different structural configurations and different numbers of motes.

• Reduced price.

Implementation

This Section discusses the solutions adopted to implement the testbed, the power supply and the wireless programming.

Solutions

We divided our approach into two parts: the solution to be adopted at the moment, called basic, and the complete solution. The basic solution fulfills some of the requirements pointed out in Section 3.1.1, but not those where actuators are required. Moreover, some testbed capabilities will not be available, since servers will not be put to work in this first part.


Basic Solution

The basic solution includes only a portion of the complete testbed, both in the fulfillment of requirements and in the number of motes deployed. This solution was implemented to be used in this MSc thesis project. The proposed approach is as follows.

• Set up a group of motes in (properly adapted) boxes on the ceiling of the KTH S3 corridors (Network 1 - 16 motes) and the MSc Students/Visitors room (Network 2 - 8 motes). Use the floor plan (see Appendix 6) to determine where to deploy the motes; look out for energy consumption (reduce transmission power) and possible traffic problems.

• Configure two computers (running Linux) to act as the central computers, i.e. where the mote acting as base station is connected. Two computers were configured in order to have two different networks working, since two different projects had to be developed on the testbed.

• Have ultrasound sensors connected to motes, and a mobile agent, in order to perform localization on the testbed. The mobile agent should be an electric RC car, since it is a cheap solution to start with. This is only available on network 1.

• Work with Deluge so that wireless, network-wide programming can be done.

• Implement a sleeping mode for the motes, i.e. complete or selective testbed power-down functionality.

• Create radio messages for debugging (use ready-made software for logging).

To implement the required VCC power supply of 5 V, the following approach was taken. The developed solution uses the USB connector of the Tmote Sky. Since the Tmote is already prepared to be supplied power through this connector (5 V) and by batteries at the same time, without causing any harm to the mote, this was considered the optimal solution for our problem. A 5 V VCC adapter is connected to an l meter long cable which, every d meters, has a female (type A) USB connector derivation that connects to a Tmote, supplying the 5 V. Tests were made to verify the integrity of the Tmote.


Figure 3.2: Standard KTH wireless sensor network testbed power supply

In each External Supply Network solution there has to be:

• An l m long cable.

• l/d female USB (Type A) connectors.

• l/d − 1 derivation connectors.

• One VCC adapter with an output current capable of supplying the consumption of l/d motes. Each Tmote consumes 23 mA. Considering that sensors and actuators can be plugged into the Tmote, the right adapter has to be chosen to supply the network. The cable adapter plug accepts different types of adapters.

The length l and connector step d depend on the application. A standard configuration was made of:

• 10m length cable (l=10).

• 1m connector step (d=1).

With this solution the user has the freedom to choose the desired configuration shape (circle, square, rectangle, etc., in 2D or 3D) and the number of motes for each application.
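As a quick check of the adapter sizing rule above, a minimal MATLAB sketch for the standard configuration (the 23 mA figure is the Tmote consumption quoted above; sensor and actuator loads are excluded):

% Minimum adapter output current for the standard configuration
l = 10;                        % cable length [m]
d = 1;                         % connector step [m]
I_adapter = (l/d) * 23e-3      % = 0.23 A for l/d = 10 motes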

Figure 3.2 presents the schematic of a standard External Supply Network.


Complete Solution

The complete solution presented in this Section is based on the analysis of requirements presented in Section 3.1.1. The characteristics addressed in the basic solution should be maintained entirely, except for the increase in the number of devices. Nevertheless, several more features are required, which increases the complexity of the system.

The architecture should be roughly the same as in the basic solution. Some motes should act as base stations, connected to computers via USB. These computers should act as control stations and be connected through the private Ethernet network (alone and/or with 802.11, if desired) already present on the floor.

The number of base stations and computers should be determined through an analysis of the floor plan, as for the basic solution. However, one computer should be configured to run as a web, database and management server; this shall be the central computer, where full control over the system is permanent.

The following actions should be possible to perform through the web (using the user interface that is being developed) or locally.

• Experiment planning, scheduling and interaction

• Data storage and analysis

• Run the user interface which is currently being developed.

• Allow access to motes along with debugging and programming

• Network control, supervision and management

In terms of actuators, the testbed should have:

• Robots - 3 or more, due to localization needs; possibly Acroname Garcia, the preferred choice among state-of-the-art testbeds

• Mote Cameras - CITRIC mote camera, see [39].

• Temperature, humidity and light actuators

• Door and window actuators for security issues and environmental state changes



Figure 3.3: Testbed deployment on the 6th floor of the Q building at KTH.

Deployment

Figure 3.3 shows the current deployment of the testbed. There are two different networks, in order to carry out different work: network 1 is used to perform localization, and network 2 is being used to test detection system algorithms. The experimental validation presented in Chapter 4 was implemented on network 1. One can also see that three cameras were installed, together with two control stations to control and program each network, and a mobile agent. The characteristics of the cameras, mobile agent and ultrasound sensors are presented in Section 4. Figures 3.4 and 3.5 show the sensor nodes deployed in the corridor of the 6th floor of the Q building at KTH and the Tmote Sky wireless node in its casing with the ultrasound sensor.

3.1.2 Ultrasound sensor

The ultrasound sensing system is based on a cluster of ultrasound transmitter microphones that emit ultrasonic signals, an ultrasound receiver microphone that receives the signals sent by the transmitters, and two wireless nodes that interact with them. In order to interact with each sensor, a signal conditioning circuit had to be developed.

The transmitter design structure is shown in Figure 3.6. From testing, it is assumed that the span of an ultrasound microphone is a cone with an angle of 60°. Six microphones were placed 60° apart (Figure 3.6, top view), covering all the area around the transmitter cluster. The microphones are also placed with a 30° displacement angle between the ground plane and each microphone centroid line, as seen in Figure 3.6 (side view), giving a 3D coverage area of a half sphere. With this design, all the area around the transmitter is covered.


Figure 3.4: Picture showing network 1 of the testbed in the corridor of the 6th floor of the Q building at KTH.

Figure 3.5: Picture showing the Tmote Sky wireless node in its casing with the ultrasound sensor.


Figure 3.6: Ultrasound transmitter cluster top view and side view.

For one microphone placed vertically, if the ceiling height is 2 m, the coverage area of just that microphone is a circle of area A = π × r², where r is the radius, at the ceiling, of the cone described by the microphone span angle: r = tan(30°) × 2, where 30° is half the span angle. Performing the calculation, the coverage area is about 4.2 m².
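The calculation can be checked with a short MATLAB sketch:

% Coverage of one vertically mounted microphone (2 m ceiling, 60 deg span)
r = tand(30) * 2;     % radius of the covered circle at the ceiling [m]
A = pi * r^2          % coverage area, approximately 4.2 m^2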

The transmitter circuit has to modulate a 40 kHz signal for the input of the microphones, 40 kHz being the working frequency of the ultrasonic microphones. Also, since the ultrasonic signal is sent through seven microphones, the signal amplitude has to be high; the value chosen was 12 V, supplied by a 12 V, 2.2 Ah battery. The circuit has as input a signal from the wireless node, which enables the transmission of the ultrasound wave from the microphones: the wave is enabled when the input signal is low (0 V) and disabled when it is high (3-5 V). The designed and developed circuit is presented in Figure 6.2.

The ultrasound receiver circuit works as follows. The ultrasonic signal received by the receiver microphone generates an electric signal, which is then filtered and amplified. After this, one needs to evaluate whether the electric signal is due to an ultrasonic signal or just random noise. Knowing a priori the usual shape of that signal, a threshold value was needed to perform the signal identification. When this system was first developed, the wireless node ADC was used to check when the measured signal was higher than this threshold value, but problems arose due to the high variation of


the readings. A hardware comparison, using a comparator based on operational amplifiers, was then designed; this gave a low-varying signal with low noise, so this solution was adopted. The threshold value was set to 1.8 V, a value obtained by testing the system. Once the signal passes the comparison, the generated signal is binary, taking a high value (5 V) or a low value (0 V). This signal is then given as input to an interrupt input port of the wireless node, in order to keep track of the arrival of an ultrasonic signal. The circuit of the ultrasound receiver is shown in Figure 6.

Having described the transmitter and receiver circuitry, the characteristics of the wireless nodes connected to each are presented next.

The transmitter node has to be capable of generating a binary signal for the transmitter circuit. This is done by clearing the output pin connected to the transmitter circuit; the pin selected was pin 5 of the wireless node (see [13] for details). The transmitter wireless node also has to be able to send a message at the same time as the electric signal. This is explained in Section 3.2.

The receiver node just has to enable interrupts on the selected port, which was pin 7 of the expansion connector (port GIO0, see [13] for details). It also has to be able to receive the message from the Tx node in order to calculate the time-of-flight. The Tx and Rx procedure is fully detailed in Section 3.2.

3.1.3 Vision based system

The vision-based system is composed of two modules, the web camera (sensor) and the processing unit, both presented next. The video acquisition, frame grabbing and subsequent image analysis have to be done on a dedicated processing unit rather than on the fusion center processing unit, since this set of tasks blocks the computer; otherwise the system could not ask for an ultrasound value while computing a position value for the respective image.

Web-camera

The camera chosen for this system was the Logitech Quickcam Fusion web-camera, with the following characteristics:

• Gross sensor resolution of 1.3 megapixels

• CMOS optical sensor


Figure 3.7: Logitech Fusion web-camera.

• Color

• 1280×960 image size

• USB interface, powered through USB

A web-camera was chosen since it was the cheapest solution allowing a simple and quick connection to the processing unit while enabling image processing with good image quality. Figure 3.7 shows the web-camera used.

Processing Unit

The processing unit has to be able to:

• Interface with the web camera through USB.

• Acquire video, grab frames and process each frame to obtain the world coordinates of the position of the object.

• Send the position of the robot using the UDP communication protocol in MATLAB.

The processing unit is a laptop equipped with an IEEE 802.11g wireless card in order to communicate with the fusion center. The wireless sensor network could also be used for the data exchange.


Figure 3.8: Mobile agent. Electric RC car controlled by a human operator.

For this, one would need to set the base station node to listen to position measurements given by the vision system processing unit and also to have a dedicated node connected to that processing unit. For simplicity of construction, the IEEE 802.11g wireless solution was kept instead. As a future development, a solution based only on the WSN's IEEE 802.15.4 protocol could be developed.

3.1.4 Mobile agent

The mobile agent used was an electric radio-controlled car, equipped with one Tmote Sky node, the transmitter circuitry and one 12 V, 2.2 Ah battery. The car radio controller has a range of approximately 20 m. Its function is to simulate an autonomous robot moving and sending ultrasound pulses and RF messages through the wireless node in the corridors where the testbed is deployed. The control is done by a human operator; in the future the car will be replaced by an autonomous robot. Figure 3.8 shows the mobile agent.

3.1.5 Fusion Center

The fusion center is composed of one processing unit, which interfaces with the sensor network through a wireless node connected to it (the base station node) and with the vision-based system processing unit through a wireless IEEE 802.11g connection. The processing unit is a laptop equipped with an IEEE 802.11g wireless card to communicate with the vision system, and USB ports enabling the wireless node connection.


3.2 Software

This Section describes all the operational procedures and algorithms implemented for each module of the localization system: the ultrasound system, the vision-based system and the fusion center.

3.2.1 Ultrasound system

The procedure followed to develop the ultrasound system can be divided into two parts, Transmitter (Tx) and Receiver (Rx), and the implementation of each is presented in its own Section. The general procedure is based on sending a message from the Tx node to the receiver node and, at the same time, an ultrasound signal. Considering that the message is received practically instantaneously by the Rx node (it travels at the speed of light), the ultrasound wave emitted by the Tx node is later captured by the Rx microphone, which triggers an interrupt on the receiver node. The time difference of arrival between the two signals (RF message and ultrasound) is then used to calculate the distance between the two nodes. The time value is sent to the base station node which, after filtering the data (Kalman filter), gives the estimated position (see Section 3.2.3). In order to perform localization, three receiver nodes are needed. The algorithm is based on the fact that the speed of sound is approximately constant and equal to 340.29 m/s, so multiplying the time difference of arrival by the speed of sound yields the distance between the nodes. As shown in [10], the speed of sound can be better approximated if the temperature and humidity are known; for now this was ignored, since the errors obtained are less than 3% of the measured distance (see Section 4.1).

Figure 3.9 presents the model explaining the global procedure of the ultrasound system.
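As a small illustration of the distance computation, a minimal MATLAB sketch follows; the numbers are only an example, and the temperature-corrected speed of sound in the comment is the standard first-order approximation alluded to in [10], not the value used in the experiments.

dt = 2500e-6;         % example time difference of arrival [s]
c  = 340.29;          % speed of sound used in this work [m/s]
% with a known temperature T [deg C], a better value would be c = 331.3 + 0.606*T
distance = dt * c     % distance between Tx and Rx, here about 0.85 m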

Transmitter

The transmitter node has to be capable of:

• Generate and send a message (with a tag showing that it is the ultrasound message) and, at the same time, clear pin 5 on the expansion connector in order to generate an ultrasound signal in the ultrasound circuit. This is done periodically in order to help in the performance test evaluation on the receiver node; if power constraints were placed on the mobile node this would not happen, so it is assumed that there are no power constraints on the mobile agent.


Figure 3.9: Interaction between ultrasound receiver and transmitter modules.

The tinyOS functions used by the transmitter are:

• Timers - Millisecond resolution - Pin set and clear.

• Message - Message Sending

• Counter - Microsecond resolution - Evaluate the timer accuracy

• Printf - Debugging

The transmitter operation can be seen in the flow diagram in Figure 3.10.

An important feature of the ultrasound Tx is that the period of the message and ultrasound transmission can be lowered to 100 ms; for values below this, the receiver starts receiving inaccurate values of the time difference. The period used for all the tests was 250 ms, in which the ultrasound is emitted for 100 ms followed by a 150 ms wait before sending again.

Receiver

The receiver node has to be capable of:

• Receive the message sent by the transmitter node and decode it. Start the counter.

• Have a comparator that compares the voltage on the ultrasound channel with 1.8 V. The output is 1 if higher than 1.8 V, 0 if lower.


Figure 3.10: Transmitter flow diagram.

• Enable interrupt on GIO0 port in the MSP430.

• Generate interruption and tag the arrival time of the ultrasound.

• Calculate the time difference of arrival between message and ultrasound.

• Broadcast a message with the calculated value.

A value of 1.80 V was chosen for the threshold because, in the tests made, when an ultrasound pulse arrives the signal is always higher than this value. This value can always be changed in order to improve the accuracy


of the detection point, since it establishes the margin for declaring the reception of an ultrasound pulse.

The tinyOS functions used by the receiver are:

• Timers - Millisecond resolution - Accuracy and performance tests.

• Message - Message Reception.

• Counter - Microsecond resolution (32 kHz) - Calculate the time difference of arrival.

• Interrupt - Tag the time when ultrasound pulse received.

• Printf - Debugging.

The receiver flow diagram is presented in Figure 3.11. Since the counter clock is 32 kHz, the minimum distance the node can resolve is around 1 cm, assuming a speed of sound of 340.29 m/s.
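A one-line MATLAB check of this resolution figure:

% Range resolution implied by the 32 kHz counter clock
res = 340.29 / 32768   % minimum resolvable distance, about 0.01 m (1 cm)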

3.2.2 Vision based system

This Section presents the algorithms and tools used in the vision-based system. As seen before, the processing unit has to be able to:

• Interface with the web camera through USB.

• Acquire video, grab frames and process each frame to obtain the world coordinates of the position of the object.

• Send the position of the robot using UDP communication in MATLAB.

Image Processing

The video acquisition, frame grabbing and image processing were all primarily done with the MATLAB Image Processing Toolbox, because it is the easiest and most straightforward software tool to use. All the MATLAB functions were chosen based on several quality tests made during the implementation of the algorithm. As seen while running the tests, the total processing time was, in the worst cases, between 8 s and 11 s, which was too much given the system requirements.


Figure 3.11: Receiver flow diagram.

To reduce the time spent on video acquisition and frame grabbing, a new approach based on OpenCV (see [42] for further details) was implemented. With the OpenCV solution, the time spent analysing the image and obtaining the object position values was consistently 1 s.

To improve the robustness of the image processing algorithm, a way was needed to remove from the image the parts where the robot cannot be found. The first time the system is run, a white rectangle marked on the floor limits the area where the robot can be found, which we call the clear path. The image is stored as cutimage.jpg and is then used to eliminate all the surroundings of the clear path. Clearly, if the position of the camera is changed, one has to perform this step


again. This procedure is seen as the camera calibration step, and it was performed to reduce the computational time of the image processing algorithm by reducing the detection area. Another feature of the algorithm is that, to be detected, the object necessarily has to be a round white circle with a diameter larger than 5 cm. A white circle with a diameter of 20 cm was placed so as to cover the whole top of the mobile agent.

The flow diagram of the image processing is presented next, together with a description of each function used.


Figure 3.12: Image processing flow diagram

• Image Acquisition


Figure 3.13: Acquired image with the mobile agent. Image taken in the corridor of the 6th floor of the Q building at KTH.

The image acquisition is only made if the threshold of the grayscale image is above a certain value. This value was set to 0.4 because, above it, the contrast of the image is sufficient to perform the localization of the lines and of the object in the image. For this, the MATLAB function graythresh was used.

Figure 3.13 shows an example of an acquired image with the mobile agent.

There are two ways of acquiring the image, depending on the scheduling scheme used:

– Periodic triggering - the signal enabling the picture acquisition from the fusion center is periodic.

– Asynchronous triggering - the signal enabling the picture acquisition from the fusion center is not periodic.

The image acquisition and storage are done using OpenCV library functions. C++ code was created which acquires, shows and stores an image as a JPEG file. The code is presented next and illustrates how the image acquisition, display and storage are processed.


With tests made using the MATLAB function cputime, it was possible to evaluate that the delay between calling the function and having an updated image available was 0.75 s. This program has to be started in the Linux terminal before the MATLAB image processing algorithm is called.

// Initialize video capture from the camera
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );

(...)

// Create a window in which the captured images will be presented
cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );

// Show the image captured from the camera in the window and repeat
while( 1 ) {

    // Get one frame
    IplImage* frame = cvQueryFrame( capture );

    (...)

    // Store the image under the filename "VideoFrame.jpg"
    sprintf(filename, "VideoFrame.jpg");
    cvSaveImage( filename, frame );

    // Show the image in the window
    cvShowImage( "mywindow", frame );
}

// Release the capture device and destroy the window
cvReleaseCapture( &capture );
cvDestroyWindow( "mywindow" );

• Pre-Processing - Filtering and image resizing; edge detection. First the RGB image is converted to grayscale using the function rgb2gray, and the precision class is changed to double using the function im2double. After this, 5 pixels around the border of the frame are removed to help the segmentation task; with this approach the image borders are not detected as lines. We used the unsharp masking filter via the fspecial function, followed by edge detection with a Prewitt filter at level 0.02.


Figure 3.14: Cut image view. Binary image.

After this step, if it is the first time the system is used (k = 1) or if the camera has changed position, one has to detect the lines of the white rectangle drawn on the floor, as explained before; this is done in the Segmentation step. If it is not the first time the system is used and no changes were made to the camera position, one can go directly to the Apply cut image mask step.

• Segmentation - Get the border of the rectangle in which the robot is located; the functions HoughTransform, houghlines and houghpeaks are used.

• Cutting - Cut the image; everything outside the rectangle has to be cut in order to help the robot detection. A binary image is created and stored, with pixel value 1 inside the rectangle and 0 outside. The result of this step can be seen in Figure 3.14.

• Apply cut image mask - In this step the cut image is multiplied by the smoothed image (output of Pre-Processing) and another edge filter is applied (Canny filter at level 0.08). The area outside the rectangle is thereby removed, i.e. all its pixels now have value 0.

• Robot image coordinates - Calculation of the robot coordinates;

– Calculate the object boundaries on the image using bwboundaries();


– Label the bounded objects;

– Filter the image to remove noise using erode and dilate filters;

– Label the areas of interest;

– Eliminate the areas which are not circles (eccentricity higher than 0.8);

– Calculate the centroid of the circular labeled objects;

• Position transform and storage of M. This step is only performed the first time (k = 1); otherwise, for k ≠ 1, the next step is performed. In this step the following is done:

– Calculate 4 points of interest in the image: the corners of the rectangle. Since in the image the area is no longer a rectangle (the top rectangle line is much smaller than the lower rectangle line), the calculation of the corners is not trivial. A filter was applied to find the pixels matching the pattern of (3.1); this procedure is called matching. When detecting the matching points, one looks for the points with the lowest and highest i coordinate (left and right top corners, respectively). Finding the lower line corners was straightforward: one only has to look for the points closest to the lower corners of the image.

pattern = [ 0 0 0 ; 1 1 1 ; 0 0 0 ]                 (3.1)

– Transform image coordinates to world coordinates. This is done using the following coordinate transformation:

   ( w i )   ( m11 m12 m13 m14 )   ( X )
   ( w j ) = ( m21 m22 m23 m24 ) × ( Y )            (3.2)
   (  w  )   ( m31 m32 m33 m34 )   ( Z )
                                   ( 1 )

where i and j are the x and y pixel coordinates of the object, m_ij are the entries of the transformation matrix M, w is the focal lens scale factor, and X, Y and Z are the world coordinates of the object. One can set m_i3 = 0 since Z = 0 (the object is in the ground plane). It is then easy to see that


the value m34 = 1. The matrix M then has 8 unknown entries, so one needs 4 points (i_k, j_k) in pixel coordinates to solve the resulting system of 8 equations in the unknowns m_ij. Four points suffice because each point yields 2 equations: w can be written as a function of X and Y only and substituted into the two equations above it. This can be written as

   w i_k = m11 X + m12 Y + m14
   w j_k = m21 X + m22 Y + m24
   w     = m31 X + m32 Y + 1                        (3.3)

One then solves the 8 equations given by the points (i_k, j_k), k ∈ {1, . . . , 4}, for the values m_ij.

In the end, the matrix M is stored to be used in the next steps; this reduces the computational time.

• Robot real coordinates

Having calculated the matrix M, it is easy to get the values of X and Y for a given centroid (i, j), since there are only two unknown variables and two equations: one substitutes w = m31 X + m32 Y + 1 into the first two equations of (3.3).

Figure 3.15 shows the final result of the image processing algorithm applied to the image presented in Figure 3.13. A sketch of the coordinate transformation is given below, after which the vision-based processing unit is described.
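The following minimal MATLAB sketch illustrates, in the notation of (3.2)-(3.3), how M can be estimated from the four corner correspondences and then used to map a centroid to world coordinates. The function names and the unknown ordering [m11 m12 m14 m21 m22 m24 m31 m32] are choices made for this illustration only.

% Estimate M from 4 pixel/world correspondences, following (3.3).
% pix (4x2) holds (i_k, j_k); wrl (4x2) holds the known world corners (X, Y).
function m = estimate_M(pix, wrl)
    A = zeros(8, 8); b = zeros(8, 1);
    for k = 1:4
        X = wrl(k,1); Y = wrl(k,2); i = pix(k,1); j = pix(k,2);
        % substituting w = m31*X + m32*Y + 1 into the first two lines of (3.3)
        A(2*k-1,:) = [X Y 1 0 0 0 -i*X -i*Y];
        A(2*k,  :) = [0 0 0 X Y 1 -j*X -j*Y];
        b(2*k-1)   = i;  b(2*k) = j;
    end
    m = A \ b;   % [m11 m12 m14 m21 m22 m24 m31 m32]
end

% Map a detected centroid (i, j) back to world coordinates (X, Y).
function XY = pixel_to_world(m, i, j)
    A  = [m(1)-i*m(7), m(2)-i*m(8); m(4)-j*m(7), m(5)-j*m(8)];
    b  = [i - m(3); j - m(6)];
    XY = A \ b;
end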

Note that the camera is assumed to have a top view, and not a lateral view as in the implementation. This was done to reduce the complexity of obtaining the position. Without this simplification, the coordinate transformation would be non-linear, as shown in Figure 3.16, where the web-camera span angle is θ and its pixel resolution is denoted by Δα. As one can see, the world resolution ΔXi corresponding to the pixel resolution Δα changes with the distance of the mobile agent to the camera according to a non-linear function, so ΔX1 ≠ ΔX2. This problem is not dealt with in this work; we assume that ΔX1 = ΔX2.

Processing unit

Data is sent from the vision-based (VB) processing unit to the base station (the fusion center (FC) processing unit) using the UDP protocol in MATLAB over the IEEE 802.11g wireless standard.


Figure 3.15: Mobile agent detection. The star shows the detected centroid of the mobile agent.

Figure 3.16: Vision based localization system with non-linearities.

The base station sends a message over the wireless link with value '1' when it wants information about the mobile agent position from the vision system. The vision system waits for this value; when it arrives, the image processing algorithm is called


and the position value is returned to the base station, also using the same protocol over the wireless link. The code of the implemented algorithm is presented next.

• VB Processing unit side

VB=udp('IP OF CS','LocalPort', 15005, 'RemotePort', 15004); %define port properties

fopen(VB); %open port

%Wait for action enabling from the Base Station:
%keep reading until a '1' sent by the Base Station is received (enabling signal)
reception=0;
while reception~=1
    Value=fscanf(VB,'%c',1); %read one character from the port
    reception=str2double(Value);
end

%Call the image processing algorithm, which returns X and Y
Image_Processing_Algorithm
reception=0;
A=[X;Y];

%Return the position value to the base station
fprintf(VB,'%d ',A); % write to the port

3.2.3 Fusion Center

The fusion center is composed of one processing unit, which interfaces with the sensor network through a wireless node connected to it (the base station node) and with the vision-based system processing unit through a wireless connection, as seen in Section 3.1. The operation of the base station node, the ultrasound reader, the vision-based reader and the implemented fusion center position estimator are presented next. A schematic with all the components involved, together with the data flow and its treatment, is presented in Figure 3.17.

Base Station Node

The base station node is responsible for receiving the messages from each receiver node in the wireless network and then sending their values, with the respective node IDs, through the serial port to the fusion center processing unit.


Figure 3.17: Data flow and treatment including the sensors and processing unit.

The base station only sends the values through the port once it has received a message from four nodes, so that the localization is made at the same time instant for all of them. At this node, a pre-selection of the nodes that should give their time measurements is also made: nodes giving time difference values higher than 14690 µs (≈ 5 m) are discarded. Since the largest distance between the transmitter node and a receiver, over all clusters of four motes (combinations of the 16 motes used), is ≈ 2.60 m, the 5 m threshold was chosen to cope with possible message collisions between closer nodes; this value can be adjusted according to the placement of the nodes in the network. Four values are received, but only three nodes are required, because the localization is made in 2D under the assumption that the mobile node is at Z = 0. The reason for taking values from four motes is that an algorithm can then put the farthest node, and the other unused nodes, into sleeping mode, since it is known that they will not be required to perform localization; this reduces the power consumption of the network. This is also needed because, if the three nodes are in the same X or Y plane, their readings have to be discarded, since the trilateration method has no solution in that case, as described in Chapter 2. The way to deal with this is described next.

The flow of the algorithm implemented in the node is:

1. Receive message k from node n.

2. If the value of node n ≥ 14690, discard the value. Otherwise keep the value and increase k.

3. If the values received come from three nodes located in the same X or Y plane, discard the value of the node with the highest time difference and wait for another reading.

4. If k = 4, send the four values through the serial port. If k < 4, go to step 1.


The message sent has the format d_k1 = Δt_k1, d_k2 = Δt_k2, d_k3 = Δt_k3, d_k4 = Δt_k4.

Note also that, if the nodes are too close, there could sometimes be a measurement sequence where the received distances are given by 4 nodes in the same X or Y plane. To cope with this, one can increase the number of distance values transmitted from the base station node to the fusion center.
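The 5 m threshold corresponds directly to the 14690 µs limit via the speed of sound, as this one-line MATLAB check shows:

% Distance corresponding to the base station's time-difference threshold
d_max = 14690e-6 * 340.29   % about 5 m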

Ultrasound reader

The method described in Chapter 2 and used for calculating the position of the object is the trilateration method; all the different methods for determining position are described in Section 2.1.2. The position is calculated in 2D, as discussed in the previous Section.

For each distance received, the procedure of the ultrasound reader algorithm implemented on the fusion center processing unit is as follows:

1. Add the node's characteristic offset value to the distance value. The offset value was obtained from tests done on each node, taking the average of the error, which was shown to have a low variance (see Chapter 4).

2. Calculate X and Y from the values given by three of the four nodes, applying the trilateration method (a sketch is given after this list). Since the node farthest from the target should have the largest error, the four values are compared and the highest is discarded. As discussed before, this also enables better power management of the network.
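The following is a minimal MATLAB sketch of this trilateration step, assuming the receiver node positions p (3x2) and the offset-corrected distances d (3x1) are known; subtracting the first range equation from the other two yields a linear system in (x, y). Note that the system becomes singular when the three nodes are collinear, which is why readings from nodes in the same X or Y plane are discarded, as described above.

% 2D trilateration from three anchors p (3x2) and distances d (3x1)
function pos = trilaterate2d(p, d)
    A = 2 * [p(2,:) - p(1,:); p(3,:) - p(1,:)];
    b = [d(1)^2 - d(2)^2 + sum(p(2,:).^2) - sum(p(1,:).^2);
         d(1)^2 - d(3)^2 + sum(p(3,:).^2) - sum(p(1,:).^2)];
    pos = A \ b;   % solution for [x; y]
end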

Vision based reader

As discussed in Section 3.2.2, a MATLAB program was implemented to request a position measurement from the camera and to receive this value. This program has to be able to trigger the algorithm developed on the vision system processing unit, as described in Section 3.2.2. This is done by sending a message with value = 1 over the wireless link, using the UDP protocol. The time between the request and its reception at the base station node is estimated to be at most 0.5 s. When the image processing is concluded on the vision system processing unit, the position of the mobile agent is sent back to the fusion center. The vision based reader MATLAB code implemented on the fusion center


processing unit is then responsible for storing the position value, to be used for the mobile agent position estimation according to the defined scheduler.

The code developed is as follows:

FC = udp('IP of VB', 'LocalPort', 15001, 'RemotePort', 15000);
% define port properties

fopen(FC); % open port

% Take a new measurement: writing 1 to the port triggers the vision algorithm
fprintf(FC, '1')

% Wait for the measurement and read the port into the string vector X
X = fscanf(FC, '%c')

% The values are stored as a string, with each character in ASCII form,
% so each character has to be converted to an integer.
X = double(X) - 48; % passing to integers

% Now each digit has to be given its original positional value. For example,
% if X = 1000, the leading digit 1 has to be multiplied by 1000 (10^3) and the
% remaining digits by decreasing powers of 10 according to their position; the
% character at position 12 (respectively 25) is used as the power of ten of
% the leading X (respectively Y) digit.

% Store position for X (l indexes the current measurement)
posX(l,1) = X(1,1)*10^X(1,12) + X(1,3)*10^(X(1,12)-1) + X(1,4)*10^(X(1,12)-2) + ...
    X(1,5)*10^(X(1,12)-3) + X(1,6)*10^(X(1,12)-4) + X(1,7)*10^(X(1,12)-5);

% Store position for Y
posY(l,1) = X(1,14)*10^X(1,25) + X(1,16)*10^(X(1,25)-1) + X(1,17)*10^(X(1,25)-2) + ...
    X(1,18)*10^(X(1,25)-3) + X(1,19)*10^(X(1,25)-4) + X(1,20)*10^(X(1,25)-5);

One should notice that, in order to test the system for higher communication delays than the ones presented before, a function was implemented in the base station processing unit to delay the usage of the position received by the vision based reader by an established value. Normally the vision based reading is received at the base station processing unit 2 s after the request. As will be seen in the Experimental Validation Section, the time delay used was d = 3, so the usage of the reading had to be delayed by about 1 s: only when 3 s have passed after requesting a position are the received values used. A minimal sketch of this delay function follows.
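A minimal sketch of such a delay function, assuming a timer tRequest was started with tic when the request was sent; the names are illustrative, not the exact implementation.

% Illustrative sketch: hold the vision reading back until d seconds have
% elapsed since the request, emulating a larger communication delay.
d = 3;                                  % total delay to emulate (s)
elapsed = toc(tRequest);                % time since '1' was written to the port
if elapsed < d
    pause(d - elapsed);                 % wait out the remaining time
end
% posX, posY can now be used by the estimator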


Fusion center position estimator

As described in the previous sub-sections, one has access to the position given by the two sensors through different networks: the ultrasound system communicates with the fusion center over a wireless sensor network (IEEE 802.15.4), and the web-camera communicates with the fusion center over a wireless link (IEEE 802.11g). Both values are thus available at the fusion center. The estimator presented in Section 2.4 then has to be implemented, taking into account which scheduler, offline or covariance-based, one wants to use (see Section 2.5 for details). The code was structured according to the flow diagram presented in Figure 3.18.

As one can see, the algorithm is based on five steps:

1. Initialization - Initialization of the Kalman filter and other required variables.

2. Define scheduler - In this step one chooses whether the offline or the covariance-based scheduler is used. If the offline scheduler is used, one has to choose the switching periodicity N of the vision based system (see Section 2.5 for details). It is assumed that when the scheduler is selected the model is also defined; one can choose between model 1 and model 2. A skeleton of this loop is sketched after this list.

3. Perform a measurement according to the scheduler defined for the current instant k, and store the position value (X, Y). This is achieved by calling the ultrasound or vision based reader algorithms previously described.

4. Apply the outlier rejection method. This method works as a filter to discard bad measurements: a measurement (X, Y) is discarded when the traveled distance between two steps k is higher than 3 meters. The worst case (largest distance traveled without a reading) occurs when the vision based system is used, whose delay, as seen in Section 3.2.2, is 3 s. According to Table 2.1, the highest process noise is 1 m/s for model 1 and 0.09 m/s² for model 2, and one can assume that the velocity and acceleration of the mobile agent when using model 1 and model 2 do not exceed these values. With the maximum velocity of the mobile agent set to 1 m/s, the maximum traveled distance on each coordinate is 3 m in 3 s, so 3 meters can be considered an appropriate threshold to classify a bad position measurement when the web-camera is used.

Figure 3.18: Flow diagram of the algorithm implemented on the fusion center processing unit.

When the web-camera is not used, the threshold should be set to 1.5 meters: the ultrasound system delay is 1 s, so a measurement can be considered bad if the displacement is above 1.5 meters.

In the case of a bad measurement, the values of X and Y are obtained by assuming that the position of the mobile agent has not changed: Xk = Xk−1 if Xk is considered an outlier, and the same is done for Y (a sketch of this test is included in the loop skeleton after this list). Other methods could be applied to handle a sensor malfunction; one can imagine that using a movement predictor to estimate the position could improve this method. This can be seen as a future improvement to this work.

Figure 3.19: Graphical User Interface created for the user to be able to watch the position of the robot with spatial references.

One should also note that, if the first measurement is bad (no real value), the position is set to (0,0).

5. The final step is to estimate the position at instant k using the Kalman filter proposed in Section 2.4. In the end we have an estimated position (X, Y), which is assumed to be more accurate than the ones given by the sensors, as was seen in Sections 2.5.1 and 2.5.2.
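To summarize steps 2-5, the following is a minimal MATLAB skeleton of the fusion loop with the offline scheduler; readUltrasound, readVision and kfEstimate are illustrative stand-ins for the reader algorithms and the Section 2.4 filter, not the exact implementation.

% Illustrative skeleton of the fusion center loop (offline scheduler).
N = 6;  K = 29;                          % camera period and number of samples
xhat = zeros(K, 2);
pos = [0 0];                             % unknown start maps to (0,0), step 4
for k = 1:K
    if mod(k, N) == 0
        meas = readVision();             % high-quality, delayed sensor (y2)
        thr  = 300;                      % 3 m threshold (cm) for the d = 3 s sensor
    else
        meas = readUltrasound();         % low-quality, fast sensor (y1)
        thr  = 150;                      % 1.5 m threshold for the 1 s sensor
    end
    if any(~isfinite(meas)) || any(abs(meas - pos) > thr)
        meas = pos;                      % outlier: keep the previous position
    end
    pos = meas;
    xhat(k, :) = kfEstimate(meas);       % step 5: Kalman filter of Section 2.4
end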

Another feature created for the fusion center was a Graphical User Interface that enables the user of the system to see the 2D spatial positioning of the robot in the defined area. The GUI can be seen in Figure 3.19. Active nodes are drawn with green circles and non-active nodes with red circles; a node is active if the ultrasound transmitter is positioned within its range. It is assumed that the camera can always see the robot if it is within the central area, as seen in Figure 3.13.

The GUI also shows the direction of movement of the mobile agent, so that, if there is a delay in the measurements, the user can predict what the next position of the agent will be.


Chapter 4

Experimental validation

This chapter presents the experimental validation of the approaches described. First, the behavior and characteristics of the ultrasound system and the vision based system are shown when used separately. Localization tests are made with the estimator in use, and comparisons are then made to the situation where it is not used. After this first step, the fusion center system is validated, with tracking situations put forward for the different schedulers used. Considerations on the achieved results are made in each Section.

4.1 Ultrasound System

Tests were made in order to evaluate the performance of both transmitter and receiver nodes. On each node, hardware and software tests were performed, and all the software implemented on each node worked well.

Three types of tests were made in order to estimate the accuracy of the system in three situations:

• Straight line - Test the accuracy of a node and the maximum distance between the Rx and Tx node, when placed in front of each other. The distance goes from 100 cm to 1500 cm in 100 cm increments. Fixed Tx node.

• Localization - Test the accuracy of the localization of the Tx node given a fixed cluster of 4 receiver nodes placed on the ceiling. Fixed Tx node.

Figure 4.1: Straight line average measurements: real distance (cm) vs. measured distance (cm). Linear fit: y = x − 48.

• Tracking - Track the movement of the Tx node within the area covered by 12 receiver nodes placed on the ceiling. Only the nodes in the middle section were used.

Only 12 ultrasound receivers were used, in order to cover just the middle corridor illustrated in Figure 3.19.

4.1.1 Straight Line

This test was performed in one of the corridors on the 6th floor of the Q building. There was a clean line of sight between the Rx and Tx, but there were objects on the sides of the corridor. As said before, the Tx and Rx were placed in front of each other. Figure 4.1 shows the average of 400 distance measurements for each real distance. The distance measured by the node depends linearly on the real distance, as expected, but there is a clear offset value that needs to be corrected.

Figure 4.2 presents the linear interpolation of the error, taking into consideration the distances from 100 cm to 1400 cm. These error values for each distance were a first approach to obtaining the offset values discussed in Section 3.2.1. The next sub-section explains why these values do not fit that purpose. A sketch of such a fit is shown below.
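A minimal MATLAB sketch of the calibration fit, assuming the averaged readings are stored in measDist (one value per tested distance; the name is illustrative):

% Illustrative sketch of the calibration fit of Figures 4.1 and 4.2.
realDist = 100:100:1400;                 % tested distances (cm)
c = polyfit(realDist, measDist, 1);      % measDist: 1x14 averaged readings
% c(1) should be close to 1 and c(2) close to -48, so the offset to add
% back to the raw readings is about 48 cm.
offset = -c(2);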

Figure 4.2: Straight line average error - linear interpolation with 14 distance points: error (cm) vs. real distance (cm). Linear fit: y = 0.0078x − 48.

This test was also used to verify the maximum distance between the Rx and Tx node: beyond 1200 cm the deviation of the values is very high, and so the accuracy is low.

4.1.2 Localization

In order to check the accuracy of the ultrasound system, an overall test of the localization algorithm was made. The Tx node was placed in various places within the area defined by the 12 Rx nodes, and for each placement 100 position values were measured and analysed. First, the ultrasound system was tested without any outlier filtering, estimation or offset calibration applied to the distance measured by each node, and it was seen that the performance of the system would require those methods. As discussed for the straight line test, the offset value has to be taken into account, otherwise the error will be quite high. To confirm whether the offset value calculated in the previous Section was right, the Tx node was placed in four different positions and values were taken from 4 different receiver nodes. The errors are different on each node, but the difference is not higher than 10 cm, as shown in Figure 4.3. The explanations for this fact are:

• Communication errors - variable delays on the message transmission from the Tx node.

• Interferences on the ultrasound signal due to objects in the surroundings.

• Non-perfect circuits - even though the circuits were implemented on Printed Circuit Boards (PCBs), there are always small differences between circuits when they are built. There are also interferences, resistance and capacitance errors, etc. in the circuit that can affect the comparator triggering.

• Interruption errors - There could be time differences in the triggering of interrupts on each mote due to circuit inequality (Tmote Sky).

The average of the errors of each node was chosen as the offset value to add to the measurement made at each time instant. This value can also be calculated by linear interpolation of the errors over several tests at each point. Each node thus has its own offset value, but they do not differ by more than 2−5 cm from one another. This can be seen as the calibration phase of the network. Better solutions to reach the optimal offset value were not required, since the final error with this approach was within the requirements, but this can be seen as an improvement for future work. The offset value calculated by the linear interpolation in Figure 4.2 was discarded, since the new localization tests showed a 10 cm difference compared to the previous ones. A minimal sketch of this calibration step is shown below.
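A minimal sketch of the per-node calibration, assuming err(i,n) holds the signed error of node n in test position i (an illustrative layout, not the exact implementation):

% Illustrative sketch of the per-node offset calibration.
offset = -mean(err, 1);                  % one offset per node, added to readings
spread = max(offset) - min(offset);      % expected to stay within the 2-5 cm band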

Having set up the offset value and obtained more accurate distance measurements from each node, one can now focus on the position: with the distance measurements one can apply the trilateration method to calculate the position (X, Y) from three distance measurements.

After calculating the position, the outlier rejection method is applied. This method, as explained in Section 3.2.1, is intended to reject all the wrong position measurements made by the ultrasound sensors. Recall that a measurement is considered wrong when the variation between the position at time k − 1 and at time k is higher than 1.5 meters.

Figures 4.4 and 4.5 present the performance of the system for a given location of the mobile agent at X = 50 cm and Y = 200 cm, when:

1. The outlier rejection is not used.

2. The outlier rejection is used.

Figure 4.3: The average error (cm) at four different positions for four given receiver nodes.

For simplicity, only the X coordinate is evaluated. As one can see in Figure 4.4, when the outlier rejection is not used the system cannot eliminate the sporadic errors that sometimes occur due to receiving bad information from the nodes. For the first measurement there was an error in the reading (measurement X = ∞), which without the outlier rejection influences the system performance, since it is not removed.

In Figure 4.5 the outlier rejection is applied. One can see that the erroneous measurement that occurred in the middle of the sequence has been removed. Also, for the first measurement, the value X = ∞ is now substituted by X = 0, which does not have a great influence on the overall performance of the system, since the mean error is 0.05 cm. One should notice that, since the minimum resolution of the system is 1 cm, the mean error value of 0.05 cm cannot be taken into account.

As discussed in Section 3.2.1, after the outlier rejection method is applied the estimator is used in order to improve the accuracy of the measurement.

Figures 4.6 and 4.7 present the cases where the estimators for model 1 and model 2, respectively, are applied to the system. The parameters of the estimator for both models are:

Figure 4.4: Ultrasound system performance when no outlier rejection method is applied. Error and position values for real position X = 50 (mean error: −Inf).

Figure 4.5: Ultrasound system performance when the outlier rejection method is applied. Error and position values for real position X = 50 (mean error: 0.046106).

• Estimator model 1 - Process noise W = 0.1.

Figure 4.6: Ultrasound system performance when the outlier rejection method and the model 1 estimator are applied. Error and position values for real position X = 50 (mean error: −1.4009).

• Estimator model 2 - Process noise W = 0.003 and W = 0.09.

The measurement errors were kept the same as in Section 2.5. As in Section 2.5, only the lowest process noise was tried, which corresponds to the highest delay d = 7. Since the performance was high for the first model, as one will see next, the cases where the process noise W is higher were not tested; the intention is to show the performance of the system in the worst case scenario, which occurs if the camera is used and the delay is d = 7 (the most constrained situation for the overall system). Also, the covariance matrix P(k), as described in Section 2.5, is not initialized as a matrix of zeros but as a matrix of ones, of dimension R^((d+1)×(d+1)) for model 1 and R^((2d+2)×(2d+2)) for model 2. Given the size of these P(k) matrices, a separate estimator implementation was chosen for each of the variables X and Y, so each coordinate is estimated in parallel (see the sketch below).
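A sketch of this initialization and the parallel per-coordinate filtering, where A, C, Q and R stand for the augmented model matrices of Section 2.4 (not reproduced here) and kalmanStep is a standard predict/update cycle; the code is illustrative, not the exact implementation.

% Illustrative sketch: filter initialization and parallel X/Y estimation.
d = 7;
n = d + 1;                               % state size for model 1 (2d+2 for model 2)
xX = zeros(n, 1);  PX = ones(n);         % covariance initialized to all ones
xY = zeros(n, 1);  PY = ones(n);
% one filter per coordinate, run in parallel at each step k:
[xX, PX] = kalmanStep(xX, PX, zX, A, C, Q, R);
[xY, PY] = kalmanStep(xY, PY, zY, A, C, Q, R);

function [x, P] = kalmanStep(x, P, z, A, C, Q, R)
% One predict/update cycle of the standard Kalman filter.
x = A * x;            P = A * P * A' + Q;        % time update
K = P * C' / (C * P * C' + R);                   % Kalman gain
x = x + K * (z - C * x);                         % measurement update
P = (eye(size(P, 1)) - K * C) * P;
end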

One can see in Figure 4.6 that, due to the bad measurement at k = 1, the first estimated position is X = 0, Y = 0, and so the estimated X coordinate takes at least 10 measurements to get within a 10 cm error boundary of the real value. One should also notice that the sharp displacements in the X position are attenuated by the filter, which is a good characteristic for the overall performance.

Figure 4.7: Ultrasound system performance when the outlier rejection method and the model 2 estimator are applied. Error and position values for real position X = 50 (mean error: 6.0031).

One could expect better results from this filter if the bad measurement at k = 1 had not occurred. Even with the referred "settling" time, the average error is 1.4 cm.

In Figure 4.7 one can see the system performance when the estimator of model 2 is applied. In this case the performance decreases, and the result is influenced by the bad measurement at k = 1 in the same way as for the estimator of model 1: it now takes 30 steps to recover to within the 10 cm error boundary of the real value. One can also notice that the sharp displacements in the position are attenuated in the same way as with the estimator of model 1. This happens because the filter model does not expect quick variations in position, but in velocity: the estimator follows the measurement quickly from k = 1 to k = 2 and overestimates it, because it expects the agent to continue moving in that direction, and it then takes a long time to get back to the actual position. This was already expected, as shown in the example in Section 2.2. Even though these errors occur, the average error is 6 cm, which is lower than the 10 cm error discussed before. The case where the process noise is W = 0.09 was also tested: since the model then expects a higher variation in velocity from one step to the other, the "settling" time is lower, and the values are inside the 10 cm error boundary more quickly, at k = 10.

One can also notice that the maximum position error when only the outlier rejection method is used is 23 cm. When the estimation is also used, the maximum error for model 1 is 8 cm (not counting the values until step k = 10, since those are based on the bad measurement at k = 1). For the model 2 estimator the maximum error is 6 cm when W = 0.003, also not counting the errors for k < 30, for the same reasons pointed out before. For the case where the process noise is W = 0.09 using model 2, the error was 7 cm. One can say that the ultrasound system has an error of less than 8 cm. More tests were made in different positions, and it was seen that for the localization problem the ultrasound solution gives accurate results.

In summary, with the results shown, in order to have the best possible estimate of the position when performing localization with the ultrasound system, one should use the following techniques:

1. Add the offset value characteristic of each particular node.

2. Apply the outlier rejection method, choosing the threshold according to the problem at hand. In our case, the threshold was 150 cm when using only the ultrasound sensor.

3. In our case it was also better to use the estimator of model 2 after the outlier rejection method, since it gives the lowest maximum error, assuming that in further developments the values for k < 30 are not considered.

It must also be noted that, had the bad measurement not occurred at k = 1, the performance of the estimator would be better and would give much better results than the ones presented here. A solution for this and similar cases is, when the system is turned on, not to take into consideration the values given before step 30, which means 30 s of calibration time. After this, the estimators for both models can be used, helping to give better performance to the system.

One should also notice that the estimator shown here is the one without any prediction adjustment, since for localization purposes and the low process noises used the difference was not significant enough to be shown. Using the prediction adjustment, the error for model 1 was the same; for model 2 the error was slightly lower, but it was chosen not to show it here.


4.2 Vision Based System

This Section analyses the tests made on the vision based system when performing localization.

First, one should recall the characteristics of this system, discussed in Section 3.2.2.

It is known that the system takes a new image every 0.75 s using the OpenCV C++ function, and normally takes less than 1 s to analyse the acquired image. One should also account for a minimum of 0.5 s for position transmission and availability, so the processing time delay is always less than 3 s, which was one of the delay values d taken into account during the scheduling evaluation presented in Section 2.5. The delay is in fact 1.5 s in the best case, but a delay of d = 3 s was established in order to cope with eventually higher transmission times.

One should also mention that during the tests problems sometimes occurred with the acquired image quality, due to the short time between pictures, even though it is 0.75 s. During the tests, the failure probability was observed to be only 5% (5 failures in 100 measurements).

Another error that sometimes occurs is that the mobile agent goes outside the defined calibration rectangle, so the detected image contains a partial circle, which is not always considered to be the mobile agent due to the constraints applied in the Robot image coordinates step, as shown in Figure 3.14 in Section 3.2.2. The outlier rejection method shown in the previous Section tackles these errors, but the threshold was now set to 3 m, as explained in Section 3.2.3.

4.2.1 Localization

Here the performance of the vision based system when acquiring data for a fixed mobile agent is discussed. The same path described in the previous Section is shown, but with a different simulation. Both models were tested for this case.

Figure 4.8 shows the case where localization using the vision based system is performed with the model 1 estimator. The parameters were set as:

• Process noise W = 0.2.

• Delay d = 3.

Figure 4.8: Localization performance of the vision based system for a steady mobile agent at (50,200). Outlier rejection method and model 1 estimation performed.

• Initial covariance matrix of dimension R^((d+1)×(d+1)) with ones in all positions.

It is noticeable that the performance of the vision based system in detecting the mobile agent is very high, with errors of less than 1 cm on each coordinate. Almost all the position values fall inside a 2 cm² frame for both methods used, and the number of steps for this to occur is lower than 10. Only two situations require particular attention. There is one jump in a reading made by the web-camera, in which the Y coordinate is displaced 3 cm upwards. This case is not tackled by the outlier rejection method, since the displacement threshold was set to 1 m; but, as one can notice, this error is perfectly filtered by the estimator. When using the vision based system there is another important estimator characteristic, which is the effect of the delay. Since the delay is set to 3 s, the web-camera measurements are only taken into account when the time step is higher than this value; due to this, the position estimate takes almost 10 steps to reach the 2 cm² frame discussed before. It is also shown that the position values when using the outlier rejection method and the estimator have lower variation compared to when the estimator is not used, as expected.

Figure 4.9: Localization performance of the vision based system for a steady mobile agent at (50,200). Outlier rejection method and model 2 estimation performed for W = 0.003 (mean error: 7.418).

Figure 4.9 shows the system performance when using the model 2 estimator. The parameters chosen for the model were:

• Process noise W = 0.003

• Delay d = 3

• Initial covariance matrix of dimension R^((2d+2)×(2d+2)) with ones in all positions.

It was chosen to show not the position overview but the behavior of the X coordinate over the measurements. As can be seen, the performance of the system decreases significantly due to the delay: almost 20 steps are now needed to reach the 2 cm² frame, double the previous case. This is explained by the fact that the estimator does not expect such high variations in the movement direction. Even so, the mean error is only 7 cm, although compared to the first model the performance is clearly worse. The case where the process noise is W = 0.09 was also studied, and its performance can be seen in Figure 4.10.

Figure 4.10: Localization performance of the vision based system for a steady mobile agent at (50,200). Outlier rejection method and model 2 estimation performed for W = 0.09 (mean error: 1.7919).

One can see that for higher process noises the settling time decreases and, as a consequence, so does the mean error. This is thus a clear improvement when using model 2.

For the localization problem one can say that the model 1 estimator performs better than the model 2 estimator. This is because the second model performs worse under such high variations in movement direction, caused by the delay involved, which is the case presented here: the readings are zero until k = 3 and then jump to the first reading performed by the web-camera.

One should also notice that the estimator shown here is the one without any prediction adjustment, since for localization purposes and the low process noises used the difference was not significant enough to be shown. Using the prediction adjustment, the error for model 1 was the same; for model 2 the error was slightly lower, but it was chosen not to show it here.


4.3 Fusion center

As discussed in previous Sections, according to the sensors' variances, the measurement noises used for validating the proposed models were set to σ = 12 for the ultrasound-based system and Σ = 1 for the vision-based system.

For the experimental validation, a new performance criterion based on the position error is put forward, and analyses are made comparing the results achieved using the estimator with those using just raw measurements from the sensors. Both scheduling approaches and both models are validated.

The tracking test was based on a situation where the mobile agent moves in only one coordinate (a straight line with fixed Y and varying X). The traveled distance was approximately 4.5 m, performed in 12 s. The trajectory was intended to be performed at constant speed, but using an RC car as the mobile agent has the drawback that one cannot know its current velocity, maintain an exactly constant speed, or follow an exact straight path. Since the values given by the camera are very accurate and taken at precise times, one can trust that the velocity is approximately given by its position values as v = (xk − xk−1)/∆t. During the mobile agent start-up (first 2 s) the velocity was approximately constant and equal to 11 cm/s, followed by an approximately constant velocity of 50 cm/s. When the car reaches the final position it decelerates to a constant velocity of 11 cm/s, as in the start-up, but now over 3 s.

According to the velocities involved in the test, the process noises used for model 1 and model 2 were set to W = 0.5 and W = 0.05, respectively. As explained in Section 2.2, the process noise in model 1 is the velocity of the mobile agent, so it is expected to move 0.5 m per step in a random direction. For model 2, since high variations in velocity are not expected during the test, the acceleration of the mobile agent was set to 0.05 m/s²: one expects low variations in the direction of movement, but random variations, according to the process noise, on the acceleration.

One should also notice that, after the system was turned on, a 17 s waiting time was allowed before starting the mobile agent movement, in order to remove the initial estimation overshoot, since the starting position is unknown.

Tests were made with model 1 and model 2 for different periodic switching cycles N and for the covariance-based optimal switching sequence. 29 samples were taken, which is equivalent to 29 s, since the sampling period was set to 1 s. For each model, the analysis was done over the position mean quadratic error, taking into account the high-quality sensor communication cost for all given M, where M is the high-quality sensor usage.


λ      Model 1 | W = 1.8             Model 2 | W = 0.05
0      Offline: N* = 9, 27           Offline: N* = 17, ∞
2000   Offline: N* = 14, 27, 28      Offline: N* = 17, ∞

Table 4.1: Optimal periodic high-quality sensor switching N* of VE for models 1 and 2, considering different communication costs λ.

The performance criterion is defined as

V_E(M) = \frac{M}{k}\lambda + \frac{1}{k+1}\sum_{i=0}^{k} |x_{real} - x_{measured}|^2    (4.1)

This new performance criterion is an approximation of (2.28), since the experimental estimation accuracy is analysed instead of the theoretical one. As one can expect, if the communication cost λ → ∞, the optimal high-quality sensor switching will be N = ∞, since it is too expensive to use the vision-based system. Also, if λ = 0, only the mean quadratic error is taken into account, since no cost is attached to the vision-based system usage. An illustrative evaluation of this criterion is sketched below.
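As an illustration, evaluating (4.1) for one candidate offline period N could look as follows in MATLAB, where xReal and xMeas hold the logged true and estimated X positions (illustrative names, not the exact implementation):

% Illustrative sketch: evaluate the criterion (4.1) for one candidate N.
k = 29;  lambda = 0;
M = floor(k / N);                        % high-quality sensor uses for period N
VE = (M / k) * lambda + mean((xReal - xMeas).^2);
% sweeping N and picking the smallest VE yields the optimal N* of Table 4.1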

First, the estimation performance for the adopted process noises, W = 0.5 for model 1 and W = 0.05 for model 2, was tested. A comparison based on the performance criterion VE(M) was made between the estimated position and the position given by raw measurements with just outlier-rejection filtering. The communication cost was set to λ = 0, since only the position accuracy was to be evaluated. It was seen that for model 1, with process noise W = 0.5, the estimator did not improve the position accuracy compared to the raw measurements; the process noise was therefore tuned to W = 1.8. The results are shown in Figure 4.11, comparing both models.

Figure 4.11 also shows the covariance-based scheduler errors using model 1 and model 2. The optimal scheduling sequence obtained with this approach was, for model 1 with W = 1.8, that the high-quality sensor should never be used (M = 0); for model 2, using maxD = 4, a sequence composed of N = 2, N = 3, N = 5 and then a periodic N = 2 schedule should be used, for which the high-quality sensor usage is M = 12. It is seen that this approach does not give better results than the offline scheduler. The optimal high-quality sensor periodic switching N using both models and different communication costs λ is shown in Table 4.1.

Figure 4.12 shows the mobile agent tracking example. Both estimated and raw position measurements are compared to the real mobile agent position.


λ      Model 1 | W = 1.8                          Model 2 | W = 0.05
0      Offline: N* = ∞                            Offline: N* = 2
0.3    Covariance-based: sequence of N = 2,       Covariance-based: N* = ∞
       N = 3, N = 5 and a periodic N = 2

Table 4.2: Optimal periodic high-quality sensor switching N* of VT for models 1 and 2, considering different communication costs λ.

The optimal high-quality sensor switching considered is N = 6. This value was chosen as a good example to demonstrate the tracking improvement obtained when using the estimator. One can see that model 2 performs better than model 1, as expected from Section 2.2, since it fits well the type of path performed. It is noticeable that model 2 requires a set-up period, due to its long settling period with a fixed mobile agent position, which is not required by model 1.

The theoretical validation of the system was also made, with the mobile agent and sensor model parameters previously referred to, based on the performance criterion VT from (2.28); here only the estimation accuracy is taken into account. Table 4.2 shows the optimal scheduling sequence and approach for different communication costs λ and process noises W using model 1 and model 2. Note that the communication cost λ is not the same, since the estimation accuracy criterion p_average does not have the same order of magnitude as the quadratic mean error. Analysing and comparing the theoretical and experimental results for the same process noises, one can see that, theoretically, the covariance-based scheduler has higher cost performance, which makes it better than the offline scheduler for both models; this does not happen in the experimental validation, where the offline scheduler is best. When analysing just the accuracy for model 2, which gave the better experimental results, the estimation accuracy is highest for an offline N = 2 schedule, where the web-camera is used every two steps, while the position accuracy is highest when the web-camera is used fewer times (N = 17) or not at all (N = ∞). For model 1, the theoretical result also does not match the experimental one, but it is closer than the one achieved for model 2. It was also seen that model 1 gave lower p_average values for the optimal scheduling sequence than model 2, making the estimator less error-prone when using it.

In summary, the comparison between experimental and theoretical validation shows that the position accuracy does not match the estimation accuracy. One should also note that in terms of estimation accuracy model 1 gave the better results, but when experimentally validated model 2 is better at tracking the mobile agent.

(a) Model 1 with process noise W = 1.8.
(b) Model 2 with process noise W = 0.05.

Figure 4.11: Estimated and raw position quadratic errors for the offline scheduler with N and the covariance-based scheduler, for model 1 and model 2.

(a) Model 1 with process noise W = 1.8.
(b) Model 2 with process noise W = 0.05.

Figure 4.12: Tested mobile agent trajectory for optimal high-quality sensor switching N = 6. Real position, estimated position and position given by raw sensor measurements over the X coordinate when performing 29 measurements.


Chapter 5

Conclusions and Future Work

During this work it was seen how to design, develop and implement a localization and tracking system.

We showed how to develop a wireless sensor network testbed with a systems engineering approach.

All the concepts presented in Chapters 2 and 3 were validated, and it was shown that they can be applied in a real system to perform localization and tracking in a wireless sensor network environment.

We also saw the development of two localization systems: one based on an ultrasound sensor connected to a wireless sensor network, and a vision based one, with a web-camera connected wirelessly to a processing unit. The maximum error when using the ultrasound system to perform localization was 8 cm, using all the filtering approaches. One should also consider that this may sometimes not hold, due to interference in the environment and possible network congestion. As was seen, the errors are not as low when performing tracking; the robustness of the ultrasound system during tracking should be improved, since it was seen that with the high number of motes in use the measurement quality decreases.

It should be noted that no considerations were made about packet losses and delays in the wireless sensor network while using the ultrasound system. As further work, these characteristics should be carefully described and analysed, in order to characterize bad performances of the ultrasound system and relate them to these parameters.

The vision based system was seen to give measurements with high accuracy and almost no error. As future work, one should try to develop a faster image processing algorithm, based only on OpenCV, together with lower communication delays, in order to reduce the total delay from 3 s to 1 s. This would remove the interest in the problem formulated in this work, but it would give an accurate networked solution to be used in further work developed at KTH.


When using the vision based system, the delay turned out to be a difficult problem to solve. One knows from the analysis made that, for a given model, the process noise W should not be higher than a given value for a given delay. Trying to cope with this delay showed that it is sometimes better to use the ultrasound sensor more often, even if the estimation accuracy is higher when the camera is used more often. This also has to do with the fact that the ultrasound sensor measurements are not as inaccurate as they are modeled to be.

As a further step, one should also think about making the vision based system communicate not through the 802.11g wireless protocol but through the same wireless sensor network. This can be done using a wireless node connected to the vision based processing unit and by enabling the base station node to also receive those measurements. One should also try to test the system using the wireless camera nodes described in [39]. This upgrade of the network should pose interesting problems, since the packet losses and delays in the network will increase.

Two models were developed to describe the dynamics of the mobile agent. Of the two models proposed, model 2 showed better performance in terms of tracking and localization accuracy, with lower position errors, even though model 1 had the best estimation performance when theoretically validated under the same conditions. One can say that, for the types of movements and velocities analysed, model 2 describes the real dynamics of the mobile agent with good accuracy. Even though tracking with low errors was achieved for different types of trajectories, model 2 will not fit as well as it does for straight line movements. A dynamical model that truly captures the real mobile agent dynamics, which are non-linear rather than linear, should be developed. One can also foresee that for the new system the Kalman filter will not guarantee that the estimated solution is optimal, since this only holds for the linear case.

Another suggestion for further work is to develop a model that predicts the movement of the robot and helps the estimator to give more accurate results.

A difficulty posed in this work was how to cope with the velocity and acceleration requirements imposed by the estimator for both proposed models. In future work one should have a robot capable of measuring, displaying and performing the movement under the estimator requirements.

In terms of scheduling, it was seen that an offline solution gave more accurate tracking estimates than the covariance-based one.


One should not forget that, for the delay analysed, the covariance-based scheduler has higher performance when a trade-off between estimation quality and communication cost is made; but for the problem presented here, when experimentally validated, the covariance-based scheduler had the worse performance.

Motivated by the non-linear characteristics of the vision-based system, a purely online scheduling solution should be developed, where the model parameters change depending on the location of the agent in the room. Temperature, humidity, light, interference, etc. can influence the model parameters as well.

An interesting application for this system would be to use it with more ultrasound sensors and more cameras covering more areas. An interesting problem would be to cooperatively schedule two or more cameras covering two or more different areas while also scheduling the ultrasound sensors. With this, and with an estimator based on movement prediction, one could turn on cameras based on the knowledge that the agent will be in the area covered by a given camera in some t seconds. This poses interesting power management, estimation and modeling design problems.

The case where more than one mobile agent is used should also be considered. This will put forward interesting problems to be solved for both the ultrasound system and the vision based system.


Chapter 6

Appendix

Figure 6.1: Receiver Circuit

Figure 6.2: Transmitter Circuit

Figure 6.3: Wireless Sensor Network Testbeds - a survey

Figure 6.4: System Breakdown Structure


Figure 6.5: Floor plan - KTH Q 6th (SSS)


Bibliography

[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "Wireless sensor networks: a survey", Computer Networks, vol. 38, no. 4, 2002.

[2] D. Culler, D. Estrin, and M. Srivastava, "Overview of wireless sensor networks", IEEE Computer, Aug 2004, special issue on sensor networks.

[3] P. Antsaklis and J. Baillieul, "Special issue on technology of networked control systems", Proceedings of the IEEE, vol. 95, no. 1, 2007.

[4] M. Athans, "On the determination of optimal costly measurement strategies for linear stochastic systems", Automatica, vol. 8, pp. 397-412, 1972.

[5] W. Wu and A. Arapostathis, "Optimal control of Markovian systems with observations cost: models and LQG controls", in Proceedings of the American Control Conference, Portland, 2005, pp. 294-299.

[6] Bernardo Maciel and Jose Araujo, Ultrasound Based Localization: Wireless Sensor Network Testbed. KTH, Stockholm, January 2008.

[7] Bernardo Maciel and Jose Araujo, Ultrasound Based Localization - part II: Wireless Sensor Network Testbed. KTH, Stockholm, February 2008.

[8] Henrik Sandberg, Maben Rabi, Mikael Skoglund and Karl H. Johansson, "Estimation over heterogeneous sensor networks", to appear at the 47th IEEE CDC, 2008.

[9] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, second edition. Prentice Hall.

[10] Wikipedia - Speed of Sound, http://en.wikipedia.org/wiki/Speed_of_sound

[11] Vikram Krishnamurthy, Stochastic Estimation and Control - Lecture Notes on Optimal Filtering. Stockholm, Sweden, Spring 2008.

[12] B.D.O. Anderson and J.B. Moore, Optimal Filtering. Prentice Hall, 1979.

[13] Tmote Sky Datasheet - Moteiv Corporation.

[14] Johansson, Karl H., "Wireless Networked Systems Embedded in the Physical Environment: The future of Internet", EQ2450 Seminars in Wireless Systems, Stockholm, Sweden, 2008.

[15] Johansson, Karl H., "Networked Control Systems", Artist2 Workshop, Stockholm, Sweden, 2008.

[16] Johansson, Karl H., "Control over wireless networks", CTS-HYCON Workshop, Paris, France, 2008.

[17] Desai, Uday, Jain, B.N., Merchant, S.N., "Wireless Sensor Networks: where do we go?", April 20, 2007.

[18] Picco, Gian Pietro, "Wireless Sensor Networks: An introduction", 2005.

[19] Andreas Willig, "Recent and Emerging Topics in Wireless Industrial Communications: A Selection", IEEE-ToII, May 2008, accepted for publication.

[20] A. Willig, "Polling-based MAC Protocols for Improving Realtime Performance in a Wireless PROFIBUS", IEEE Transactions on Industrial Electronics, 2003.

[21] Andreas Willig and Elisabeth Uhlemann, "PRIOREL-COMB: A Protocol Framework Supporting Relaying and Packet Combining for Wireless Industrial Networking", in Proc. 7th IEEE International Workshop on Factory Communication Systems (WFCS), Dresden, Germany, May 2008.

[22] A. Seuret and K. H. Johansson, "Networked control under time-synchronization errors", IEEE CDC, Cancun, Mexico, 2008. Submitted.

[23] H. Sandberg, M. Rabi, M. Skoglund, and Karl H. Johansson, "Estimation over heterogeneous sensor networks", IEEE CDC, Cancun, Mexico, 2008. Submitted.

[24] P. G. Park, C. Fischione, A. Bonivento, K. H. Johansson, A. Sangiovanni-Vincentelli, "Breath: a self-adapting protocol for wireless sensor networks in control and automation", IEEE SECON, San Francisco, CA, USA, 2008. To appear.

[25] L. Shi, K. H. Johansson, and R. M. Murray, "Kalman filtering with uncertain process and measurement noise covariances with application to state estimation in sensor networks", IEEE Conference on Control Applications, Singapore, 2007.

[26] 1220-1998 IEEE Standard for Application and Management of the Systems Engineering Process, 1998.

[27] Matthias Dyer, Jan Beutel, Thomas Kalt, Patrice Oehen, Lothar Thiele, Kevin Martin, and Philipp Blum, "Deployment Support Network", in EWSN, volume 4373 of Lecture Notes in Computer Science, pages 195-211. Springer, 2007.

[28] Simeon Furrer, Wolfgang Schott, Hong Linh Truong, and Beat Weiss, "The IBM wireless sensor networking testbed", in TRIDENTCOM. IEEE, 2006.

[29] Matt Welsh, "TinyOS Technology Exchange IV" - Testbeds Working Group Presentation, April 2007.

[30] Emre Ertin, Anish Arora, Rajiv Ramnath, Vinayak Naik, Sandip Bapat, Vinod Kulathumani, Mukundan Sridharan, Hongwei Zhang, Hui Cao, and Mikhail Nesterenko, "Kansei: a testbed for sensing at scale", in IPSN '06: Proceedings of the 5th international conference on Information processing in sensor networks, pages 399-406, New York, NY, USA, 2006. ACM.

[31] David Johnson, Tim Stack, Russ Fish, Daniel Montrallo Flickinger, Leigh Stoller, Robert Ricci, and Jay Lepreau, "Mobile Emulab: A Robotic Wireless and Sensor Network Testbed", in INFOCOM. IEEE, 2006.

[32] Geoffrey Werner-Allen, Patrick Swieskowski, and Matt Welsh, "MoteLab: a wireless sensor network testbed", in IPSN '05: Proceedings of the 4th international symposium on Information processing in sensor networks, page 68, Piscataway, NJ, USA, 2005. IEEE Press.

[33] Tasos Dimitriou, John Kolokouris, and Nikos Zarokostas, "Sensenet: a wireless sensor network testbed", in MSWiM '07: Proceedings of the 10th ACM Symposium on Modeling, analysis, and simulation of wireless and mobile systems, pages 143-150, New York, NY, USA, 2007. ACM.

[34] B. Chun, P. Buonadonna, A. AuYoung, C. Ng, D. Parkes, J. Shneidman, A. Snoeren, and A. Vahdat, Mirage: A Microeconomic Resource Allocation System for SensorNet Testbeds, 2005.

[35] Vlado Handziski, Andreas Kopke, Andreas Willig, and Adam Wolisz, "TWIST: a scalable and reconfigurable testbed for wireless indoor experiments with sensor networks", in REALMAN '06: Proceedings of the 2nd international workshop on Multi-hop ad hoc networks: from theory to reality, pages 63-70, New York, NY, USA, 2006. ACM.

[36] Jose Pinto, Alexandre Sousa, Paulo Lebres, Joao Sousa and Gil Manuel Goncalves, "Mon-Sense: Application for Deployment, Monitoring and Control of Wireless Sensor Networks", in ACM RealWSN 2006, 2006.

[37] BTnodes - A Distributed Environment for Prototyping Ad Hoc Networks. http://www.btnode.ethz.ch/

[38] Deluge. http://www.cs.berkeley.edu/~jwhui/research/deluge/

[39] Phoebus Wei-Chih Chen, Parvez Ahammad, Colby Boyer, Shih-I Huang, Leon Lin, Edgar J. Lobaton, Marci Lenore Meingast, Songhwai Oh, Simon Wang, Posu Yan, Allen Yang, Chuohao Yeo, Lung-Chung Chang, Doug Tygar and S. Shankar Sastry, "CITRIC: A Low-Bandwidth Wireless Camera Network Platform", EECS Department, University of California, Berkeley, Technical Report No. UCB/EECS-2008-50, May 13, 2008.

[40] MSP430 Pins, WWW. URL: http://www.tinyos.net/tinyos-2.x/doc/nesdoc/telosb/index/tos.chips.msp430.pins.html

[41] Octopus, WWW. URL: http://csserver.ucd.ie/~rjurdak/Octopus.htm

[42] OPENCV - http://opencvlibrary.sourceforge.net/

[43] Jerker Nordth, "Ultrasound-based Navigation for Mobile Robots", MSc Thesis, Lund Institute of Technology, February 2007.

[44] M. Egerstedt, X. Hu and A. Stotsky, "Control of a Car-Like Robot Using a Virtual Vehicle Approach", in Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, Florida, 1998.

[45] Nissanka Bodhi Priyantha, "The Cricket Indoor Location System", PhD Thesis, MIT, June 2003.

[46] I. Getting, "The Global Positioning System", IEEE Spectrum, 30(12):36-47, December 1993.

[47] Merrill I. Skolnik, Introduction to Radar Systems. McGraw-Hill, New York, NY, third edition, December 2002.

[48] Alessandro Marcassa, "Localization and tracking using a wireless sensor network", Laurea Thesis, Padova University, April 22, 2008.

[49] C. Fischione, L. Pomante, F. Santucci, C. Rinaldi, S. Tennina, "Mining Ventilation Control: Wireless Sensing, Communication Architecture and Advanced Services", to appear in Proc. of IEEE Conference on Automation Science and Engineering (CASE 08), Washington, DC, USA, August 2008.

[50] M. Bertinato, G. Ortolan, F. Maran, R. Marcon, A. Marcassa, F. Zanella, P. Zambotto, L. Schenato, A. Cenedese, "RF Localization and tracking of mobile nodes in Wireless Sensors Networks: Architectures, Algorithms and Experiments", University of Padua, Italy, 2007.

[51] Triangulation, Wikipedia, http://en.wikipedia.org/wiki/Triangulation

[52] Trilateration, Wikipedia, http://en.wikipedia.org/wiki/Trilateration

[53] Multilateration, Wikipedia, http://en.wikipedia.org/wiki/Multilateration