
DE – 28 (MTS)

PROJECT REPORT

IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

Submitted to the Department of Mechatronics Engineering in partial fulfillment of the requirements for the degree of

Bachelor of Engineering

in

Mechatronics

2010

Sponsoring DS:
Brig. Dr. Javaid Iqbal

Submitted By:
Talha Manzoor
Zeeshan Haider Raja
Ammar Zahid
Ali Nasir Sattar


ACKNOWLEDGEMENTS

First of all, thanks to the Almighty for making us capable of doing the work presented herein. We are grateful to Dr. Javaid Iqbal for allowing us to undertake this line of research and for his experienced supervision throughout the course of this project. We would also like to thank Dr. Kunwar Faraz for his critical feedback and expert analysis of our work. Thanks also to our seniors, Zohaib Riaz, Affan Pervaiz and Muhammad Ahmer, for their kind guidance and encouragement. We would also like to acknowledge the help of Jawad Ahmed in troubleshooting the problems which occurred in our code. Finally, we would like to thank Mr. Qasim for his wonderful cooperation with us in the robotics lab.


ABSTRACT

This report describes our technique for simulation and implementation of Simultaneous Localization and Mapping (SLAM) using Particle Filters. It is a common misconception that Particle Filters for SLAM, when keeping the map of the environment as part of the hidden state, do not yield real-time results. The technique presented in this report not only considers the map as part of the hidden state but also yields effective and accurate real-time results. SLAM using our technique has been simulated as well as implemented in real time, and the results are presented herein. Our algorithms are not only easy to understand but can also be implemented by one who does not have extensive theoretical knowledge of the SLAM problem.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF SYMBOLS

CHAPTER 1 - INTRODUCTION

CHAPTER 2 - BACKGROUND INFORMATION
  2.1 Introduction
  2.2 Applications
  2.3 History

CHAPTER 3 - STATE ESTIMATION WITH PARTICLE FILTERING
  3.1 The State
    3.1.1 Completeness and Continuity of State
  3.2 Interaction with the Environment
    3.2.1 Sensor Measurements
    3.2.2 Control Actions
    3.2.3 Measurement Data
    3.2.4 Control Data
  3.3 State Estimation Techniques
    3.3.1 Belief Distributions
    3.3.2 The Bayes Filter
    3.3.3 The Particle Filter
      3.3.3.1 Basic Algorithm
      3.3.3.2 Importance Resampling
      3.3.3.3 Limitations and Remedies
        3.3.3.3.1 Inclusion of Random Samples
        3.3.3.3.2 Use of Unnormalized Weights

CHAPTER 4 - MOTION MODEL
  4.1 Kinematic Configuration
  4.2 Probabilistic Kinematics
  4.3 Odometry Motion Model

CHAPTER 5 - MEASUREMENT MODEL
  5.1 The Range Finder Sensor Model

CHAPTER 6 - OCCUPANCY GRID MAP BUILDING
  6.1 Algorithm

CHAPTER 7 - OBSTACLE AVOIDANCE AND NAVIGATION
  7.1 Random Walk
  7.2 Seek Open Algorithm
  7.3 Frontier Based Exploration
  7.4 Wall Follow Technique
  7.5 Our Techniques

CHAPTER 8 - PLAYER PROJECT
  8.1 Overview
    8.1.1 Player
    8.1.2 Stage
  8.2 Player Robotic Simulation Platform
    8.2.1 Interfaces
    8.2.2 Software - Hardware Interface
    8.2.3 Abstract Driver
    8.2.4 TCP Client Program
    8.2.5 The Player Server
    8.2.6 The C++ Client Library
    8.2.7 Relevant Drivers
  8.3 The Stage Robotic Simulator
    8.3.1 Stage Models
    8.3.2 Blobfinder Model
    8.3.3 Laser Model
    8.3.4 Position Model

CHAPTER 9 - PEOPLEBOT: THE ROBOT
  9.1 Overview
  9.2 Technical Specifications
    9.2.1 Sensors
    9.2.2 Computing Power
  9.3 Sick LMS 200
  9.4 Odometry
  9.5 Modelling of PeopleBot
    9.5.1 Motion Model
    9.5.2 Sensor Model

CHAPTER 10 - WIRELESS COMMUNICATION

CHAPTER 11 - RESULTS
  11.1 MATLAB
  11.2 Player/Stage Simulation
  11.3 Implementation on PeopleBot

CHAPTER 12 - FUTURE WORK AND CONCLUSIONS

BIBLIOGRAPHY

ANNEXURE A
  Player Code
ANNEXURE B
  MATLAB Codes
ANNEXURE C
  Script for Wireless Settings of PeopleBot


LIST OF FIGURES

Figure 1. Illustration of the importance resampling step. The magnitudes of the

blue lines show the number of samples or particles in the current particle set.

Figure 2. Robot pose, shown in a global coordinate system

Figure 3. Odometry model: The robot motion in the time interval (t - 1; t] is

approximated by a rotation δrot1, followed by a translation δtrans and a second

rotation δrot2. The turns and translation are noisy.

Figure 4. Typical ultrasound scan of a robot in its environment

Figure 5. Obstacle avoidance using random walk

Figure 6. (a) Evidence Grid (b) Frontier Edge Segments (c) Frontier Regions

Figure 7. A wall following robot

Figure 8. PeopleBot

Figure 9. Sick LMS 200

Figure 10. Representation of motion from one point to the other

Figure 11. This figure shows the implementation of the odometry based motion

model. The figure on the left shows the particles generated by incorporating high

translational noise and the figure on the right shows the particles generated as a

result of incorporating a high rotational noise

Figure 12. This figure illustrates the sensor model while incorporating only 2

beams (shown in blue) for clarity. If the map and pose of the robot are fixed then

for a scan input of [4,3] the weight returned is 0.0538, for a scan of [6,6] the weight

is 0.0039 and for a scan input of [2,3], which is the correct scan, the weight

returned is 0.2129. Thus, our measurement model works.

Figure 13. This figure depicts the working of our map-making algorithm. The green pixels represent the unexplored areas, the black pixels represent the occupied area and the white pixels represent the free space. The scan input for the situation shown is [4,4,4,4,4].

Figure 14. This figure shows the complete map of the world obtained after

performing SLAM in Player/Stage without any included noise. A perfect SLAM

algorithm would return an exact copy of this map.

Figure 15. This figure shows the complete map of the world generated by

including noise in the sensors and no filtering technique at all. The resulting map

clearly shows the integral errors of localisation and mapping. This map was also

used by us to compare our subsequent results.

Figure 16. This figure shows the map of the world as a result of particle filtering

with 75 particles. The resolution of the grid has been kept such that a single pixel in

the bitmap represents 5x5 cm. As evident, a huge amount of noise has successfully

been filtered which shows convergence of our algorithms.

Figure 17. Map of corridor generated using 75 particles. The grid resolution has

been kept at 5x5cm. The maximum range of the laser is 8m. The total length of the

area explored is approximately 62m. A simple wall following obstacle avoidance

code controls the movements of the robot because such a technique for obstacle

avoidance is suitable enough for navigation in corridors

Figure 18. Map of Electronics Lab at Mechatronics department at EME NUST.

This environment was used to check loop-closing of our SLAM technique. The grid

resolution is kept at 5x5 cm and 75 particles have been used.

Figure 19. Map of ground floor corridor of Centre for Advanced Mathematics and

Physics (CAMP) building at EME, NUST. The grid resolution is kept at 5x5 cm

and the total number of particles is 75.


LIST OF TABLES

Table 1. The general algorithm for Bayes filtering

Table 2. The particle filter algorithm, a variant of the Bayes filter based on

importance sampling

Table 3. Algorithm for sampling from p(xt | ut , xt-1) based on odometry information. Here the pose at time t-1 is represented by xt-1 = (x, y, θ). The control is a differentiable set of two pose estimates obtained by the robot's odometer, ut = (x*t-1 , x*t), with x*t-1 = (x*, y*, θ*) and x*t = (x**, y**, θ**).

Table 4. Algorithm for computing the weight of a range finder scan using Euclidean distance to the nearest neighbor. The function prob(dist², σ²hit) computes the probability of the distance under a zero-centered Gaussian distribution with variance σ²hit.

Table 5. A simple inverse measurement model for robots equipped with range

finders. Here α is the thickness of obstacles, and β the width of a sensor beam. The

values locc and lfree in lines 9 and 11 denote the amount of evidence a reading carries

for the two different cases.


LIST OF SYMBOLS

x The x co-ordinate of the robot

y The y co-ordinate of the robot

θ The heading of the robot

xt The state at time t

zt The measurement at time t

ut The control data at time t

ƞ The normalization constant for a probability distribution

Χt The particle set at time t

xt[m] State of particle m at time t

wt[m] Weight of particle m at time t

xt* The state of the robot as reported by the odometry at time t

δrot1 The first rotation encountered in decomposition of the robot’s motion between two

points

δrot2 The second rotation encountered in decomposition of the robot’s motion between two

points

δtrans The translation encountered in decomposition of the robot’s motion between two

points

zmax The maximum range reading of the robot’s sensor

zkt The range of beam k for the measurement at time t

θksens The angular orientation of the sensor beam k relative to the robot’s heading direction

(xksens , yksens) The relative location of the sensor in the robot's fixed, local coordinate system


CHAPTER 1 - INTRODUCTION

Simultaneous Localization and Mapping is one of the most challenging problems faced by the robotics research community in recent years. Localization and mapping are two separate tasks. Localization means determining the position of the robot given a pre-determined map of the environment, whereas mapping is the problem of determining the map of the environment given an accurate position of the robot at all times. Tackling both problems simultaneously gives rise to SLAM. Due to inaccuracy in the robot's sensors and actuators, the error accumulated in both these tasks makes it very difficult for SLAM to converge: an inaccurate map results in inaccurate localization, which in turn causes more inaccuracy in the map. The existence of such integral errors, which accumulate over time, requires some filtering technique to be employed while implementing SLAM.

The different types of filters used, and the differences in which variables are treated as part of the hidden state, have divided SLAM into many types. Initially the Kalman filter was extensively used in SLAM. It enabled SLAM to converge but had the limitation of assuming a fixed Gaussian distribution for all uncertainties. It was also unable to produce real-time results in environments containing a large number of features, due to the large covariance matrix.

Some techniques were developed which used a particle filter for estimating the robot pose while using a Kalman Filter for the map. The most widely known of these techniques is FastSLAM. Although they produced effective results in real time, they needed a very large number of features in the environment to converge.

Particle Filters are the basis of perhaps the most popular techniques developed for autonomous robot localization and mapping. The most widely used of this group of techniques are the Monte-Carlo methods. They have also been independently employed in SLAM. Particle Filters represent distributions by samples and do not assume any particular form of distribution, thus overcoming a major limitation of the Kalman Filter based techniques. It has been thought that maintaining a map for each and every particle in the particle filter would take too much time to be implemented in the real world. Hence many popular techniques in this class of SLAM have avoided maintaining multiple maps in one way or another. Our technique, however, maintains a map for every particle


while giving real-time results at the same time. Successful results have been achieved in simulation as well as in the real implementation. All parts of our algorithms are easy to understand and implement. We have used a simple odometry-based motion model and a similarly simple range-finder-based sensor model. We have represented maps in the form of occupancy grids. This eliminates the computationally expensive need for feature extraction, which is a separate field of study in itself. Hence we are able to do SLAM in environments with very few features.


CHAPTER 2 – BACKGROUND INFORMATION

2.1 Introduction

SLAM is one of the most widely researched subfields of robotics. An intuitive

understanding of the SLAM process can be conveyed through a hypothetical example.

Consider a simple mobile robot: a set of wheels connected to a motor and a camera,

complete with actuators—physical devices for controlling the speed and direction of the

unit. Now imagine the robot being used remotely by an operator to map inaccessible

places. The actuators allow the robot to move around, and the camera provides enough

visual information for the operator to understand where surrounding objects are and how

the robot is oriented in reference to them. What the human operator is doing is an example

of SLAM (Simultaneous Localization and Mapping). Determining the location of objects

in the environment is an instance of mapping, and establishing the robot position with

respect to these objects is an example of localization. The SLAM subfield of robotics

attempts to provide a way for robots to do SLAM autonomously. A solution to the SLAM

problem would allow a robot to make maps without any human assistance whatsoever.

2.2 Applications

If a solution to the SLAM problem could be found, it would open the door to

innumerable mapping possibilities where human assistance is cumbersome or impossible.

Maps could be made in areas which are dangerous or inaccessible to humans, such as deep-sea environments or unstable structures. A solution to SLAM would obviate outside

localization methods like GPS or man-made beacons. It would make robot navigation

possible in places like damaged (or undamaged) space stations and other planets. Even in

locations where GPS or beacons are available, a solution to the SLAM problem would be

invaluable. GPS is currently only accurate to within about one half of a meter, which is

often more than enough to be the difference between successful mapping and getting stuck.

Placing man-made beacons is expensive in terms of time and money; in situations where

beacons are an option, simply mapping by hand is almost always more practical.

2.3 History

SLAM is a major, yet relatively new, subfield of robotics. It was not until the mid-1980s that Smith and Durrant-Whyte developed a concrete representation of uncertainty in


feature location (Durrant-Whyte, [2002]). This was a major step in establishing the importance of finding a practical rather than a theoretical solution to robot navigation. Smith and Durrant-Whyte's paper provided a foundation for finding ways to deal with the errors. Soon thereafter a paper by Cheeseman, Chatila, and Crowley proved the existence of a correlation between feature location errors, caused by errors in motion which affect all feature locations (Durrant-Whyte, [2002]). This was the last step in solidifying the SLAM problem as we know it today.

There is a lot more we could say about the history of breakthroughs that have been

made in SLAM but the ones that have been mentioned are perhaps the most important

(Durrant-Whyte, [2002]), so instead it is time to take a closer look at the SLAM problem.

But before that it is necessary to review the preliminaries of the knowledge which leads to

our implementation of SLAM. This report provides the required understanding while remaining strictly within the domain of our work. This by no means implies that SLAM is limited only to the techniques mentioned here; the canvas of SLAM is too large to be covered in any single piece of literature. It would require a lot of study to develop an

understanding of SLAM and fully appreciate its vast domain and possible applications in

the ever-growing field of mobile robotics.


CHAPTER 3 – STATE ESTIMATION WITH PARTICLE FILTERING

3.1 The State

The environment, or world, of a robot is a dynamical system that possesses internal

state. It may be a room which the robot has to map, it may be a laboratory in which it has

to assist a scientist, or it may even be the assembly line of a factory in which it has to

perform various machining operations. The robot can acquire information about its

environment using its sensors. However, sensors are noisy, and there are usually many

things that cannot be sensed directly. As a consequence, the robot maintains an internal

belief with regards to the state of its environment, as it cannot be determined directly

through measurement of the sensors. The robot can also influence its environment through

its actuators, like its wheels or any other mechanism which changes the state of the

environment. However, the effect of doing so is often somewhat unpredictable because

firstly, the actuators do not follow the commands given to them accurately and secondly, the sensors which provide feedback of the movements of the actuators are also not noise-free.

Environments are characterized by the state variable. It is convenient to think of the

state as the collection of all aspects of the robot and its environment that can impact the

future. State may change over time, such as the location of people; or it may remain static

throughout the robot’s operation, such as the location of walls in (most) buildings. State

that changes will be called dynamic state, which distinguishes it from static, or non-changing, state. The state also includes variables regarding the robot itself, such as its pose,

velocity, whether or not its sensors are functioning correctly, and so on. Throughout this

document, the state will be denoted x; although the specific variables included in x will

depend on the context. In our case, the state x consists of the robot's pose (location and

heading), and the map of the environment. The state at time t will be denoted xt. Typical

state variables used in robotics research are:

1. The robot pose, which comprises its location and orientation relative to a global coordinate frame. Rigid mobile robots possess six such state variables, three for their

Cartesian coordinates, and three for their angular orientation, also called Euler

angles (pitch, roll, and yaw). For rigid mobile robots confined to planar

environments, the pose is usually given by three variables: its two location


coordinates in the plane and its heading direction (yaw). The robot pose is often

referred to as the kinematic state.

2. The configuration of the robot’s actuators, such as the joints of robotic

manipulators. Each degree of freedom in a robot arm is characterized by a one-dimensional configuration at any point in time, which is part of the kinematic state

of the robot.

3. The robot velocity and the velocities of its joints. A rigid robot moving through

space is characterized by up to six velocity variables, one for each pose variable.

Velocities are commonly referred to as dynamic state.

4. The location and features of surrounding objects in the environment. An object may

be a tree, a wall, or a pixel within a larger surface. Features of such objects may be

their visual appearance (color, texture). Depending on the granularity of the state

that is being modeled, robot environments possess between a few dozen and up to

hundreds of billions of state variables (and more). Just imagine how many bits it

will take to accurately describe a physical environment! For many of the problems

encountered in today’s research, the location of objects in the environment will be

static. In some problems, objects assume the form of landmarks, which are distinct,

stationary features of the environment that can be recognized reliably.

5. The location and velocities of moving objects and people. Often, the robot is not

the only moving actor in its environment. Other moving entities possess their own

kinematic and dynamic state.

There can be a huge number of other state variables. For example, whether or not a

sensor is broken is a state variable, as is the level of battery charge for a battery-powered

robot.

3.1.1 Completeness and Continuity of State

A state xt will be called complete if it is the best predictor of the future. Put

differently, completeness entails that knowledge of past states, measurements, or controls

carry no additional information that would help us to predict the future more accurately. It

is important to notice that this definition of completeness does not require the future to be a


deterministic function of the state. The future may be stochastic, but no variables prior to xt

(like the state at the previous time step), may influence the stochastic evolution of future

states, unless this dependence is mediated through the state xt. Temporal processes that

meet these conditions are commonly known as Markov chains.

The notion of state completeness is mostly of theoretical importance. In practice, it

is impossible to specify a complete state for any realistic robot system. A complete state

includes not just all aspects of the environment that may have an impact on the future, but

also the robot itself, the content of its computer memory, the brain dumps of surrounding

people, etc. Those are hard to obtain. Practical implementations therefore single out a

small subset of all state variables, such as the ones listed above. Such a state is called

incomplete state. However, it is often safe to assume an incomplete state to be a complete

one, as has been done in our work.

In most robotics applications, the state is continuous, meaning that xt is defined

over a continuum. A good example of a continuous state space is that of a robot pose, that

is, its location and orientation relative to an external coordinate system. Sometimes, the

state is discrete. An example of a discrete state space is the (binary) state variable that

models whether or not a sensor is broken. State spaces that contain both continuous and

discrete variables are called hybrid state spaces. The state in our application of SLAM is a

continuous one.

3.2 Interaction with the Environment

There are two fundamental types of interactions between a robot and its

environment: The robot can influence the state of its environment through its actuators.

And it can gather information about the state through its sensors. Both types of interactions

may co-occur, but they are often treated separately for all related purposes.

3.2.1 Sensor Measurements

Perception is the process by which the robot uses its sensors to obtain information

about the state of its environment. For example, a robot might take a camera image, a

range scan, or query its tactile sensors to receive information about the state of the

environment. The result of such a perceptual interaction is called a measurement,

observation or percept. Typically, sensor measurements arrive with some delay. Hence


they provide information about the state a few moments ago. Accuracy dictates that this

delay is accounted for in all calculations.

3.2.2 Control Actions

Control actions change the state of the world. They do so by actively asserting

forces on the robot’s environment. Examples of control actions include robot motion and

the manipulation of objects. Even if the robot does not perform any action itself, state

usually changes. Thus, for consistency, it is assumed that the robot always executes a

control action, even if it chooses not to move any of its motors. In practice, the robot

continuously executes controls and measurements are made concurrently.

A robot may keep a record of all past sensor measurements and control actions.

Such a collection is commonly referred to as the data. In accordance with the two types

of environment interactions, the robot has access to two different data streams.

3.2.3 Measurement Data

It provides information about a momentary state of the environment. Examples of

measurement data include camera images, range scans, and so on. Mostly, small timing

effects are ignored. (e.g., most laser sensors scan environments sequentially at very high

speeds, but it is simply assumed that the measurement corresponds to a specific point in

time). The measurement data at time t will be denoted zt. Furthermore, it is assumed that

the robot takes exactly one measurement at a time. This assumption is mostly for

notational convenience, as nearly all algorithms can easily be extended to robots that can

acquire variable numbers of measurements within a single time step. The notation

zt1:t2 = zt1 , zt1+1 , zt1+2 , ... , zt2

denotes the set of all measurements acquired from time t1 to time t2.

3.2.4 Control Data

Control data carry information about the change of state in the environment. In

mobile robotics, a typical example of control data is the velocity of a robot. Setting the

velocity to 10 cm per second for the duration of five seconds suggests that the robot’s pose,

after executing this motion command, is approximately 50cm ahead of its pose before

command execution. Thus, its main information regards the change of state. An alternative


source of control data are odometers. Odometers are sensors that measure the revolution of

a robot’s wheels. As such they convey information about the change of the state. Even

though odometers are sensors, odometry is treated as control data, since its main

information regards the change of the robot’s pose.

Control data is denoted ut. The variable ut will always correspond to the change of state in the time interval (t-1; t]. The sequence of control data from time t1 to time t2, for t1 ≤ t2, will be denoted

ut1:t2 = ut1 , ut1+1 , ut1+2 , ... , ut2

Since the environment may change even if a robot does not execute a specific

control action, the fact that time passed by constitutes, technically speaking, control

information. Hence, we assume that there is exactly one control data item per time step t.

The distinction between measurement and control is a crucial one, as both types of

data play fundamentally different roles in the material yet to come. Perception provides

information about the environment’s state; hence it tends to increase the robot’s

knowledge. Motion, on the other hand, tends to induce a loss of knowledge due to the

inherent noise in robot actuation and the stochasticity of robot environments, although

sometimes a control makes the robot more certain about the state. By no means is this

distinction intended to suggest that actions and perceptions are separated in time, i.e. the

robot does not move while taking sensor measurements. Rather, perception and control

take place concurrently, many sensors affect the environment, and the separation is

strictly for convenience. However, due to time issues in the calculations, we have

performed these two separately, i.e. the robot stops to make the measurements and estimate

the current state.

3.3 State Estimation Techniques

3.3.1 Belief Distributions

Another key concept in probabilistic robotics is that of a belief. A belief reflects the

robot’s internal knowledge about the state of the environment. It has already been

discussed that the state cannot be measured directly. For example, a robot’s pose might be

(x, y, z) in some global coordinate system, but it usually cannot know its pose (location

and heading), since poses are not measurable directly (not even with GPS!). Instead, the


robot must infer its pose from data. Therefore, the true state is distinguished from its

internal belief, or state of knowledge with regards to that state.

Probabilistic robotics represents beliefs through conditional probability

distributions. A belief distribution assigns a probability (or density value) to each possible

hypothesis with regards to the true state. Belief distributions are posterior probabilities

over state variables conditioned on the available data. The belief over a state variable xt is

denoted by bel(xt), which is an abbreviation for the posterior

bel(xt) = p(xt | z1:t , u1:t)

This posterior is the probability distribution over the state xt at time t, conditioned

on all past measurements z1:t and all past controls u1:t.

It may be noticed that it is assumed that the belief is taken after incorporating the measurement zt. Occasionally, it will prove useful to calculate a posterior before incorporating zt, just after executing the control ut. Such a posterior will be denoted bel⁻(xt) and is defined as follows:

bel⁻(xt) = p(xt | z1:t-1 , u1:t)

This probability distribution is often referred to as prediction in the context of

probabilistic filtering. This terminology reflects the fact that bel⁻(xt) predicts the state at time t based on the previous state posterior, before incorporating the measurement at time t. Calculating bel(xt) from bel⁻(xt) is called correction or the measurement update.

3.3.2 The Bayes Filter

The most general algorithm for calculating beliefs is given in the following text.

The Bayes filter is a very general form of state estimation and all further techniques are

derivatives of this filter. They include the Kalman Filter, the Particle Filter, the

Information Filter and the Histogram Filter. The filter we have used (Particle Filter) is also

a derived form of the Bayes Filter, hence it is necessary to provide some detail of it over

here. This algorithm calculates the belief distribution bel from measurement and control

data.

The table below (Table 1) depicts the basic Bayes filter in pseudo-algorithmic

form. The Bayes filter is recursive, that is, the belief bel(xt) at time t is calculated from the


belief bel(xt-1) at time t-1. Its input is the belief bel(xt-1) at time t-1, along with the most recent control ut and the most recent measurement zt. Its output is the belief bel(xt) at time t. The given table only depicts a single step of the Bayes Filter algorithm: the update rule. This

update rule is applied recursively, to calculate the belief bel(xt) from the belief bel(xt-1),

calculated previously.

The Bayes filter algorithm possesses two essential steps. In Line 3, it processes the

control ut. It does so by calculating a belief over the state xt based on the prior belief over

state xt-1 and the control ut. In particular, the belief bel⁻(xt) that the robot assigns to state xt

is obtained by the integral (sum) of the product of two distributions, the prior assigned to

xt-1, and the probability that control ut induces a transition from xt-1 to xt. As stated above,

this update step is called the control update, or prediction.

The second step of the Bayes filter is called the measurement update. In Line 4, the

Bayes filter algorithm multiplies the belief bel⁻(xt) by the probability that the measurement

zt may have been observed. It does so for each hypothetical posterior state xt.

As may be apparent, the resulting product is generally not a probability, that is, it

may not integrate to 1. Hence, the result is normalized, by virtue of the normalization

constant ƞ. This leads to the final belief bel(xt), which is returned in Line 6 of the algorithm.

To compute the posterior belief recursively, the algorithm requires an initial belief

bel(x0) at time t = 0 as boundary condition. If one knows the value of x0 with certainty,

bel(x0) should be initialized with a point mass distribution that centers all probability mass

on the correct value of x0, and assigns zero probability anywhere else. If one is entirely

ignorant about the initial value x0, bel(x0) may be initialized using a uniform distribution over the domain of x0. Partial knowledge of the initial value x0 can be expressed by non-uniform distributions; however, the two cases of full knowledge and full ignorance are the most common ones in practice.

1: Algorithm Bayes filter(bel(xt-1), ut, zt):
2:   for all xt do
3:     bel⁻(xt) = ∫ p(xt | ut, xt-1) bel(xt-1) dxt-1
4:     bel(xt) = ƞ p(zt | xt) bel⁻(xt)
5:   end for
6:   return bel(xt)

Table 1. The general algorithm for Bayes filtering.

The algorithm of the Bayes filter can only be implemented in the form stated here for very simple estimation problems. In particular, one either needs to be able to carry out the integration in Line 3 and the multiplication in Line 4 in closed form, or one needs to restrict the state to a finite state space, so that the integral in Line 3 becomes a (finite) sum.
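To make the finite-state case concrete, the following is a minimal C++ sketch of a single Bayes filter update over a discrete state space. The functions motion_prob and meas_prob are hypothetical placeholders for p(xt | ut, xt-1) and p(zt | xt); they are not part of our implementation and are shown only to illustrate Lines 3 and 4 of Table 1.

#include <vector>

// One update of the discrete Bayes filter over N states.
// belief[j] = bel(xt-1 = j); motion_prob(i, u, j) = p(xt = i | ut, xt-1 = j);
// meas_prob(z, i) = p(zt | xt = i). Both model functions are illustrative placeholders.
std::vector<double> bayes_filter_update(const std::vector<double>& belief,
                                        int u, int z,
                                        double (*motion_prob)(int, int, int),
                                        double (*meas_prob)(int, int)) {
    const int N = static_cast<int>(belief.size());
    std::vector<double> bel_bar(N, 0.0), bel(N, 0.0);
    double eta = 0.0;
    for (int i = 0; i < N; ++i) {
        // Line 3: prediction -- the integral becomes a finite sum.
        for (int j = 0; j < N; ++j)
            bel_bar[i] += motion_prob(i, u, j) * belief[j];
        // Line 4: measurement update, still unnormalized at this point.
        bel[i] = meas_prob(z, i) * bel_bar[i];
        eta += bel[i];
    }
    for (int i = 0; i < N; ++i)
        bel[i] /= eta;   // normalization by the constant eta
    return bel;
}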

3.3.3 The Particle Filter

3.3.3.1 Basic Algorithm

The particle filter is a nonparametric implementation of the Bayes filter. Particle

filters approximate the posterior by a finite number of parameters. However, they differ

from the Kalman filter and others in the way these parameters are generated, and in which

they populate the state space. The key idea of the particle filter is to represent the posterior

bel(xt) by a set of random state samples drawn from this posterior. Instead of representing

the distribution by a parametric form (the exponential function that defines the density of a

normal distribution), particle filters represent a distribution by a set of samples drawn from

this distribution. Such a representation is approximate, but it is nonparametric, and

therefore can represent a much broader space of distributions than, for example, Gaussians.

In particle filters, the samples of a posterior distribution are called particles and are

denoted

Xt = xt[1] , xt[2] , ... , xt[M]

Each particle xt[m] (with 1 ≤ m ≤ M) is a concrete instantiation of the state at time t, that is, a hypothesis as to what the true world state may be at time t. Here M denotes

the number of particles in the particle set Xt. In practice, the number of particles M is often

a large number, e.g., M = 1000. In some implementations M is a function of t or of other

quantities related to the belief bel(xt).

The intuition behind particle filters is to approximate the belief bel(xt) by the set of

particles Xt. Ideally, the likelihood for a state hypothesis xt to be included in the particle set

Xt shall be proportional to its Bayes filter posterior bel(xt):

xt[m] ~ p(xt | z1:t , u1:t)

As a consequence of this, the denser a sub-region of the state space is populated by

samples, the more likely it is that the true state falls into this region. As will follow below,

the property holds only asymptotically as M approaches infinity for the standard particle

filter algorithm. For finite M, particles are drawn from a slightly different distribution. In

practice, this difference is negligible as long as the number of particles is not too small

(e.g., M >100).

1.  Algorithm Particle filter(Xt-1 , ut , zt):
2.    X`t = Xt = Φ
3.    for m = 1 to M do
4.      sample xt[m] ~ p(xt | ut , xt-1[m])
5.      wt[m] = p(zt | xt[m])
6.      X`t = X`t + (xt[m] , wt[m])
7.    end for
8.    for m = 1 to M do
9.      draw i with probability ∝ wt[i]
10.     add xt[i] to Xt
11.   end for
12.   return Xt

Table 2. The particle filter algorithm, a variant of the Bayes filter based on importance sampling.


Just like all other Bayes filter algorithms, the particle filter algorithm constructs the

belief bel(xt) recursively from the belief bel(xt-1) one time step earlier. Since beliefs are

represented by sets of particles, this means that particle filters construct the particle set Xt

recursively from the set Xt-1. The most basic variant of the particle filter algorithm is stated

in the table given on the previous page. The input of this algorithm is the particle set Xt-1,

along with the most recent control ut and the most recent measurement zt. The algorithm

then first constructs a temporary particle set X`t which is reminiscent of (but not equivalent to) the belief bel⁻(xt). It does this by systematically processing each particle xt-1[m] in the input particle set Xt-1 as follows.

1. Line 4 generates a hypothetical state xt[m] for time t based on the particle xt-1[m] and

the control ut. The resulting sample is indexed by m, indicating that it is generated

from the m-th particle in Xt-1. This step involves sampling from the next state

distribution p(xt | ut, xt-1). To implement this step, one needs to be able to sample

from p(xt | ut, xt-1). The ability to sample from the state transition probability is not

given for arbitrary distributions p(xt | ut, xt-1). However, many major distributions

possess efficient algorithms for generating samples. The set of particles resulting

from iterating Step 4 M times is the filter's representation of bel⁻(xt).

2. Line 5 calculates for each particle xt[m] the so-called importance factor, denoted wt[m]. Importance factors are used to incorporate the measurement zt into the particle set. The importance factor, thus, is the probability of the measurement zt under the particle xt[m], that is, wt[m] = p(zt | xt[m]). If we interpret wt[m] as the weight of a particle, the set of weighted particles represents (in approximation) the Bayes filter posterior bel(xt).

3. The real “trick” of the particle filter algorithm occurs in Lines 8 through 11. These

lines implement what is known as resampling or importance resampling. The algorithm draws with replacement M particles from the temporary set X`t. The

probability of drawing each particle is given by its importance weight. Resampling

transforms a particle set of M particles into another particle set of the same size. By

incorporating the importance weights into the resampling process, the distribution of the particles changes: whereas before the resampling step they were distributed according to bel⁻(xt), after the resampling they are distributed (approximately) according to the posterior bel(xt) = ƞ p(zt | xt) bel⁻(xt). In fact, the resulting sample set usually possesses many duplicates, since particles are drawn with replacement. The particles that are not contained in Xt tend to be the particles with lower importance weights.

The resampling step has the important function of forcing particles back to the posterior bel(xt). In fact, an alternative (and usually inferior) version of the particle filter would never resample, but instead would maintain for each particle an importance weight that is initialized by 1 and updated multiplicatively:

wt[m] = p(zt | xt[m]) wt-1[m]

Such a particle filter algorithm would still approximate the posterior, but many of

its particles would end up in regions of low posterior probability. As a result, it would

require many more particles; how many depends on the shape of the posterior. The

resampling step is a probabilistic implementation of the Darwinian idea of survival of the

fittest: It refocuses the particle set to regions in state space with high posterior probability.

By doing so, it focuses the computational resources of the filter algorithm to regions in the

state space where they matter the most.
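As an illustration only, the following C++ sketch mirrors the particle filter of Table 2. The functors sample_motion_model and measurement_weight are assumed placeholders for sampling from p(xt | ut, xt-1) and evaluating p(zt | xt) (the models of Chapters 4 and 5); this is a sketch of the generic algorithm, not the code that was run on the PeopleBot.

#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

// One particle filter update: sample, weight, resample (Table 2, Lines 3-11).
template <class Control, class Scan, class Motion, class Sensor>
std::vector<Particle> particle_filter(const std::vector<Particle>& prev,
                                      const Control& u, const Scan& z,
                                      Motion sample_motion_model,
                                      Sensor measurement_weight,
                                      std::mt19937& rng) {
    std::vector<Particle> temp;                       // the temporary set X`t
    std::vector<double> w;                            // importance factors wt[m]
    for (const Particle& p : prev) {                  // Lines 3-7
        Particle q = sample_motion_model(p, u, rng);  // Line 4: draw from p(xt | ut, xt-1)
        q.weight = measurement_weight(z, q);          // Line 5: p(zt | xt)
        temp.push_back(q);
        w.push_back(q.weight);
    }
    // Lines 8-11: draw M particles with replacement, with probability
    // proportional to weight (the weights need not be normalized for this draw).
    std::discrete_distribution<std::size_t> draw(w.begin(), w.end());
    std::vector<Particle> next;
    for (std::size_t m = 0; m < prev.size(); ++m)
        next.push_back(temp[draw(rng)]);
    return next;
}

The comment on the weighted draw also hints at why, as discussed in Section 3.3.3.3.2, weight normalization can be skipped: resampling only uses the relative magnitudes of the weights.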

3.3.3.2 Importance Resampling

For an understanding of the importance resampling step, the figure below gives an

illustration of importance factors in particle filters:

(a) We seek to approximate the target density f.

(b) Instead of sampling from f directly, we can only generate samples from a different density, g. Samples drawn from g are shown at the bottom of this diagram.

(c) A sample of f is obtained by attaching the weight f(x)/g(x) to each sample x. In particle filters, f corresponds to the posterior belief bel(xt) and g to the predicted belief bel⁻(xt).


Figure 1 Illustration of the importance resampling step. The magnitudes of the blue lines show the number of samples or

particles in the current particle set.

3.3.3.3 Limitations and Remedies

3.3.3.3.1 Inclusion of Random Samples

One of the limitations of the Particle Filter is encountered in the re-sampling step.

Although a highly weighted particle has a high probability of being drawn for inclusion in

the new particle set, it also has a non-zero probability of not being drawn. So, unless a very


large number of particles are maintained, we have a chance of losing more accurate state

estimates. We have solved this problem by including some random particles regardless of

their weights in the new particle set.
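A minimal sketch of this remedy is shown below: a fixed fraction of the new particle set is drawn uniformly at random from the old particles regardless of weight, and the rest are drawn in the usual weighted manner. The 10% fraction is an assumed value for illustration, not the figure used in our implementation.

#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

// Resampling with random-particle injection: some particles are kept regardless
// of their weights so that accurate hypotheses are not lost by chance.
std::vector<Particle> resample_with_random(const std::vector<Particle>& temp,
                                           std::mt19937& rng) {
    std::vector<double> w;
    for (const Particle& p : temp) w.push_back(p.weight);
    std::discrete_distribution<std::size_t> weighted(w.begin(), w.end());
    std::uniform_int_distribution<std::size_t> uniform(0, temp.size() - 1);

    const std::size_t n_random = temp.size() / 10;    // assumed: 10% kept at random
    std::vector<Particle> next;
    for (std::size_t m = 0; m < temp.size(); ++m)
        next.push_back(m < n_random ? temp[uniform(rng)]     // irrespective of weight
                                    : temp[weighted(rng)]);  // proportional to weight
    return next;
}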

3.3.3.3.2 Use of Un-normalized Weights

Most applications require that the weights assigned to the particles based on the

measurement model are normalized as they represent probabilities. It may be tedious in

some situations to calculate this normalizer. However we have not implemented this step

because our maps in the state vectors are in deterministic form and do not need to be

represented probabilistically. So, for our technique, normalized weights are not required in

the re-sampling step.


CHAPTER 4 - MOTION MODEL

This and the next chapter describe the two vital components for implementing the

particle filter algorithm described thus far, the motion and the measurement models. This

chapter focuses on the motion model. It gives some explanation of probabilistic motion

models as they are being used in actual robotics implementations. These models comprise

the state transition probability p(xt | ut , xt-1), which plays an essential role in the prediction

step of the Bayes filter. The next chapter will describe probabilistic models of sensor

measurements p(zt | xt), which are essential for the measurement update step.

Robot kinematics has been studied thoroughly in past decades. However, it has

almost exclusively been addressed in deterministic form. Probabilistic robotics generalizes

kinematic equations to the fact that the outcome of a control is uncertain, due to control

noise or un-modeled exogenous effects. Following the theme of our work, our description

will be probabilistic: The outcome of a control will be described by a posterior probability.

In doing so, the resulting models will be amenable to the probabilistic state estimation

technique, the particle filter described in the previous chapters.

Our exposition focuses entirely on mobile robot kinematics for robots operating in

planar environments. In this way, it is much more specific than most contemporary

treatments of kinematics. However, our restricted choice of study is by no means to be interpreted as implying that probabilistic ideas are limited to kinematic models of mobile robots.

Rather, it is descriptive of the present state of the art, as probabilistic techniques have

enjoyed their biggest successes in mobile robotics. As the model presented here illustrates,

deterministic robot actuator models are “probilified” by adding noise variables that

characterize the types of uncertainty that exist in robotic actuation.

In theory, the goal of a proper probabilistic model may appear to accurately model

the specific types of uncertainty that exist in robot actuation and perception. In practice,

the exact shape of the model often seems to be less important than the fact that some

provisions for uncertain outcomes are provided in the first place. In fact, many of the

models that have proven most successful in practical applications vastly overestimate the

amount of uncertainty. By doing so, the resulting algorithms are more robust to violations

of the Markov assumptions, such as un-modeled state and the effect of algorithmic

approximations.


4.1 Kinematic Configuration

Kinematics is the calculus describing the effect of control actions on the

configuration of a robot. The configuration of a rigid mobile robot is commonly described

by six variables, its three-dimensional Cartesian coordinates and its three Euler angles

(roll, pitch, yaw) relative to an external coordinate frame. Most of SLAM is largely

restricted to mobile robots operating in planar environments, whose kinematic state, or

pose, is summarized by three variables. This is illustrated in the figure below. The robot’s

pose comprises its two-dimensional planar coordinates relative to an external coordinate

frame, along with its angular orientation. Denoting the former as x and y (not to be

confused with the state variable xt), and the latter by θ, the pose of the robot is described by the following vector:

(x , y , θ).

Figure 2. Robot pose, shown in a global coordinate system.

The orientation of a robot is often called bearing, or heading direction. As shown

in the figure, we postulate that a robot with orientation θ = 0 points into the direction of its

x-axis. A robot with orientation θ = 0.5π points into the direction of its y-axis. Pose

without orientation will be called location.

For simplicity, locations in this report are usually described by two-dimensional

vectors, which refer to the x-y coordinates of an object:

(x , y).
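For concreteness, the planar pose used throughout the later sketches in this report can be written as a simple structure; this is an illustrative definition only, not a type taken from the Player library.

// Planar kinematic state (pose): location (x, y) plus heading theta,
// expressed in a global coordinate frame; theta = 0 points along the x-axis.
struct Pose {
    double x;       // x coordinate
    double y;       // y coordinate
    double theta;   // heading (yaw), in radians
};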


4.2 Probabilistic Kinematics

The probabilistic kinematic model or motion model plays the role of the state

transition model in mobile robotics. This model is the familiar conditional density

p(xt | ut , xt-1)

Here xt and xt-1 are both robot poses (and not just its x-coordinates), and ut is a

motion command. This model describes the posterior distribution over kinematic states that a robot assumes when executing the motion command ut when its pose is xt-1. In

implementations, ut is sometimes provided by a robot’s odometry. However, for

conceptual reasons ut is referred to as control.

Commonly, two specific probabilistic motion models p(xt | ut , xt-1) are used, both for mobile robots operating in the plane: the velocity-based motion model and the odometry-based motion model. Both models are somewhat complementary in the type of motion

information that is being processed. The first model assumes that the motion data ut

specifies the velocity commands given to the robot’s motors. Many commercial mobile

robots (e.g., differential drive, synchro drive) are actuated by independent translational and

rotational velocities, or are best thought of being actuated in this way. The second model

assumes that one is provided with odometry information. Most commercial bases provide

odometry using kinematic information (distance traveled, angle turned). The resulting

probabilistic model for integrating such information is somewhat different from the

velocity model.

In practice, odometry models tend to be more accurate than velocity models, for the

simple reason that most commercial robots do not execute velocity commands with the level of accuracy that can be obtained by measuring the revolution of the robot's wheels. However, odometry is only available after the fact. Hence it cannot be used for motion

planning. Planning algorithms such as collision avoidance have to predict the effects of

motion. Thus, odometry models are usually applied for estimation, whereas velocity

models are used for probabilistic motion planning.


4.3 Odometry Motion Model

Figure 3 Odometry model: The robot motion in the time interval (t - 1; t] is approximated by a rotation δrot1, followed by

a translation δtrans and a second rotation δrot2. The turns and translation are noisy.

In our work, we have used the odometry measurements as the basis for calculating

the robot’s motion over time. Odometry is commonly obtained by integrating wheel

encoder information; most commercial robots make such integrated pose estimation

available in periodic time intervals (e.g., every tenth of a second). Practical experience

suggests that odometry, while still erroneous, is usually more accurate than velocity. Both

suffer from drift and slippage, but velocity additionally suffers from the mismatch between

the actual motion controllers and its (crude) mathematical model. However, odometry is

only available in retrospect, after the robot moved. This poses no problem for filter

algorithms, but makes this information unusable for accurate motion planning and control.

Here we explain the motion model that we have used. It uses odometry

measurements in lieu of controls. Technically, odometry are sensor measurements, not

controls. To model odometry as measurements, the resulting Bayes filter would have to

include the actual velocity as state variables—which increases the dimension of the state

space. To keep the state space small, it is therefore common to simply consider the

odometry as if it was a control signal. In this section, we will do exactly this, and treat

odometry measurements as controls. The resulting model is at the core of many of today’s

best probabilistic robot systems.


Let us define the format of our control information. At time t, the correct pose of

the robot is modeled by the random variable xt. The robot odometry estimates this pose;

however, due to drift and slippage there is no fixed coordinate transformation between the

coordinates used by the robot’s internal odometry and the physical world coordinates. In

fact, knowing this transformation would solve the robot localization problem!

The odometry model uses the relative information of the robot’s internal odometry.

More specifically, in the time interval (t-1, t], the robot advances from a pose xt-1 to pose xt.

The odometry reports back to us a related advance from x*t-1 = (x*, y*, θ*) to x*t = (x**, y**, θ**). Here the asterisk indicates that these are odometry measurements, embedded in a

robot-internal coordinate whose relation to the global world coordinates is unknown. The

key insight for utilizing this information in state estimation is that the relative difference

between x*t-1 and x*t, under an appropriate definition of the term “difference,” is a good

estimator for the difference of the true poses xt-1 and xt. The motion information ut is, thus,

given by the pair

ut = (x*t-1 , x*t)

To extract relative odometry, ut is transformed into a sequence of three steps: a

rotation, followed by a straight line motion (translation) and another rotation. The figure

above (Figure 3) illustrates this decomposition: the initial turn is called δrot1, the translation δtrans, and the second rotation δrot2. As may be verified, each pair of positions has a unique parameter

vector (δrot1, δtrans, δrot2), and these parameters are sufficient to reconstruct the relative

motion between the positions. Thus, this vector is a sufficient statistic of the relative

motion encoded by the odometry. Our motion model assumes that these three parameters

are corrupted by independent noise.

If particle filters are used for localization, we would also like to have an algorithm

for sampling from p(xt | ut , xt-1). The algorithm shown in the table below implements the

sampling approach. It accepts an initial pose xt-1 and an odometry reading ut as input, and

outputs a random xt distributed according to p(xt | ut , xt-1). Note that it randomly guesses a

pose xt (Lines 5-10), instead of computing the probability of a given xt. This sampling

algorithm is quite easy to implement in practice. The mathematical derivation is straightforward and can be looked up in [1].


1:  Algorithm sample motion model odometry(ut , xt-1):
2:    δrot1 = atan2(y** - y*, x** - x*) - θ*
3:    δtrans = √( (x** - x*)² + (y** - y*)² )
4:    δrot2 = θ** - θ* - δrot1
5:    δ^rot1 = δrot1 - sample(α1 δrot1 + α2 δtrans)
6:    δ^trans = δtrans - sample(α3 δtrans + α4 (δrot1 + δrot2))
7:    δ^rot2 = δrot2 - sample(α1 δrot2 + α2 δtrans)
8:    x` = x + δ^trans cos(θ + δ^rot1)
9:    y` = y + δ^trans sin(θ + δ^rot1)
10:   θ` = θ + δ^rot1 + δ^rot2
11:   return xt = (x`, y`, θ`)

Table 3. Algorithm for sampling from p(xt | ut , xt-1) based on odometry information. Here the pose at time t-1 is represented by xt-1 = (x, y, θ). The control is a differentiable set of two pose estimates obtained by the robot's odometer, ut = (x*t-1 , x*t), with x*t-1 = (x*, y*, θ*) and x*t = (x**, y**, θ**).

The function sample (α) takes an argument which is the variance of a zero-centered

normal distribution and returns a random sample from that distribution. Note that this need

not be a Gaussian distribution but can be representative of any form, thus showing the

flexibility of the sampling based techniques.
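Table 3 translates almost line for line into the C++ sketch below. The noise gains a1..a4 and their default values are assumptions made for illustration, not the values tuned for the PeopleBot, and absolute values are taken inside the arguments to sample() so that the variance passed to it is never negative.

#include <cmath>
#include <random>

struct Pose { double x, y, theta; };   // same layout as the pose sketch of Section 4.1

// Draws one pose from p(xt | ut, xt-1), where the control ut is the odometry
// pose pair (prev_odo, curr_odo) and prev is the previous pose xt-1 (Table 3).
Pose sample_motion_model_odometry(const Pose& prev_odo, const Pose& curr_odo,
                                  const Pose& prev, std::mt19937& rng,
                                  double a1 = 0.01, double a2 = 0.01,
                                  double a3 = 0.01, double a4 = 0.01) {
    // sample(var): zero-centered normal sample with the given variance (Lines 5-7 helper).
    auto sample = [&rng](double var) {
        return std::normal_distribution<double>(0.0, std::sqrt(var))(rng);
    };
    // Lines 2-4: decompose the odometry increment into rot1, trans, rot2.
    double drot1  = std::atan2(curr_odo.y - prev_odo.y, curr_odo.x - prev_odo.x) - prev_odo.theta;
    double dtrans = std::hypot(curr_odo.x - prev_odo.x, curr_odo.y - prev_odo.y);
    double drot2  = curr_odo.theta - prev_odo.theta - drot1;
    // Lines 5-7: perturb each component with independent noise.
    double hrot1  = drot1  - sample(a1 * std::fabs(drot1) + a2 * dtrans);
    double htrans = dtrans - sample(a3 * dtrans + a4 * (std::fabs(drot1) + std::fabs(drot2)));
    double hrot2  = drot2  - sample(a1 * std::fabs(drot2) + a2 * dtrans);
    // Lines 8-11: apply the noisy increment to the previous pose xt-1.
    return { prev.x + htrans * std::cos(prev.theta + hrot1),
             prev.y + htrans * std::sin(prev.theta + hrot1),
             prev.theta + hrot1 + hrot2 };
}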


CHAPTER 5 - MEASUREMENT MODEL

Measurement models comprise the second domain-specific model in probabilistic

robotics, next to motion models. Measurement models describe the formation process by

which sensor measurements are generated in the physical world. Today’s robots use a

variety of different sensor modalities, such as tactile sensors, range sensors, or cameras.

The specifics of the model depend on the sensor: Imaging sensors are best modeled by

projective geometry, whereas sonar sensors are best modeled by describing the sound wave

and its reflection on surfaces in the environment.

Probabilistic robotics explicitly models the noise in sensor measurements. Such

models account for the inherent uncertainty in the robot’s sensors. Formally, the

measurement model is defined as a conditional probability distribution p(zt | xt,m), where

xt is the robot pose, zt is the measurement at time t, and m is the map of the environment.

Although we have considered and used only the range-finder model, the underlying

principles and equations are not limited to this type of sensor. Instead, the basic principle

can be applied to any kind of sensor, such as a camera or a bar-code operated landmark

detector.

To illustrate the basic problem of mobile robots that use their sensors to perceive

their environment, the figure below shows a typical range-scan obtained in a corridor with

a mobile robot equipped with a cyclic array of 24 ultrasound sensors. The distances

measured by the individual sensors are depicted in black and the map of the environment is

shown in light grey. Most of these measurements correspond to the distance of the nearest

object in the measurement cone; some measurements, however, have failed to detect any

object. The inability for sonar to reliably measure range to nearby objects is often

paraphrased as sensor noise. Technically, this noise is quite predictable: When

Page 35: IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

25

Figure 4 Typical ultrasound scan of a robot in its environment.

measuring smooth surfaces (such as walls) at an angle, the echo tends to travel into a

direction other than the sonar sensor, as illustrated. This effect is called specular reflection

and often leads to overly large range measurements. The likelihood of specular reflection

depends on a number of properties, such as the surface material and angle, the range of the

surface, and the sensitivity of the sonar sensor. Other errors, such as short readings, may be

caused by cross-talk between different sensors (sound is slow!) or by unmodeled objects in

the proximity of the robot, such as people. As a rule of thumb, the more accurate a sensor

model, the better the results. In practice, however, it is often impossible to model a sensor

accurately, primarily for two reasons:

First, developing an accurate sensor model can be extremely time-consuming, and

second, an accurate model may require state variables that we might not know (such as the

surface material). Probabilistic robotics accommodates the inaccuracies of sensor models

in the stochastic aspects: By modeling the measurement process as a conditional

probability density, p(zt | xt), instead of a deterministic function zt = f(xt), the uncertainty in

the sensor model can be accommodated in the nondeterministic aspects of the model.

Herein lies a key advantage of probabilistic techniques over classical robotics: in practice,

we can get away with extremely crude models.

However, when devising a probabilistic model, care has to be taken to capture the

different types of uncertainties that may affect a sensor measurement. Many sensors

generate more than one numerical measurement value when queried. For example, cameras generate entire arrays of values (brightness, saturation, color); similarly, range finders


usually generate entire scans of ranges. The number of such measurement values within a

measurement zt is denoted by K, hence written:

zt = { zt[1], zt[2], . . . , zt[K] }

zt[k] is used to refer to an individual measurement (e.g., one range value). p(zt | xt, m) is approximated by the product of the individual measurement likelihoods:

p(zt | xt, m) = ∏ k=1..K p(zt[k] | xt, m)

Technically, this amounts to an independence assumption between the noise in

each individual measurement value—just as our Markov assumption assumes independent

noise over time. This assumption is only true in the ideal case.

To give some detail, dependencies typically exist due to a range of factors: people,

who often corrupt measurements of several adjacent sensors; errors in the model m;

approximations in the posterior, and so on.

5.1 The Range Finder Sensor Model

We will now describe the sensor model we used in our calculations to compute the

weights for the resampling step of the particle filter. This model lacks a plausible physical

explanation. In fact, it is an “ad hoc” algorithm that does not necessarily compute a

conditional probability relative to any meaningful generative model of the physics of

sensors. However, the approach works well in practice, and the computation is typically

more efficient.

The key idea is to first project the end points of a sensor scan zt into the global

coordinate space of the map. To project an individual sensor measurement ztk into the

global coordinate frame of the map m, we need to know where in global coordinates the

robot’s coordinate system is, where on the robot the sensor beam zk originates, and where

it points. Let xt = (x y θ) denote a robot pose at time t. Keeping with the two-dimensional

view of the world, denote the relative location of the sensor in the robot’s fixed, local

coordinate system by (xk,sens , yk,sens), and the angular orientation of the sensor beam relative to the robot's heading direction by θk,sens. These values are sensor-specific. The end point of


the measurement zkt is now mapped into the global coordinate system via the obvious

trigonometric transformation.

These coordinates are only meaningful when the sensor detects an obstacle. If the

range sensor takes on its maximum value, that is, zkt = zmax, these coordinates have no

meaning in the physical world (even though the measurement does carry information).

This measurement model simply discards max-range readings.

For any sensor, we assume three types of sources of noise and uncertainty:

1. Measurement noise. Noise arising from the measurement process is modeled

using Gaussians. In x-y-space, this involves finding the nearest obstacle in the map.

Let dist denote the Euclidean distance between the measurement coordinates (xztk , yztk) and the nearest object in the map m. Then the probability of a sensor measurement is given by a zero-centered Gaussian modeling the sensor noise:

phit ∝ exp( − dist² / (2 σ²hit) )

2. Failures. We assume that max-range readings have a distinct large likelihood.

This is modeled by a point-mass distribution pmax.

3. Random measurements. Finally, a uniform distribution prand is used to model

random noise in perception.

The desired probability p(zt[k] | xt, m) mixes all three distributions,

p(zt[k] | xt, m) = zhit · phit + zrand · prand + zmax · pmax ,

using the mixing weights zhit, zrand and zmax. These mixing parameters carry over directly to our sensor model; they can be adjusted by hand or learned online.

The table below provides an algorithm for calculating the measurement probability

using this method. The outer loop multiplies the individual values of p(zt[k] | xt, m), assuming independence between the noise in different sensor beams. Line 4 checks if the


sensor reading is a max range reading, in which case it is simply ignored. Lines 5 to 8

handle the interesting case:

1: Algorithm range_finder_model(zt, xt, m):
2:    q = 1
3:    for all k do
4:        if zt[k] ≠ zmax
5:            xztk = x + xk,sens cos θ − yk,sens sin θ + zt[k] cos(θ + θk,sens)
6:            yztk = y + yk,sens cos θ + xk,sens sin θ + zt[k] sin(θ + θk,sens)
7:            dist² = min x′,y′ { (xztk − x′)² + (yztk − y′)² | ⟨x′, y′⟩ occupied in m }
8:            q = q · ( zhit · prob(dist², σ²hit) + zrand / zmax )
9:    return q

Table 4 Algorithm for computing the weight of a range finder scan using the Euclidean distance to the nearest neighbor. The function prob(dist², σ²hit) computes the probability of the distance under a zero-centered Gaussian distribution with variance σ²hit.

Here the distance to the nearest obstacle in x-y-space is computed (Line 7), and the

resulting likelihood is obtained in Line 8 by mixing a normal and a uniform distribution

(the function prob(dist², σ²hit) computes the probability of dist² under a zero-centered Gaussian distribution with variance σ²hit).

The most costly operation in this algorithm is probably the search for the nearest

neighbor in the map (Line 7).
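The weight computation of Table 4 can be sketched in C++ as follows. This is only an illustrative sketch under several simplifying assumptions (the sensor is taken to sit at the robot centre, and the nearest occupied cell is found by brute force, whereas a practical implementation would precompute a distance map); our actual weighting code is part of Annexure A.

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pose2D { double x, y, theta; };           // robot pose in world coordinates (m, rad)

struct OccGrid {
    int rows, cols;
    double res;                                  // cell size in metres
    std::vector<char> cell;                      // 1 = occupied, 0 = free, 8 = unexplored
    bool occupied(int r, int c) const { return cell[r * cols + c] == 1; }
};

// prob(dist^2, var): zero-centred Gaussian evaluated at a squared distance.
static double prob(double distSq, double var)
{
    return exp(-distSq / (2.0 * var)) / sqrt(2.0 * M_PI * var);
}

// Weight of a scan: product over beams of (zhit * phit + zrand / zmax), as in Table 4.
double rangeFinderModel(const std::vector<double>& z,        // one range per beam (m)
                        const std::vector<double>& bearing,  // beam angle w.r.t. heading (rad)
                        const Pose2D& xt, const OccGrid& m,
                        double zmax, double varHit, double zhit, double zrand)
{
    double q = 1.0;
    for (std::size_t k = 0; k < z.size(); ++k) {
        if (z[k] >= zmax) continue;              // Line 4: max-range readings are discarded
        // Lines 5-6: project the beam end point into world coordinates
        // (sensor assumed at the robot centre, so the (xk,sens, yk,sens) offset is zero).
        double ex = xt.x + z[k] * cos(xt.theta + bearing[k]);
        double ey = xt.y + z[k] * sin(xt.theta + bearing[k]);
        // Line 7: squared distance to the nearest occupied cell (brute force).
        double distSq = std::numeric_limits<double>::max();
        for (int r = 0; r < m.rows; ++r)
            for (int c = 0; c < m.cols; ++c)
                if (m.occupied(r, c)) {
                    double dx = ex - c * m.res, dy = ey - r * m.res;
                    distSq = std::min(distSq, dx * dx + dy * dy);
                }
        // Line 8: mix the Gaussian with a small uniform component.
        q *= zhit * prob(distSq, varHit) + zrand / zmax;
    }
    return q;
}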


CHAPTER 6 - OCCUPANCY GRID MAP BUILDING

The mobile robotics literature often distinguishes topological from metric

representations of space. While no clear definition of these terms exists, topological representations are often thought of as coarse graph-like representations, where nodes in

the graph correspond to significant places (or features) in the environment. For indoor

environments, such places may correspond to intersections, T-junctions, dead ends, and so

on. The resolution of such decompositions, thus, depends on the structure of the

environment. Alternatively, one might decompose the state space using regularly spaced

grids. Such a decomposition does not depend on the shape and location of the

environmental features. Grid representations are often thought of as metric although,

strictly speaking, it is the embedding space that is metric, not the decomposition. In mobile

robotics, the spatial resolution of grid representations tends to be higher than that of

topological representations. For instance, some applications use grid decompositions

with cell sizes of 10 centimeters or less. This increased accuracy comes at the expense of

increased computational costs.

Occupancy grid maps address the problem of generating consistent maps from

noisy and uncertain measurement data, under the assumption that the robot pose is known.

The basic idea of the occupancy grids is to represent the map as a field of random

variables, arranged in an evenly spaced grid. Each random variable is binary and

corresponds to the occupancy of the location it covers. Occupancy grid mapping

algorithms implement approximate posterior estimation for those random variables. The

reader may wonder about the significance of a mapping technique that requires exact pose

information. After all, no robot’s odometry is perfect! The main utility of the occupancy

grid technique is in post-processing: Many of the SLAM techniques do not generate maps

fit for path planning and navigation. Occupancy grid maps are often used after solving the

SLAM problem by some other means, and taking the resulting path estimates for granted.

In our case, however, we have constructed the maps in the form of discrete occupancy grids, with each cell of the grid being occupied, unoccupied or unexplored.

6.1 Algorithm

A somewhat simplistic example of a map building function for range finders is

given in the following table. This model assigns to all cells within the sensor cone whose


range is close to the measured range an occupancy value of locc. In the table, the width of

this region is controlled by the parameter α, and the opening angle of the beam is given by

β.

The algorithm calculates the inverse model by first determining the beam index k

and the range r for the center-of-mass of the cell mi. This calculation is carried out in lines

2 through 5 in the table. As usual, we assume that the robot pose is given by xt = (x y θ). In

line 7, it returns the prior for occupancy in log-odds form whenever the cell is outside the

measurement range of this sensor beam, or if it lies more than α/2 behind the detected

range zt[k]. In line 9, it returns locc > l0 if the range of the cell is within ± α/2 of the detected range zt[k]. It returns lfree < l0 if the range to the cell is shorter than the measured range by

more than α /2.

While occupancy maps are inherently probabilistic, they tend to quickly converge

to estimates that are close to the two extreme posteriors, 1 and 0.

1: Algorithm occupancy_grid_map_building(i, xt, zt):
2:    let (xi, yi) be the center-of-mass of cell mi
3:    r = √( (xi − x)² + (yi − y)² )
4:    φ = atan2(yi − y, xi − x) − θ
5:    k = argminj |φ − θj,sens|
6:    if r > min(zmax, zt[k] + α/2) or |φ − θk,sens| > β/2 then
7:        return l0
8:    if zt[k] < zmax and |r − zt[k]| < α/2
9:        return locc
10:   if r ≤ zt[k]
11:       return lfree
12:   endif

Table 5 A simple inverse measurement model for robots equipped with range finders. Here α is the thickness of obstacles and β the width of a sensor beam. The values locc and lfree in lines 9 and 11 denote the amount of evidence a reading carries for the two different cases.
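For illustration, the inverse model of Table 5 might be coded as below for a single cell; the constants l0, locc and lfree and the brute-force beam search are placeholders, and this sketch is not the (simpler, three-valued) map update we actually used. Each cell's log-odds value would then be accumulated as li ← li + (inverse model value) − l0 and converted back to a probability only when the map is rendered.

#include <algorithm>
#include <cmath>
#include <vector>

struct Pose2D { double x, y, theta; };           // robot pose (m, rad)

// Inverse measurement model of Table 5 for the cell centred at (xi, yi).
// z holds one range per beam and bearings[k] the angle of beam k relative to the
// robot heading; angles are assumed to be already normalised to (-pi, pi].
double inverseRangeSensorModel(double xi, double yi, const Pose2D& xt,
                               const std::vector<double>& z,
                               const std::vector<double>& bearings,
                               double zmax, double alpha, double beta,
                               double l0, double locc, double lfree)
{
    // Lines 3-4: range and bearing of the cell centre as seen from the robot.
    double r   = hypot(xi - xt.x, yi - xt.y);
    double phi = atan2(yi - xt.y, xi - xt.x) - xt.theta;

    // Line 5: the beam whose bearing is closest to phi.
    std::size_t k = 0;
    for (std::size_t j = 1; j < bearings.size(); ++j)
        if (fabs(phi - bearings[j]) < fabs(phi - bearings[k])) k = j;

    // Lines 6-11: decide what evidence this beam carries for the cell.
    if (r > std::min(zmax, z[k] + alpha / 2.0) || fabs(phi - bearings[k]) > beta / 2.0)
        return l0;                               // outside the beam: no information
    if (z[k] < zmax && fabs(r - z[k]) < alpha / 2.0)
        return locc;                             // near the measured range: occupied
    if (r <= z[k])
        return lfree;                            // in front of the measured range: free
    return l0;
}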


We note that our algorithm makes occupancy decisions exclusively based on sensor

measurements. An alternative source of information is the space claimed by the robot

itself: When the robot’s pose is xt, the region surrounding xt must be navigable. Our

inverse measurement algorithm in the table above can easily be modified to incorporate

this information, by returning a large negative number for all grid cells occupied by a robot

when at xt. In practice, it is a good idea to incorporate the robot’s volume when generating

maps, especially if the environment is populated during mapping.


CHAPTER 7 - OBSTACLE AVOIDANCE AND NAVIGATION

Obstacle avoidance is the task of satisfying some control objective subject to non-

intersection or non-collision position constraints. Normally obstacle avoidance is

considered to be distinct from path planning in that one is usually implemented as a

reactive control law while the other involves the pre-computation of an obstacle-free path

which a controller will then guide a robot along. Obstacle avoidance and navigation is a

critical part of our project. As discussed earlier, we have implemented SLAM on

PeopleBot. One idea was to steer the PeopleBot wirelessly from a computer while the robot only drew the map, but this would have sacrificed the autonomous nature of the project. So, to move it autonomously, we had to apply an obstacle avoidance algorithm that lets it navigate safely and achieve our desired goal. The PeopleBot is equipped with laser, sonar, IR and touch (bumper) sensors, and obstacle avoidance had to be done by making use of these sensors.

There are many techniques that can be used for obstacle avoidance. The best technique depends on the specific environment and on the equipment available. Following are some common obstacle avoidance and navigation techniques.

7.1 Random Walk

This is a simple and old algorithm in which a robot turns randomly to avoid obstacles. There is no planning behind it: the robot simply takes random decisions based on its sensor data. For example, if the robot senses an obstacle in its way, the algorithm randomly chooses a direction to turn the robot away from the obstacle. This algorithm is a good starting point, but it cannot serve as the driving algorithm for the project because, owing to its random nature, it takes too much time to explore the whole environment. It does not follow a specified path.

As shown in the figure below, there are four obstacles of different shapes and sizes

and the robot (orange colored circle) is placed between these obstacles. The robot will

move randomly to avoid these obstacles.


Figure 5 Obstacle avoidance using random walk

7.2 Seek Open Algorithm

As the name suggests, this algorithm goes for the open spaces in the environment.

Using this algorithm, the robot gives high priority to areas that are free of obstacles, or in other words offer larger open space and are still unexplored. For example, if the robot has obstacles plus some free space to maneuver on its right and a much bigger free space on its left, it will first turn left and explore the left area, and only then move towards the right side and explore the right area. The main idea is to divide the available map into small blocks called cells, each with its own value: an unexplored cell has a value of two, an explored cell free of obstacles has a value of one, and an explored cell containing an obstacle has a value of zero. The algorithm gives the highest priority to the value two, i.e. unexplored cells: the robot first explores the area with the greatest number of unexplored cells and then moves on to other areas. With this algorithm, the robot can explore the whole environment in very little time.

The big drawback of this algorithm is that it needs a map to make its decisions, and we could not use a map for this purpose for two reasons. Firstly, it would become computationally very expensive; secondly, the speed of movement and computation would become so slow that we could miss information needed for updating the map.

7.3 Frontier Based Exploration

The central idea behind frontier-based exploration is this: to gain the most new information about the world, move to the boundary between open space and uncharted territory. Frontiers are regions on the boundary between open space and unexplored space.

When a robot moves to a frontier, it can see into unexplored space and add the new

information to its map. As a result, the mapped territory expands, pushing back the

boundary between the known and the unknown. By moving to successive frontiers, the

robot can constantly increase its knowledge of the world. We call this strategy frontier-

based exploration. If a robot with a perfect map could navigate to a particular point in

space, that point is considered accessible. All accessible space is contiguous, since a path

must exist from the robot’s initial position to every accessible point. Every such path will

be at least partially in mapped territory, since the space around the robot’s initial location

is mapped at the start. Every path that is partially in unknown territory will cross a frontier.

When the robot navigates to that frontier, it will incorporate more of the space covered by

the path into mapped territory. If the robot does not incorporate the entire path at one time,

then a new frontier will always exist further along the path, separating the known and

unknown segments and providing a new destination for exploration. In this way, a robot

using frontier-based exploration will eventually explore all of the accessible space in the

world, assuming perfect sensors and perfect motor control. The basic logic behind this

algorithm is the same as seek open i-e by forming a grid and cells. A process analogous to

edge detection and region extraction in computer vision is used to find the boundaries

between open space and unknown space. Any open cell adjacent to an unknown cell is

labeled a frontier edge cell. Adjacent edge cells are grouped into frontier regions. Any

frontier region above a certain minimum size (roughly the size of the robot) is considered a

frontier.
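A minimal sketch of this frontier test is shown below, assuming the same three-valued cell coding used in seek open (0 = occupied, 1 = explored free, 2 = unexplored); the grouping of adjacent frontier cells into regions is omitted.

#include <vector>

// Cell values assumed: 0 = occupied, 1 = explored and free, 2 = unexplored.
// A frontier edge cell is an open cell with at least one unexplored neighbour.
bool isFrontierCell(const std::vector<std::vector<int> >& grid, int r, int c)
{
    if (grid[r][c] != 1) return false;                 // must be an open (free) cell
    const int dr[4] = { -1, 1, 0, 0 };
    const int dc[4] = { 0, 0, -1, 1 };
    for (int i = 0; i < 4; ++i) {                      // examine the four neighbours
        int nr = r + dr[i], nc = c + dc[i];
        if (nr < 0 || nc < 0 || nr >= (int)grid.size() || nc >= (int)grid[nr].size())
            continue;
        if (grid[nr][nc] == 2) return true;            // adjacent to unexplored space
    }
    return false;
}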


Figure 6 (a) Evidence Grid (b) Frontier Edge Segments (c) Frontier Regions

Once frontiers have been detected within a particular evidence grid, the robot

attempts to navigate to the nearest accessible, unvisited frontier. The path planner uses a

depth-first search on the grid, starting at the robot's current cell and attempting to take the

shortest obstacle-free path to the cell containing the goal location. While the robot moves

toward its destination, reactive obstacle avoidance behaviors prevent collisions with any

obstacles not present while the evidence grid was constructed. When the robot reaches its

destination, that location is added to the list of previously visited frontiers. If the robot is

unable to make progress toward its destination, then after a certain amount of time, the

robot will determine that the destination is inaccessible and its location will be added to

the list of inaccessible frontiers. The robot will then conduct a sensor sweep, update the

evidence grid, and attempt to navigate to the closest remaining accessible, unvisited

frontier.

The main drawback of this technique is the same as that of seek open, i.e. a map is required for its computations.


7.4 Wall Follow Technique

This is a very effective technique used to navigate a robot especially in an

environment where there are corridors and well shaped obstacles. In this technique, the

robot follows the obstacles (walls) to reach its goal. The logic behind this algorithm is that the robot maintains a constant distance from the obstacle on either its left or its right side. The robot uses its sensor data to collect range information and then takes decisions on the basis of these ranges. As shown in the diagram below, the robot is scanning with its laser and is using this data to follow the wall on its left side. The red line shows the robot's path; this line stays at a constant distance from the wall.

Figure 7 A wall following robot

7.5 Our Techniques

We studied all of the above obstacle avoidance algorithms and went through the built-in obstacle avoidance programs of the PeopleBot, but none of them matched our desired goals.

So, after some study, we started with a simple algorithm called “Random Walk”. It makes use of the laser, sonar and bumper sensors. As the name suggests, this algorithm takes decisions randomly. Whenever the robot senses an obstacle, it turns randomly to avoid


the collision. The Random Walk algorithm is simple but effective. The critical limit was the bumper sensors, i.e. if any bumper sensor is pressed, the robot has collided with an obstacle, so it reverses itself. Two distance limits were set: the Minimum Front Distance (0.5 m) and the Really Minimum Front Distance (0.3 m). The laser and sonar data were taken continuously and their minimum value was calculated. If the minimum value was less than the Minimum Front Distance, the robot slowed down and turned in the opposite direction. If, due to noise or false readings, the minimum value fell below the Really Minimum Front Distance, the robot stopped and turned randomly until the Really Minimum Front Distance was cleared again.
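A hedged sketch of this reactive logic is given below. It operates on ranges already read from the laser and sonar and returns a velocity command; the two distance thresholds match the values quoted above, while the speed values and the choice of random turn direction are illustrative only (our actual navigation code differs in detail).

#include <algorithm>
#include <cstdlib>
#include <vector>

struct VelocityCmd { double forward; double turn; };   // m/s, rad/s

// One decision step of the random-walk obstacle avoidance.
VelocityCmd randomWalkStep(const std::vector<double>& ranges, bool bumperPressed)
{
    const double MIN_FRONT        = 0.5;   // metres: slow down and turn away
    const double REALLY_MIN_FRONT = 0.3;   // metres: stop and turn randomly

    VelocityCmd cmd = { 0.3, 0.0 };        // default: drive straight (speed is illustrative)

    if (bumperPressed) {                   // a pressed bumper means a collision: reverse
        cmd.forward = -0.2;
        return cmd;
    }

    double nearest = *std::min_element(ranges.begin(), ranges.end());
    if (nearest < REALLY_MIN_FRONT) {      // dangerously close: stop and turn a random way
        cmd.forward = 0.0;
        cmd.turn    = (rand() % 2 == 0) ? 0.5 : -0.5;
    } else if (nearest < MIN_FRONT) {      // close: slow down and turn away
        cmd.forward = 0.1;
        cmd.turn    = (rand() % 2 == 0) ? 0.3 : -0.3;
    }
    return cmd;
}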

The Minimum and Really Minimum Front Distances can be changed according to

the environment. The environment in which we tested our algorithm was densely occupied by obstacles of different shapes and sizes, and the algorithm worked almost perfectly there: the robot maneuvered through the environment, avoiding obstacles and drawing the map. The main drawback of this algorithm was that the robot takes a very long time to explore the whole environment and, during its navigation, turns so often that it disturbs the map. So, to make the navigation more precise and faster, we opted for a better algorithm.

We then turned to bug algorithms. The main difference between our need and the bug algorithm was that we wanted to explore the whole environment without any specific goal, whereas a bug algorithm needs a goal to terminate. So, after thorough study, we devised an algorithm we call the “Modified Bug Algorithm” or “Wall Follow”. As the name suggests, in this algorithm the robot follows a wall (obstacle) and explores the whole environment. Since most of our testing environments were corridors or closed environments, this algorithm was well suited to exploring the whole environment. In addition, the turns were minimized, so the map building was also improved.

This algorithm makes use of only the laser data. We divided the 180° span of the laser into 361 beams, and these beams were divided into three areas, namely Right, Centre and Left, each consisting of 120 beams. The algorithm has two modes, Right wall follow and Left wall follow. The robot moves straight in the beginning until it encounters an obstacle. If the obstacle lies on the left side of the robot, the algorithm chooses Left wall follow and the robot starts maneuvering by maintaining a certain


distance from the left side of the obstacle. The priority of turning is then given to the left

turn of the robot, i.e. if the left laser range area returns a value indicating that there is room for a left turn, the robot will turn left no matter how much larger the room for a straight walk or a right turn is. The second priority is given to the straight walk, and the robot turns right only as a last option.

Conversely, if the obstacle lies on the right side of the robot, the algorithm chooses Right wall follow and the robot starts maneuvering by maintaining a certain distance from the right side of the obstacle. The priority of turning is then given to the right turn of the robot, i.e. if the right laser range area indicates that there is room for a right turn, the robot will turn right no matter how much larger the room for a straight walk or a left turn is. The second priority is given to the straight walk, and the robot turns left only as a last option. The sector classification itself uses simple logic: the 120 beams of each region are taken, their ranges are summed and then divided by a fixed number, and if the resulting average falls below a certain threshold the corresponding area is assumed to be occupied by an obstacle. For additional safety, several checks are built into the algorithm; for example, if the robot somehow gets very close to an obstacle, the algorithm stops the robot and turns it in whichever direction has room.
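The sector test can be sketched as follows: the 361 beams are split into right, centre and left groups, each group's ranges are averaged, and a sector is declared blocked when its average falls below a threshold. Group size, beam ordering and the threshold are assumptions for illustration; the wall-follow priorities described above are then applied to the three resulting flags.

#include <vector>

struct SectorState { bool rightBlocked, centreBlocked, leftBlocked; };

// Classify the three laser sectors by averaging their ranges.
// ranges is assumed to hold 361 beams over 180 degrees, index 0 being the rightmost beam.
SectorState classifySectors(const std::vector<double>& ranges, double blockedThreshold)
{
    const int sectorSize = 120;                       // beams per sector (approximate)
    double sum[3] = { 0.0, 0.0, 0.0 };

    for (int i = 0; i < 3 * sectorSize && i < (int)ranges.size(); ++i)
        sum[i / sectorSize] += ranges[i];             // 0 = right, 1 = centre, 2 = left

    SectorState s;
    s.rightBlocked  = (sum[0] / sectorSize) < blockedThreshold;
    s.centreBlocked = (sum[1] / sectorSize) < blockedThreshold;
    s.leftBlocked   = (sum[2] / sectorSize) < blockedThreshold;
    return s;
}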

We have successfully tested this algorithm in many environments, e.g. our department's corridor, the Electronics Lab and the Robotics Lab.


CHAPTER 8 - PLAYER PROJECT

8.1 Overview

The Player Project (formerly the Player/Stage Project or Player/Stage/Gazebo

Project) is a project to create free software for research into robotics and sensor systems.

Its components include the Player network server and Stage and Gazebo robot platform

simulators. Although accurate statistics are hard to obtain, Player is probably the most-

used robot interface in research and post-secondary education. Most of the major

intelligent robotics journals and conferences regularly publish papers featuring real and

simulated robot experiments using Player, Stage and Gazebo.

These run on POSIX-compatible operating systems, including Linux, Mac OS X,

Solaris and the BSD variants; a port to Microsoft Windows is planned. The project was

founded in 2000 by Brian Gerkey, Richard Vaughan and Andrew Howard at the University

of Southern California at Los Angeles, and is widely used in robotics research and

education. It releases its software under the GNU General Public License with

documentation under the GNU Free Documentation License.

Features include: robot platform independence across a wide variety of hardware,

support for a number of programming languages including C, C++, Java, Tcl, and Python,

a minimal and flexible design, support for multiple devices on the same interface, and on-

the-fly server configuration.

Being GPL-licensed and open source, the Player Project is free in both senses (free as in beer and free as in speech).

8.1.1 Player

Player is a network server for robot control. Running on your robot, Player

provides a clean and simple interface to the robot's sensors and actuators over the IP

network. The client program talks to Player over a TCP socket, reading data from sensors,

writing commands to actuators, and configuring devices on the fly.

Player supports a variety of robot hardware. The original Player platform is the

ActivMedia Pioneer 2 family, but several other robots and many common sensors are


supported. Player's modular architecture makes it easy to add support for new hardware,

and an active user/developer community contributes new drivers.

Player runs on Linux (PC and embedded), Solaris and *BSD.

8.1.2 Stage

Stage simulates a population of mobile robots, sensors and objects in a two-

dimensional bitmapped environment. Stage is designed to support research into multi-

agent autonomous systems, so it provides fairly simple, computationally cheap models of

lots of devices rather than attempting to emulate any device with great fidelity. We have

found this to be a useful approach.

The standalone program stage is a fast and scalable robot simulator. The robot

controllers are compiled and loaded at run-time, and can be attached to any model.

Controllers have complete access to the Stage API.

8.2 Player Robotic Simulation Platform

There are three levels of software that help the abstraction of robot hardware details

take place.

8.2.1 Interfaces

Player defines several standard interfaces like position2d, laser and opaque which

specify the syntax and semantics of messages that can be exchanged with particular

sensors, actuators or algorithms. Each interface provides a format in which a range of data

and meta-data can be transmitted from a sensor or an actuator.

The laser interface defines a format in which a planar range sensor can return range

readings. The devices used along with the laser interface can be configured to scan at

different angles and resolutions. The client can obtain the starting and ending scan angles,

the angular resolution of the scan and the number of scan range readings from the data

retrieved from the laser interface.

8.2.2 Software - Hardware Interface

The driver translates data coming from various types of robots or devices like the

sicklaser2000 to fit within the format defined by the interface in player. The job of the


driver is to make the robot support the required interface so that control code written in any language can be transferred from one robot to another. Drivers include those written for the

p2os, obot and rflex.

8.2.3 Abstract Driver

Though most Player drivers directly control hardware, recently a number of

abstract drivers have been developed. An abstract driver uses other drivers, instead of

hardware, as sources for data and sinks for commands. The main use of abstract drivers is

to encapsulate useful algorithms in a way that they can be easily reused. For example, the amcl

driver is an implementation of adaptive Monte Carlo localization, a well-known algorithm

for probabilistic localization of a mobile robot. This driver supports both the position2d

interface (so it can be used directly in place of odometry) and the more sophisticated

localize interface (which allows for multiple pose hypotheses to be considered). In addition

to providing this incredibly useful implementation of a particular localization algorithm, by

defining such standard interfaces we build up an environment in which alternative

algorithms and implementations can be developed and tested. Other abstract drivers

include vfh, wavefront and laserbarcode.

8.2.4 TCP Client Program

Control programs can be written in C, C++, Python or Java and represent the

topmost level of software control. Libraries are available to use the player server to

develop control programs.

8.2.5 The Player Server

The most commonly-used of the Player utilities, Player is a TCP server that allows

remote access to devices. It is normally executed on-board a robot, and is given a

configuration file that maps the robot’s hardware to Player devices, which are then

accessible to client programs. The situation is essentially the same when running a

simulation.

At any time while a Player server is running, a client called playerv can connect and display the robot's actual view of the world via its array of sensors. It is also possible to give simple movement commands through this interface.


8.2.6 The C++ Client Library

The C++ Client library for the player server is built on a “service proxy” model in

which the client maintains local objects that are proxies for remote services. This library

wraps the functionality of libplayerc with a friendlier C++ API. The core of libplayerc++ is

based around the following classes

PlayerCc::PlayerClient

The Client Proxy Base Class

SonarProxy

LaserProxy

BlobfinderProxy

SimulationProxy

OpaqueProxy
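As a small illustration of this proxy model, the fragment below connects to a Player server, reads the odometry and laser proxies, and issues a velocity command. The host name, port, loop length and speeds are placeholders; only standard libplayerc++ calls (several of which also appear in our own code in Annexure A) are used.

#include <libplayerc++/playerc++.h>
#include <iostream>

int main()
{
    using namespace PlayerCc;

    PlayerClient    robot("localhost", 6665);   // placeholder host and port
    Position2dProxy pp(&robot, 0);              // odometry and velocity commands
    LaserProxy      lp(&robot, 0);              // laser scan

    for (int i = 0; i < 100; ++i) {
        robot.Read();                           // fetch fresh data from the server

        std::cout << "pose: " << pp.GetXPos() << " " << pp.GetYPos()
                  << " " << pp.GetYaw()
                  << "   laser beams: " << lp.GetCount() << std::endl;

        pp.SetSpeed(0.2, 0.1);                  // drive slowly forward while turning gently
    }
    return 0;
}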

8.2.7 Relevant Drivers

The Vector Field Histogram accounts for the robot’s changing position and new

sonar sensor readings and creates a polar histogram that is rebuilt every 30 seconds. This is in turn used to avoid obstacles. The relay driver broadcasts all transmitted messages to subscribed clients. It uses the opaque proxy as an interface and is defined using the

configuration file.

8.3 The Stage Robotic Simulator

Stage provides a virtual world that is populated by robots and sensors, along with

various objects for the robots to sense and manipulate. It is usually used to provide

simulated devices to the Player robot server, but it can also be used as a standalone robot

simulation library. Stage was designed with multi-agent systems in mind, so it provides

fairly simple, computationally cheap models of lots of devices rather than attempting to

emulate any device with great fidelity. This design is intended to be a useful compromise

between conventional high-fidelity robot simulations and the grid-world simulations

common in artificial life research. Stage is intended to be just realistic enough to enable

users to move controllers between Stage robots and real robots, while still being fast

enough to simulate large populations. It is also meant to be comprehensible to

undergraduate students, yet sophisticated enough for professional researchers.


8.3.1 Stage Models

Stage provides several sensor and actuator models, including sonar or infrared

rangers, scanning laser rangefinder, color-blob tracking, fiducial tracking, bumpers,

grippers and mobile robot bases with odometric or global localization. The basic model

simulates an object with basic properties; position, size, velocity, color, visibility to

various sensors, etc. The basic model also has a body made up of a list of lines. Internally,

the basic model is used as base class for all other model types.

8.3.2 Blobfinder Model

The blobfinder model simulates a color-blobfinding vision device, like a

CMUCAM2, or the ACTS image processing software. It can track areas of color in a

simulated 2D image, giving the location and size of the color ’blobs’. Multiple colors can

be tracked at once; they are separated into channels, so that e.g. all red objects are tracked

as channel one, blue objects in channel two, etc. The color associated with each channel is

configurable.

8.3.3 Laser Model

The laser model simulates a laser rangefinder.

8.3.4 Position Model

The position model simulates a mobile robot base. It can drive in one of two

modes; either differential, i.e. able to control its speed and turn rate by driving left and

right wheels like a Pioneer robot, or omnidirectional, i.e. able to control each of its three

axes independently.


CHAPTER 9 - PEOPLEBOT: THE ROBOT

9.1 Overview

The PeopleBot is a differential-drive robot designed for service and human-

interaction projects. The PeopleBot is built on the robust P3-DX base, with a chest-level

extension to facilitate interaction with people. PeopleBot is equipped with a diverse range

of sensors. It can navigate autonomously and avoid obstacles with approximate precision using its built-in sonar. A laser sensor can also be installed on the PeopleBot to help it navigate autonomously and avoid obstacles with better precision than sonar.

Figure 8 PeopleBot

The PeopleBot base platform with included software has the ability to:

BE DRIVEN with keys or joystick

PLAN PATHS with gradient navigation


DISPLAY a map of its sonar readings (or laser readings, with optional laser

upgrade)

LOCALIZE using sonar (or laser, with optional laser upgrade)

COMMUNICATE SENSOR & CONTROL information relating sonar, motor

encoder, motor controls, user I/O, and battery charge data

RUN C/C++ PROGRAMS

TEST ACTIVITIES QUICKLY AND SIMULATE BEHAVIORS OFFLINE with

the MobileSim simulator that accompanies each development environment

9.2 Technical Specifications

Length 47 cm

Width 38 cm

Height 104 cm

Ground clearance 3.5 cm

Weight 19 kg

Payload 13 kg

Maximum speed 80 cm/second

9.2.1 Sensors

24 sonar sensors (8 front, 8 rear and 8 top deck)

Laser range finder

Active IR break sensors

10 bump sensors (5 front and 5 rear).

9.2.2 Computing Power

Onboard VSBC8-rev4 PC-104 with a PIII 850 MHz and 512 MB RAM, currently running Red Hat, plus a Hitachi H8S microcontroller.

9.3 Sick LMS 200

The SICK Laser Measurement Sensor (LMS) 200 is an extremely accurate distance

measurement sensor that is quickly becoming a staple of the robotics community. The

LMS 200 can easily be interfaced through RS-232 or RS-422, providing distance


measurements over a 180 degree area up to 80 meters away. As this sensor grows in

popularity, the ability to write code that interacts with it becomes an essential skill for roboticists.

Figure 9 Sick LMS 200

In ideal conditions, the LMS 200 is capable of measuring out to 80m over a 180°

arc. For an object with only 10% reflectivity (such as matt black cardboard), the LMS 200

can measure out to 10m. The sensor is best for indoor use as it can be dazzled by sunlight

(causing it to give erroneous readings).

9.4 Odometry

The odometry performance measures how well the robot can estimate its own

position, just from the rotation of the wheels. The robot should not have an error of more

than 2 cm per meter moved and 2° per 45° turned. Typical robot drivers allow the robot to report its (x, y) position in some Cartesian coordinate system and also to report the robot's current bearing/heading.

The odometry of PeopleBot is implemented through a controller which can be

adjusted by changing the PID parameters implemented in its model. This model allows the

robot to report its position and heading, and velocities can also be obtained through the model. Although the odometry is not perfect and carries some inherent error, the odometry data is still used in the SLAM process, where the effect of this error is minimized through estimation.
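For reference, the snippet below condenses the set_initial_pose() routine from Annexure A: the Player odometry is read, the axes are swapped into the frame used by our particles, and the heading is converted to degrees and wrapped into (−180°, 180°].

#include <libplayerc++/playerc++.h>

// Condensed from set_initial_pose() in Annexure A.
void readOdometryPose(PlayerCc::PlayerClient& robot,
                      PlayerCc::Position2dProxy& pp, float pose[3])
{
    const double pi = 3.141593;
    robot.Read();                                  // refresh the proxies
    pose[0] = -pp.GetYPos();                       // axis swap used throughout our code
    pose[1] =  pp.GetXPos();
    pose[2] =  90 + 180 * pp.GetYaw() / pi;        // radians -> degrees, rotated by 90 deg
    if (pose[2] > 180)        pose[2] -= 360;      // wrap into (-180, 180]
    else if (pose[2] <= -180) pose[2] += 360;
}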


9.5 Modelling of PeopleBot

For accurate results, it is required that PeopleBot sensors give accurate readings

regarding range finding as well as odometry. Therefore, it is necessary to determine the variation in the readings so that noise can be modelled in the actual code to achieve correct results. For that purpose we divide the modelling of the PeopleBot into two parts.

9.5.1 Motion Model

The motion model selected is an odometry based one. It has been preferred over the

velocity based motion model due to its superior accuracy. Motion from one point to the other is represented in the form of a rotation, a translation and a second rotation, as shown in the figure below.

Figure 10 Representation of motion from one point to the other

Noise is added separately in each of the three parameters δrot1, δtrans, and δrot2. As

the state vector also consists of three parameters x, y and θ, degeneracy in the state

probability is avoided. Four types of noise parameters have been considered

Noise due to effect of rotation on rotation.

Noise due to effect of rotation on translation.

Noise due to effect of translation on rotation.

Noise due to effect of translation on translation.

These parameters are determined through experimentation on the real robot by

taking several readings. These parameters for our motion model have been taken as 30%,

12.5%, 1% and 30% respectively. One unit of translation is taken as 1 mm and one unit of rotation is taken as 1 degree.


9.5.2 Sensor Model

The sensor model used for our implementation of SLAM is a range finder based

one. The weights assigned to the particles in the particle filter algorithm are calculated

according to this model. Actual measurements have been taken with the laser of a SICK LMS-200 module installed on our robot. 361 beams spanning 180° have been considered in most experiments; however, as few as 30 beams have also been used successfully. A single measurement consisting of k beams at time step t is represented as

zt = { zt[1], zt[2], . . . , zt[k] }

The basic components of the range finder sensor model are:

A Gaussian distribution centred at zt[k], representing the uncertainty of the sensor.

An exponential distribution representing the probability of unexpected obstacles such as people.

A very narrow uniform distribution at the maximum range, representing erroneous readings that return the maximum value due to factors such as scattering of the laser beam off glass.

A uniform distribution over all possible values, representing unexplained noise.

The final distribution is a weighted sum of all four distributions. We have only used

the first distribution in our implementation of the sensor model due to the following

reasons.

Firstly, as the environment in which we tested our algorithms was a static one with

no dynamic objects, we ignored the exponential distribution representing unexpected short

range readings.

Secondly, the third distribution was ignored and countered by simply not

considering those readings that return maximum value while assigning weights.

Thirdly, as only relative weights are required in our re-sampling step, the 4th

distribution was also ignored.


The variance of the Gaussian distribution was determined experimentally like the

noise parameters of the motion model. The SICK LMS is very accurate and the calculated variance was of the order of microns. However, the variance was set to 50 mm to cover all possible states effectively.


CHAPTER 10 - WIRELESS COMMUNICATION

The computer installed on the PeopleBot is a Pentium III and is not fast enough for heavy computation. Our technique required very fast computation for accurate results as well as to save time. It was therefore necessary to perform the computations on a faster computer and take only the sensor readings from the PeopleBot. So we decided to run our code on a dual-core laptop and perform all computations there, obtaining sensor readings from the PeopleBot and sending the results back to it so that it can take the required actions. There were two ways to establish communication between the remote computer and the PeopleBot, wired or wireless; we opted for wireless communication.

To establish wireless communication between the robot and the computer, we first had to configure the robot's wireless settings. A set of commands run in a Linux terminal enables the wireless connection of the PeopleBot, and after applying these settings wireless communication was established. The problem was that every time we booted the PeopleBot we had to redo the wireless settings from scratch by running all the relevant commands. We wanted those commands to run automatically whenever the PeopleBot boots. Since we were not very familiar with Linux, we did not know how to do that at first; after searching the internet we found the solution. We wrote all those commands in a file, made it executable using the chmod +x command, and arranged for it to be executed by the root shell at startup, so that it runs every time the PeopleBot boots. As a result, the wireless settings are established automatically whenever the PeopleBot is started. Now, whenever we run our code on the remote computer, the wireless link is established automatically and data transfer between the computer and the PeopleBot takes place.


CHAPTER 11 - RESULTS

11.1 MATLAB

Matlab was the first platform we used to test our algorithms independently of each

other. Each algorithm was written in a separate m-file and was implemented exactly as

stated in the previous chapters. The results were satisfactory and as predicted. The m-files

have been attached in the appendix. The following figures show the results of motion

model implementation, sensor model implementation and map building respectively.

Figure 11 This figure shows the implementation of the odometry based motion model. The figure on the left shows the

particles generated by incorporating high translational noise and the figure on the right shows the particles generated as a

result of incorporating a high rotational noise

Figure 12 This figure illustrates the sensor model while incorporating only 2 beams (shown in blue) for clarity. If the

map and pose of the robot are fixed then for a scan input of [4,3] the weight returned is 0.0538, for a scan of [6,6] the


weight is 0.0039 and for a scan input of [2,3], which is the correct scan, the weight returned is 0.2129. Thus, our

measurement model works.

Figure 13 This figure depicts the working of our map making algorithm. The green pixels represent the unexplored areas,

the black pixels represent the occupied areas and the white pixels represent the free space. The scan input for the situation shown

is [4,4,4,4,4].

11.2 Player/Stage Simulation

Simulation of our algorithms was carried out in player/stage. The included model

of the Pioneer robot was used. Noise was added through the included model of the robot

and effective SLAM results were achieved as shown in the following figures. This was the

first platform on which we tested the Particle Filter. The code was written in C++ on a

Linux-based system.


Figure 14 This figure shows the complete map of the world obtained after performing SLAM in Player/Stage without

any included noise. A perfect SLAM algorithm would return an exact copy of this map.

Figure 15 This figure shows the complete map of the world generated by including noise in the sensors and no filtering technique at all. The resulting map clearly shows the accumulated errors of localisation and mapping. This map was also used

by us to compare our subsequent results.

Figure 16 This figure shows the map of the world as a result of particle filtering with 75 particles. The resolution of the

grid has been kept such that a single pixel in the bitmap represents 5x5 cm. As evident, a huge amount of noise has

successfully been filtered which shows convergence of our algorithms.


11.3 Implementation on PeopleBot

Implementation of our SLAM was carried out on a Pioneer2/Peoplebot in our department

of Mechatronics at NUST. Processing was done on our own computer, while the PeopleBot provided the sensor readings wirelessly. The map is generated completely online

and in real time.

The following figures show the maps of our implementation. We tested our code in

3 different environments to check the robustness of our algorithms.

Figure 17 Map of corridor generated using 75 particles. The grid resolution has been kept at 5x5cm. The maximum

range of the laser is 8m. The total length of the area explored is approximately 62m. A simple wall following obstacle

avoidance code controls the movements of the robot because such a technique for obstacle avoidance is suitable enough

for navigation in corridors


Figure 18 Map of Electronics Lab at Mechatronics department at EME NUST. This environment was used to check

loop-closing of our SLAM technique. The grid resolution is kept at 5x5 cm and 75 particles have been used.

Figure 19 Map of ground floor corridor of Centre for Advanced Mathematics and Physics (CAMP) building at EME,

NUST. The grid resolution is kept at 5x5 cm and the total number of particles is 75.


CHAPTER 12 - FUTURE WORK AND CONCLUSION

SLAM through Particle Filters has commonly been avoided while treating the map

as part of the hidden state due to the expensive computation and extensive memory usage.

However, using the simple algorithms described in this paper, SLAM can be implemented

effectively even with a small number of particles. The complexity of the map drawing and re-sampling operations is linear in the number of particles used and quadratic in the size of the map. While other techniques based on other variants of the Bayes filter can also be used, they require a great deal of study and mathematical background, which our technique does not.

Future work in this domain may explore using a probabilistic map instead of a deterministic one. Also, as the map expanded, more and more pointers were allocated in our code; over time these exceeded the memory segment allotted to the process and the process was aborted. As our system was Linux-based, we were unable to solve this problem using far pointers. This problem can be removed through effective memory

management or development of a dedicated operating system, both of which can be

considered in future work.


BIBLIOGRAPHY

[1] S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics”, 2001.

[2] B. Hiebert-Treuer, “An Introduction to Robot SLAM (Simultaneous Localization And Mapping)”.

[3] A. Eliazar and R. Parr, “DP-SLAM: Fast, Robust Simultaneous Localization and Mapping Without Predetermined Landmarks”.

[4] www.openslam.org

[5] www.playerstage.sourceforge.net

[6] B. J. Kuipers and Y.-T. Byun, “A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations,” J. Robot. Autonom. Syst., vol. 8, pp. 47–63, 1991.

[7] M. W. M. Gamini Dissanayake, P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, “A Solution to the Simultaneous Localization and Map Building (SLAM) Problem,” IEEE.

[8] A. A. Makarenko, S. B. Williams, F. Bourgault, and H. Durrant-Whyte, “An experiment in integrated exploration,” in Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Lausanne, Switzerland, 2002.

[9] M. Csorba, “Simultaneous Localization and Map Building,” 1997, pp. 104–124.

[10] B. Yamauchi, A. Schultz, and W. Adams, “Mobile Robot Exploration and Map-Building with Continuous Localization,” in Proc. 1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998.

[11] C. Stachniss, G. Grisetti, and W. Burgard, “On Actively Closing Loops in Grid-based FastSLAM”.

[12] L. Shapiro and G. Stockman, “Computer Vision,” Prentice Hall.

[13] S. Riisgaard and M. R. Blas, “SLAM for Dummies: A Tutorial Approach to Simultaneous Localization and Mapping”.

[14] G. Welch and G. Bishop, “An Introduction to the Kalman Filter”.


ANNEXURE A

Player Code

mtsslam.h

#include <libplayerc++/playerc++.h> #include <iostream> #include<fstream> #include <cstdlib> #include <ctime> #include <cmath> using namespace PlayerCc; using namespace std; #define pi 3.141593 struct particle { friend struct particles; particle() { pose[0] = 0; //x-axis pose[1] = 0; //y-axis pose[2] = 90; //theta } float pose[3]; }; struct maps { friend struct particle; friend struct particles; maps() { grid_res = 1; rows = 100*grid_res; columns = 100*grid_res; column_offset = 50*grid_res; row_offset = 50*grid_res; map_ptr = new char* [rows]; for(int x=0;x<rows;x++) { map_ptr[x] = new char [columns]; for(int y=0;y<columns;y++) { *(map_ptr[x]+y) = 8; } } }

Page 69: IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

59

void set_map_size( int res,int r, int r_offset, int c, int c_offset) { delete[] map_ptr; grid_res = res; rows = r; columns = c; column_offset = c_offset; row_offset = r_offset; map_ptr = new char* [rows]; for(int x=0;x<rows;x++) { map_ptr[x] = new char [columns]; for(int y=0;y<columns;y++) { *(map_ptr[x]+y) = 8; } } } void set_grid_res (float res) { for(int x = 0; x<rows; x++) { delete[] map_ptr[x]; } delete[] map_ptr; grid_res = res; rows = 2*grid_res; columns = 2*grid_res; column_offset = 1*grid_res; row_offset = 1*grid_res; map_ptr = new char* [rows]; for(int x=0;x<rows;x++) { map_ptr[x] = new char [columns]; for(int y=0;y<columns;y++) { *(map_ptr[x]+y) = 8; } } } int get_rows(void) { return rows; } int get_columns(void) { return columns; } float get_column_offset(void)

Page 70: IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

60

{ return column_offset; } float get_row_offset(void) { return row_offset; } float get_grid_res(void) { return grid_res; } char** get_map_ptr(void) { return map_ptr; } void show_map(void) { cout<<"rows = "<<rows<<endl; cout<<"columns = "<<columns<<endl; for(int x=0;x<rows;x++) { for(int y=0;y<columns;y++) { if(*(map_ptr[x]+y)==8) { cout<<" "; } else { cout<<(int)*(map_ptr[x]+y)<<" "; } } cout<<endl; } } void set_map_array(int array[][5]) { for(int x=0;x<rows;x++) { for(int y=0;y<columns;y++) { *(map_ptr[x]+y) = array[x][y]; } } } void set_map_data(maps* ptr) {

Page 71: IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

61

cout<<"real culprit1"<<endl; set_map_size(ptr->grid_res,ptr->rows,ptr->row_offset,ptr->columns,ptr->column_offset); cout<<"real culprit2"<<endl; for(int x=0;x<rows;x++) { for(int y=0;y<columns;y++) { *( (map_ptr[x]) + y ) = *( (ptr->map_ptr[x]) + y); } } cout<<"real culprit3"<<endl; } void copy_map(maps* ptr) { ptr->set_map_size(grid_res,rows,row_offset,columns,column_offset); for(int x=0;x<rows;x++) { for(int y=0;y<columns;y++) { *(ptr->map_ptr[x]+y) = *(map_ptr[x]+y); } } } void expand_map(int r_inc_back,int r_inc_front,int c_inc_left,int c_inc_right) { char** temp_ptr; int r_temp = rows+r_inc_back+r_inc_front; int c_temp = columns+c_inc_left+c_inc_right; temp_ptr = new char* [r_temp]; for(int x=0;x<r_temp;x++) { temp_ptr[x] = new char [c_temp]; for(int y=0;y<c_temp;y++) { *(temp_ptr[x]+y) = 8; } } for(int x=0;x<rows;x++) { for(int y=0;y<columns;y++) { *(temp_ptr[x+r_inc_front]+y+c_inc_left) = *(map_ptr[x]+y); } } for(int x = 0; x<rows; x++) { delete[] map_ptr[x]; }

Page 72: IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

62

delete[] map_ptr; column_offset += c_inc_left; row_offset += r_inc_back; rows = r_temp; columns = c_temp; map_ptr = temp_ptr; } char** map_ptr; int rows , columns; float column_offset, row_offset; float grid_res; }; struct particles { friend void set_thetas (particles*); friend void get_scan(particles* ptr); friend void copy_maps(maps*,maps*); friend void draw_maps(particles*); friend void optimize_map(particles*,int); friend void apply_sensor_model(particles*); friend void importance_resampling(particles*); friend void create_bitmap(particles*,int); friend void delay (double); public: particles(int total_particles, PlayerClient* robot1, Position2dProxy* p2d, LaserProxy* lsr) { a1 = 0.15; //effect of rotation on rotation a2 = 0.01; //effect of translation on rotation a3 = 0.3; //effect of translation on translation a4 = 0.125; //effect of rotation on translation no_of_particles = total_particles; random_particles=0; robot = robot1; pp = p2d; lp = lsr; total_of_weights = 0; no_of_iterations = 0; best_particle = 0; zmax = 8; alpha = 0.2; beta = 5; var_hit = 50; zhit = 1; zrand = 0; movement_tolerance = 2; grid_res = 1; srand(time(0)); strcpy(name,"image11111.bmp"); name[5] = (char)(48 + rand()%10); name[6] = (char)(48 + rand()%10);


name[7] = (char)(48 + rand()%10); name[8] = (char)(48 + rand()%10); name[9] = (char)(48 + rand()%10); particles_set = new particle[no_of_particles]; temp_particles = new particle[no_of_particles]; map_set = new maps[no_of_particles]; temp_maps = new maps[no_of_particles]; weights = new float[no_of_particles]; temp_weights = new float[no_of_particles]; average_weights = new float[no_of_particles]; temp_average_weights = new float[no_of_particles]; for(int x=0;x<no_of_particles;x++) { weights[x] = 0; average_weights[x] = 0; } } void set_motionmodel_params(float b1,float b2,float b3,float b4) { a1 = b1; a2 = b2; a3 = b3; a4 = b4; } void set_sensormodel_params(float a, float b, float c) { var_hit = a; zhit = b; zrand = c; } void configure_laser(float a,float b, int c, float d) { i_angle = a; f_angle = b; beams_per_deg = c; zmax = d; } void set_beams(int a, int b) { beams = a; beams_weights = b; delete[] thetas; delete[] thetas_weights; delete[] scan; delete[] scan_weights; thetas = new float[beams]; thetas_weights = new float[beams_weights];


scan = new float[beams]; scan_weights = new float[beams_weights]; } void set_map_tolerances(float a, float b) { alpha = a; beta = b; } void set_movement_tolerance(float a) { movement_tolerance = a; } void set_grid_res(float a) { for(int x=0; x<no_of_particles; x++) { map_set[x].set_grid_res(a); } } void set_random_particles(int a) { random_particles = a; } float get_x(int particle_no) { return particles_set[particle_no].pose[0]; } float get_y(int particle_no) { return particles_set[particle_no].pose[1]; } float get_theta(int particle_no) { return particles_set[particle_no].pose[2]; } int get_best_particle() { return best_particle; } void set_initial_pose(void) { robot->Read(); initial_pose[0] = -pp->GetYPos(); initial_pose[1] = pp->GetXPos(); initial_pose[2] = 90+180*(pp->GetYaw())/pi; if(initial_pose[2]>180)


{ initial_pose[2]-=360; } else if(initial_pose[2]<=-180) { initial_pose[2]+=360; } } void set_final_pose(void) { robot->Read(); final_pose[0] = -pp->GetYPos(); final_pose[1] = pp->GetXPos(); final_pose[2] = 90+180*(pp->GetYaw())/pi; if(final_pose[2]>180) { final_pose[2] -= 360; } else if(final_pose[2]<=-180) { final_pose[2]+=360; } } int norm_samp ( int var ) { float y=0; int samp; for(int x=0;x<1000;x++) { y = y + ( ( rand() % 3 ) - 1 ) ; } y = ( y * var ) / 500; samp = y; return samp; } void apply_motion_model(void) { for(int x = 0;x<no_of_particles;x++) { motionmodel(initial_pose,final_pose,x); } } void show_best_particle(void) { cout<<"best particle is particle number "<<best_particle<<endl;


cout<<"weight of best particle is "<<weights[best_particle]<<endl; } void show_image_name(void) { cout<<"image name is "<<name<<endl; } void show_particles(void) { cout<<"entering show particles"<<endl; for(int x = 0;x<no_of_particles;x++) { cout<<particles_set[x].pose[0]<<" "<<particles_set[x].pose[1]<<" "<<particles_set[x].pose[2]<<" "<<endl; } cout<<"exiting show particles"<<endl; } void determine_best_particle(void) { for(int x=0; x<no_of_particles; x++) { if(weights[x]>weights[best_particle]) { best_particle = x; } } } void motionmodel (float* istate,float* fstate,int x) { float del_x = 1000 * ( *(fstate) - *(istate) ); float del_y = 1000 * ( *(fstate+1) - *(istate+1) ); //calculate the odometry parameters float trans = sqrt( pow( del_x , 2 ) + pow( del_y , 2 ) ); float rot1 = ( 180*( atan2( del_y , del_x ) ) / pi ) - *(istate+2); float rot2 = *(fstate+2) - rot1 - *(istate+2); //add noise to the odometry parameters float noisyrot1 = rot1 - norm_samp( a1*rot1 + a2*trans ); float noisytrans = trans - norm_samp( a3*trans + a4*( rot1 + rot2 ) ); float noisyrot2 = rot2 - norm_samp( a1*rot2 + a2*trans ); //get the final pose of the particle particles_set[x].pose[0] +=(noisytrans*cos( pi * (*(istate+2) + noisyrot1) / 180 ))/1000; particles_set[x].pose[1] +=(noisytrans*sin( pi * (*(istate+2) + noisyrot1) / 180 ))/1000;

Page 77: IMPLEMENTATION OF SIMULTANEOUS LOCALIZATION AND MAPPING ON PEOPLEBOT

67

particles_set[x].pose[2] +=noisyrot1 + noisyrot2; if(particles_set[x].pose[2]>=180) { particles_set[x].pose[2] -= 360; } else if(particles_set[x].pose[2]<=-180) { particles_set[x].pose[2] += 360; } if(particles_set[x].pose[2]>=180) { particles_set[x].pose[2] -= 360; } else if(particles_set[x].pose[2]<=-180) { particles_set[x].pose[2] += 360; } } private: float initial_pose[3]; float final_pose[3]; int beams; int beams_weights; float* thetas; float* thetas_weights; float* scan; float* scan_weights; float* weights; float* temp_weights; float* average_weights; float* temp_average_weights; float i_angle; float f_angle; int beams_per_deg; int no_of_iterations; float total_of_weights; int best_particle; float zmax; float alpha; float beta; float movement_tolerance;


float grid_res; float var_hit; float zhit; float zrand; //odometry noise parameters float a1; //effect of rotation on rotation float a2; //effect of translation on rotation float a3; //effect of translation on translation float a4; //effect of rotation on translation int no_of_particles; //total number of particles int random_particles; char name[14]; PlayerClient* robot; Position2dProxy* pp; LaserProxy* lp; particle* particles_set; particle* temp_particles; maps* map_set; maps* temp_maps; }; void set_thetas(particles); void copy_maps(maps*,maps*); void get_scan(particles*); void draw_maps(particles*); void optimize_map(particles*,int); void apply_sensor_model(particles*); void importance_resampling(particles*); void create_bitmap(particles*,int); void delay (double); void set_thetas (particles* ptr) //this function stores the set of angles at which all the beams used in the laser are situated. It uses the initial angle and the final angle of the FOV of the laser scanner along with the number of laser beams per degree. { for(int x = 0; x<ptr->beams; x++) { ptr->thetas[x] = ptr->i_angle + x*(ptr->f_angle - ptr->i_angle)/(ptr->beams - 1); } for(int x = 0; x<ptr->beams_weights; x++)


{ ptr->thetas_weights[x] = ptr->i_angle + x*(ptr->f_angle - ptr->i_angle)/(ptr->beams_weights - 1); } } void copy_maps (maps* map1, maps* map2) //copy map1 into map2 { int res = map1->grid_res; int r = map1->rows; int r_off = map1->row_offset; int c = map1->columns; int c_off = map1->column_offset; char temp; for(int x=0;x<map2->rows;x++) { delete[] map2->map_ptr[x]; } delete[] map2->map_ptr; map2->grid_res = res; map2->rows = r; map2->columns = c; map2->column_offset = c_off; map2->row_offset = r_off; map2->map_ptr = new char* [r]; for(int x=0;x<r;x++) { map2->map_ptr[x] = new char [c]; for(int y=0;y<c;y++) { temp = *(map1->map_ptr[x]+y); *(map2->map_ptr[x]+y) = temp; } } } void get_scan(particles* ptr) { int inc = ptr->beams_per_deg*(ptr->f_angle - ptr->i_angle)/(ptr->beams - 1); int inc_weights = ptr->beams_per_deg*(ptr->f_angle - ptr->i_angle)/(ptr->beams_weights - 1); ptr->robot->Read(); for(int x = 0; x<ptr->beams; x++) { ptr->scan[x] = (*ptr->lp)[x*inc]; } for(int x=0; x<ptr->beams_weights; x++) { ptr->scan_weights[x] = (*ptr->lp)[x*inc_weights];


} } void draw_maps(particles* ptr1) { //function to draw occupancy grid. Will take in scan and will update the resulting occupancy grid float zmax = ptr1->zmax; float alpha = ptr1->alpha; float beta = ptr1->beta; float x,y,theta,range,phi,x_offset,y_offset; float xi,yi,temp,grid_res; int r,c,k,a1,a2,b1,b2; for(int particle_no = 0;particle_no<ptr1->no_of_particles;particle_no++) { optimize_map(ptr1,particle_no); grid_res = ptr1->map_set[particle_no].get_grid_res(); x = ptr1->get_x(particle_no); y = ptr1->get_y(particle_no); theta = ptr1->get_theta(particle_no); x_offset = ptr1->map_set[particle_no].get_column_offset()/grid_res; y_offset = ptr1->map_set[particle_no].get_row_offset()/grid_res; x+=x_offset; y+=y_offset; r = ptr1->map_set[particle_no].get_rows(); c = ptr1->map_set[particle_no].get_columns(); if( theta>=0 && theta<=90 ) { a1 = y - 8*( cos(pi*theta/180) ); a2 = y + 8; b1 = x - 8*( sin(pi*theta/180) ); b2 = x + 8; } else if( theta>90 && theta<=180 ) { a1 = y + 8*( cos(pi*theta/180) ); a2 = y + 8; b1 = x - 8; b2 = x + 8*( sin(pi*theta/180) ); } else if( theta>=-180 && theta<-90 )


{ a1 = y - 8; a2 = y - 8*( cos(pi*theta/180) ); b1 = x - 8; b2 = x - 8*( sin(pi*theta/180) ); } else if( theta>=-90 && theta<0 ) { a1 = y - 8; a2 = y + 8*( cos(pi*theta/180) ); b1 = x + 8*( sin(pi*theta/180) ); b2 = x + 8; } a1 = r - a1*grid_res; a2 = r - a2*grid_res; b1 *= grid_res; b2 *= grid_res; for (int a=a2;a<a1;a++)//rows { for (int b=b1;b<b2;b++)//columns { xi = (b+0.5)/grid_res; yi = (r-a-0.5)/grid_res; //calculate range range = sqrt( pow(xi - x,2) + pow(yi - y,2) ); phi = 180*(atan2( yi - y , xi - x))/pi; phi=phi-theta + 90; if(phi<=-180) { phi+=360; } else if(phi>180) { phi -= 360; } //first have to determine which beam the grid corresponds to k = 0; float temp2, temp_phi; temp_phi = 2*phi; temp2 = temp_phi - (int)temp_phi; k = (int)temp_phi; if(temp2>=0.5) { k++; }


if( (*(ptr1->map_set[particle_no].map_ptr[a]+b) == 5) || (*(ptr1->map_set[particle_no].map_ptr[a]+b)== 6) ) { *(ptr1->map_set[particle_no].map_ptr[a]+b) = 0; } if( range<0.25 ) { if(k<=220 && k>=140) { *(ptr1->map_set[particle_no].map_ptr[a]+b) = 6; } else { *(ptr1->map_set[particle_no].map_ptr[a]+b) = 5; } } else if ( range >= zmax || range > ptr1->scan[k]+alpha/2 || abs( phi - ptr1->thetas[k] ) > beta/2 || *(ptr1->map_set[particle_no].map_ptr[a]+b) == 6 || ptr1->scan[k] > zmax || *(ptr1->map_set[particle_no].map_ptr[a]+b) == 5 ) { continue; //if outside the scope of the beam let it retain the previous value } else if (ptr1->scan[k] < zmax && abs( range - ptr1->scan[k] ) < alpha/2 ) { *(ptr1->map_set[particle_no].map_ptr[a]+b) = 1; //if close to the detected range declare occupied } else if( range<ptr1->scan[k] ) { *(ptr1->map_set[particle_no].map_ptr[a]+b) = 0; } } } } } void optimize_map(particles* ptr, int particle_no) { float x,y,theta,grid_res; float max_range = ptr->zmax; int r,c;


grid_res = ptr->map_set[particle_no].grid_res; int front_margin,back_margin,left_margin,right_margin,diff_x,diff_y; int front_inc=0,back_inc=0,left_inc=0,right_inc=0; int safety_factor = (max_range+ptr->movement_tolerance)*grid_res; x = ptr->get_x(particle_no); y = ptr->get_y(particle_no); theta = ptr->get_theta(particle_no); x+=ptr->map_set[particle_no].get_column_offset()/grid_res; y+=ptr->map_set[particle_no].get_row_offset()/grid_res; r = ptr->map_set[particle_no].get_rows(); c = ptr->map_set[particle_no].get_columns(); left_margin = abs(x*grid_res); right_margin = abs(c-x*grid_res); front_margin = abs(r-y*grid_res); back_margin = abs(y*grid_res); if( (left_margin<safety_factor) && !( (theta<15) && (theta>-15) ) ) { left_inc = safety_factor-left_margin+1.5*grid_res; } if( (right_margin<safety_factor) && !( (theta<-185) && (theta>185) ) ) { right_inc = safety_factor-right_margin+1.5*grid_res; } if( (front_margin<safety_factor) && !( (theta<-75) && (theta>-105) ) ) { front_inc = safety_factor-front_margin+2*grid_res; } if( (back_margin<safety_factor) && !( (theta<105) && (theta>75) ) ) { back_inc = safety_factor-back_margin+1.5*grid_res; } if((back_inc>grid_res/4) || (front_inc>grid_res/4) || (left_inc>grid_res/4) || (right_inc>grid_res/4) ) { ptr->map_set[particle_no].expand_map(back_inc,front_inc,left_inc,right_inc); cout<<"maps expanded"<<endl; } } void apply_sensor_model(particles* ptr) { //the sensor model. input a map and a scan and a pose. it'll return the probability of the scan, well not really a probability but it'll return a weight in any case float var_hit = ptr->var_hit; float zmax = ptr->zmax; float zhit = ptr->zhit;


float zrand = ptr->zrand; double dist2,temp,phit; int r,c,a,b,k,particle_no,temp2,temp3,grid_res; float x,y,theta,q,inc_x,inc_y,xz,yz,xi,yi; ptr->total_of_weights = 0; ptr->no_of_iterations++; for(particle_no=0;particle_no<ptr->no_of_particles;particle_no++) { x = ptr->get_x(particle_no); y = ptr->get_y(particle_no); theta = ptr->get_theta(particle_no); x += ptr->map_set[particle_no].get_column_offset()/grid_res; y += ptr->map_set[particle_no].get_row_offset()/grid_res; r = ptr->map_set[particle_no].get_rows(); c = ptr->map_set[particle_no].get_columns(); grid_res = ptr->map_set[particle_no].grid_res; if(ptr->map_set[particle_no].map_ptr==NULL) { continue; } q = 0; for(k=0;k<ptr->beams_weights;k++) { if (ptr->scan_weights[k] >= zmax) { continue; } inc_x = ptr->scan_weights[k]*cos(dtor(theta+ptr->thetas_weights[k]-90)); xz = x + inc_x; inc_y = ptr->scan_weights[k]*sin(dtor(theta+ptr->thetas_weights[k]-90)); yz = y + inc_y; dist2=pow( (0.251/cos(dtor(45))),2); int a1 = r - (yz+0.0675)*grid_res; int a2 = r - (yz-0.0675)*grid_res; int b1 = (xz-0.0675)*grid_res; int b2 = (xz+0.0675)*grid_res;


for(a=a1; a<a2;a++) { for(b=b1;b<b2;b++) { xi = (b+0.5)/grid_res; yi = (r-a-0.5)/grid_res; if(ptr->map_set[particle_no].map_ptr[a] == NULL) { continue; } if(*(ptr->map_set[particle_no].map_ptr[a]+b)!=1) { continue; } if( dist2 >( pow(xz-xi,2) + pow(yz-yi,2)) ) { dist2 = ( pow(xz-xi,2) + pow(yz-yi,2) ); } } } dist2*=1000; if(dist2<=250/cos(dtor(45))) { dist2 = pow(dist2,8); phit = exp(-0.5*(pow(dist2,2))/var_hit); phit = phit/sqrt(2*pi*var_hit); } else { phit=0; } q += zhit*phit;// + zrand/zmax; } ptr->weights[particle_no] = 1000*q/ptr->beams_weights; if(ptr->no_of_iterations>1) { ptr->average_weights[particle_no] *= ( ptr->no_of_iterations - 1 ); ptr->average_weights[particle_no] += ptr->weights[particle_no]; ptr->average_weights[particle_no] /= ptr->no_of_iterations; } else { ptr->average_weights[particle_no] = ptr->weights[particle_no];


} ptr->total_of_weights += ptr->weights[particle_no]; } for(int x=0;x<ptr->no_of_particles;x++) { cout<<x<<" weight = "<<ptr->weights[x]<<endl;; } cout<<"total of weights = "<<ptr->total_of_weights<<endl; cout<<ptr->no_of_iterations<<" iterations"<<endl; } void importance_resampling(particles* ptr) { cout<<"resampling entered"<<endl; int randnum, temp1,tempsize; int temp_size = (int)ptr->total_of_weights; int* temp = new int[temp_size]; int x; for(x=0;x<ptr->random_particles;x++) //include some particles regardless of their weights { randnum = rand()%ptr->no_of_particles; if (randnum >= ptr->no_of_particles) { randnum--; } ptr->temp_particles[x] = ptr->particles_set[randnum]; copy_maps(&(ptr->map_set[randnum]),&(ptr->temp_maps[x])); ptr->temp_weights[x] = ptr->weights[randnum]; ptr->temp_average_weights[x] = ptr->average_weights[randnum]; } int a=0; for (x=0;x<temp_size;x++) //initialize the array from which we have to draw indexes { temp[x] = 0; } for(x=0;x<ptr->no_of_particles;x++) //create index copies equal to their weights { for(int y=0;y<(int)ptr->weights[x];y++) { temp[a]=x; a++; } } a=0;


for(x=ptr->random_particles;x<ptr->no_of_particles;x++) { randnum = rand()%(int)ptr->total_of_weights; temp1 = temp[randnum]; ptr->temp_particles[x] = ptr->particles_set[temp1]; copy_maps(&ptr->map_set[temp1],&(ptr->temp_maps[x])); ptr->temp_weights[x] = ptr->weights[temp1]; ptr->temp_average_weights[x] = ptr->average_weights[temp1]; } if( !(ptr->total_of_weights<=ptr->no_of_particles) ) { for(int t = 0;t<ptr->no_of_particles;t++) { ptr->particles_set[t] = ptr->temp_particles[t]; copy_maps(&(ptr->temp_maps[t]),&ptr->map_set[t]); ptr->weights[t] = ptr->temp_weights[t]; ptr->average_weights[t] = ptr->temp_average_weights[t]; } } } void create_bitmap(particles* ptr, int particle_no = -1) { if(particle_no==-1) { particle_no = ptr->best_particle; } int rows = ptr->map_set[particle_no].get_rows(); int columns = ptr->map_set[particle_no].get_columns(); char** map_ptr = ptr->map_set[particle_no].map_ptr; cout<<"entered create bitmap"<<endl; if(++ptr->name[9]>57) { ptr->name[9] = 48; ptr->name[8]++; } if(ptr->name[8]>57) { ptr->name[8] = 48; ptr->name[7]++; } if(ptr->name[7]>57) { ptr->name[7] = 48; ptr->name[6]++; } if(ptr->name[6]>57) {


ptr->name[6] = 48; ptr->name[5]++; } cout<<"image name is image"<<ptr->name[5]<<ptr->name[6]<<ptr->name[7]<<ptr->name[8]<<ptr->name[9]<<".bmp"<<endl; ofstream myfile (reinterpret_cast<const char*>(ptr->name),ios::out); cout<<"flag"<<endl; int info,byte1,byte2,byte3,byte4; myfile.write("B",sizeof(char)); //write magic numbers myfile.write("M",sizeof(char)); char zero = 0; int padding = 0; //calculate padding bytes if( (3*columns+padding)%4!=0) { while( (3*columns+padding)%4!=0) { padding++; } } //write size (4-bytes) info = 54 + 3*rows*(columns) + rows*padding; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //application specific (4-bytes) myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); //offset to bitmap data (4-bytes) info = 54; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) );


myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //bytes left in header(4-bytes) info = 40; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); info = columns; //width in pixels (4-bytes) byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //height in pixels (4-bytes) info = rows; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //color planes (2-bytes) info = 1; byte2 = (int)(info/256); byte1 = (int)(info - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); //bits per pixel (2-bytes) info = 24; byte2 = (int)(info/256); byte1 = (int)(info - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) );


//compression method (4-bytes) myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); //size of raw image (4-bytes) info = 3*rows*(columns) + rows*padding; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //horizontal resolution (4-bytes) info = 2835; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //verticle resolution (4-bytes) info = 2835; byte4 = (int)(info/256/256/256); byte3 = (int)((info - 256*256*256*byte4)/256/256); byte2 = (int)((info - 256*256*256*byte4 - 256*256*byte3)/256); byte1 = (int)(info - 256*256*256*byte4 - 256*256*byte3 - 256*byte2); myfile.write( reinterpret_cast<const char*>(&byte1),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte2),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte3),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&byte4),sizeof(char) ); //no of colors in pallette (4-bytes) myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); //important colors (4-bytes) myfile.write( (&zero),sizeof(char) );


myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); info = 255; //start of bitmap data for(int x=0;x<rows;x++) { for(int y=0;y<columns;y++) { if(*(map_ptr[rows-x-1]+y)==0) //if free space show white pixel { myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); } else if(*(map_ptr[rows-x-1]+y)==1) { //if occupied space show black pixel myfile.write( reinterpret_cast<const char*>(&zero),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&zero),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&zero),sizeof(char) ); } else if(*(map_ptr[rows-x-1]+y)==5) //if location show red pixel { myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); } else //if angle show blue pixel if(*(map_ptr[rows-x-1]+y)==6) { myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); } else //if unexplored show green pixel { myfile.write( (&zero),sizeof(char) );


myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); myfile.write( (&zero),sizeof(char) ); } } for(int z=0;z<padding;z++) { myfile.write( reinterpret_cast<const char*>(&info),sizeof(char) ); } } myfile.close(); } void delay(double time) //a function for time delay. used mostly after the robot is told to stop, to make up for the delay in stopping due to its inertia. { for( double x=0; x<time; x++ ) for( double y=0; y<999999; y++ ); } FINAL CODE

#include <libplayerc++/playerc++.h>

#include <iostream>

#include<fstream>

#include <cstdlib>

#include <ctime>

#include <cmath>

#include"mtsslam.h"

using namespace PlayerCc;

using namespace std;

#define RAYS 32

#define pi 3.141593

#define i_angle 0.0 //initial angle from which to start the FOV of the laser

#define f_angle 180.0 //final angle for the laser FOV

#define beams 361 //no of laser beams to consider while drawing map

#define no_of_particles 75 //total number of particles

#define beams_per_deg 2 //no of laser beams per degree

#define grid_res 20.0 //no of grid cells along one meter

#define max_range 8.0 //maximum range for laser


#define beams_weights 81 //no of laser beams to consider while calculating weights

#define min_front_dist 1.000

#define really_min_front_dist 0.500

#define random_particles 0

char bot;

double range[360];

double areaL,areaC,areaR,areaLC,areaRC,rL,rC,rR,rRC,rLC,
       exploreR,exploreC,exploreL,exploreLC,exploreRC,chck;

int bug=1;

PlayerClient peoplebot("localhost"); //create objects of the required proxy classes

Position2dProxy pp(&peoplebot,0);

LaserProxy lp(&peoplebot,0);

//SonarProxy sp (&peoplebot, 0);

//LogProxy lgp(&peoplebot,0);

particles particles1(no_of_particles,&peoplebot,&pp,&lp);

maps map_set[no_of_particles];

void obstacle_avoid(void);

int main()

{

cout<<"initializing . . . . ."<<endl;

for(int x=0;x<1;x++)

{

peoplebot.Read();

}

srand(time(0));

pp.SetMotorEnable (true);

//lgp.SetWriteState(1);

particles1.configure_laser(i_angle,f_angle,beams_per_deg, max_range);


particles1.set_beams(beams,beams_weights);

set_thetas(&particles1);

particles1.set_grid_res(grid_res);

cout<<"done"<<endl;

pp.ResetOdometry();

particles1.set_initial_pose();

particles1.set_final_pose();

particles1.apply_motion_model();

get_scan(&particles1);

draw_maps(&particles1);

particles1.set_initial_pose();

while(true)

{
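//one SLAM iteration: stop the robot, read the new odometry pose and a fresh laser scan,
//propagate every particle through the motion model, weight the particles with the sensor
//model, resample, redraw the per-particle maps and save the best map as a bitmap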

pp.SetSpeed(0,0);

delay(300);

particles1.set_final_pose();

get_scan(&particles1);

particles1.apply_motion_model();

cout<<"assigning weights . . . . . . . . . . "<<endl;

apply_sensor_model(&particles1);

cout<<"done"<<endl;

particles1.determine_best_particle();

particles1.show_best_particle();

cout<<"resampling . . . . . . . . . . "<<endl;

importance_resampling(&particles1);

cout<<"done"<<endl;

cout<<"drawing map . . . . . . . . . . "<<endl;

draw_maps(&particles1);

cout<<"done"<<endl;

create_bitmap(&particles1);


particles1.set_initial_pose();
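//wander for a fixed number of obstacle-avoidance steps before the next SLAM update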

for(int x=0;x<25;x++)

{

obstacle_avoid();

particles1.show_image_name();

}

}

return 0;

}

void obstacle_avoid(void)

{

double newspeed = 0;

double newturnrate = 0;

double minR=1e9, minL=1e9;

peoplebot.Read();

uint i;

for (i=0;i<360;i++)

{

range[i]=lp.GetRange(i);

}
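//sum the ranges over three 120-beam sectors: right (beams 0-119), centre (120-239) and left (240-359)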

rL=rC=rR=rLC=rRC=0;

int j=120,k=240;

for (i=0;i<120;i++)

{

rR+=range[i];

rC+=range[j];

rL+=range[k];

j++;

k++;

}

exploreR=0;

exploreC=0;


exploreL=0;

if(rR>220)//150

exploreR=1;

else if(rC>150)//300

exploreC=1;

else if(rL>220)//150

exploreL=1;

else
{
//none of the sectors stands out, so mark all three as open for exploration
exploreR=1;
exploreC=1;
exploreL=1;
}

areaL=rL/250;//150

areaC=rC/150;//300

areaR=rR/250;//150

uint count=lp.GetCount();

for (uint j=0;j<count/2;++j)

{

if (minR > lp[j])

minR = lp[j];

}

for (uint j=count/2;j<count;++j)

{

if (minL > lp[j])

minL = lp[j];

}

cout << " areaL: " << areaL

<< " areaC: " << areaC

<< " areaR: " << areaR

<< " minR: " << minR

<< " minL: " << minL

<<" chck: " << chck

<<" expR: " << exploreR

<<" expL: " << exploreL

<<" expC: " << exploreC;

newspeed=.2;

chck = 0;
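//'bug' selects the avoidance mode: 1 drives straight while both side sectors are clear,
//and it switches to 0 when the right sector closes up and to 2 when the left sector closes up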

if (bug == 1)


{

if(areaR>0.8 && exploreR==1 && areaL>0.8 && exploreL==1)

{

newturnrate=0;

bug=1;

}

else if (areaR<0.8)

{

bug=0;

}

else if (areaL<0.8)

{

bug=2;

}

}

else if(bug == 2)

{

if(areaL>0.8 && minL>0.8 && exploreL==1 && chck==0)//1.4,minl>1

{

newspeed=0.15;

newturnrate=0.35;

chck=4;

}

if(areaC>0.8 && exploreC==1 && chck==0 && areaR>0.6 && areaL>0.6)

{

newspeed=0.2;

newturnrate=0;

if(minR<0.8) //<1

{newspeed=0.1;

newturnrate=.4;}

if(minL<0.8) //<1

{newspeed=0.1;

newturnrate=-.4;}

chck=3;

}

if(areaR>0.8 && minR>1 && exploreR==1 && chck == 0 )

{


newspeed=0.15;

newturnrate=-0.35;

chck=1;

}

if((areaL<0.8 && areaC<0.8 && areaR<0.8)||(minR<0.8 && minL<0.8))//1,0.9

{

newspeed=0;

newturnrate=1;

}

bug = 2;

}

else if (bug == 0)

{

if(areaR>1.4 && minR>1 && exploreR==1 && chck==0 )

{

newspeed=0.15;

newturnrate=-0.35;

chck=4;

}

if(areaC>0.8 && exploreC==1 && chck==0 && areaR>0.6 && areaL>0.6)

{

newspeed=0.2;

newturnrate=0;

if(minR<1)

{newspeed=0.1;

newturnrate=.4;}

if(minL<1)

{newspeed=0.1;

newturnrate=-.4;}

chck=2;

}

if(areaL>1.4 && minL>1 && exploreL==1 && chck==0)

{

newspeed=0.15;

newturnrate=0.35;

chck = 1;


}

if((areaR<1 && areaC<0.5 && areaL<1)||(minR<1 && minL<1))

{

newspeed=0;

newturnrate=1;

}

bug = 0;

}

cout << " speed: " << newspeed

<< "turn: " << newturnrate

<< "bug = " << bug

<< endl;

pp.SetSpeed(newspeed, newturnrate);

}


ANNEXURE B

MATLAB Codes

1. Arctan Function

%function for atan2; could have used the MATLAB version but preferred our own

function atanxy = arctan2(x,y)

if x > 0

atanxy = atand(y/x);

elseif x < 0

if y~=0

atanxy = sign(y) * (180 - atand( abs(y)/abs(x) ));

else

atanxy = 180;

end

elseif x == 0 && y == 0

atanxy = 0;

elseif x == 0 && y ~= 0

atanxy = sign(y) * 90;

end

2. Map Drawing

%function to draw occupancy grid. It takes in a scan and plots the

%resulting occupancy grid

function map = drawmap(scan,pose)

%generate the map in matrix form and then pass it to the other function to

%plot it

map = zeros(10,10);

map = map+2;

zmax = 8;

alpha = 1;

beta = 5;

[r,c] = size(map);

thetas = [0;45;90;135;180];

x = pose(1);

y = pose(2);


theta = pose(3);

if theta >= 180

theta = theta-360;

end

if theta <= -180

theta = theta+360;

end

%there will be hundred cells for the 10x10 matrix go through all the cells

%one at a time

for a = 1:r

for b = 1:c

xi = b-0.5;

yi = r-a+0.5;

%calculate range

range = sqrt( power(xi - x,2) + power(yi - y,2) );

phi = arctan2( xi - x, yi - y);

% if phi>180

% phi = phi - 360;

% end

phi = phi- theta + 90;

if phi>180

phi = phi - 360;

end

if phi<=-180

phi = phi+360;

end

%first have to determine which beam the grid corresponds to

k = 1;

for d = 2:5

temp = abs( phi - thetas(d) );

if temp < abs( phi - thetas(k) )

k = d;

end

end
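%k is now the index of the beam whose bearing is closest to phi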

if range >= zmax || range > scan(k)+alpha/2 || abs( phi - thetas(k) ) > beta/2

map(a,b)=2;%map(xi+0.5,r-yi+0.5) = 2

%end

elseif scan(k) < zmax && abs( range - scan(k) ) < alpha/2

map(a,b)=1;%map(xi+0.5,r-yi+0.5) = 1


%end

elseif range<scan(k)

map(a,b)=0;%map(xi+0.5,r-yi+0.5) = 0

end

end

end

initmap(map,pose);

end

3. Initializing Map

%function to draw occupancy grid from given map in matrix form and pose of

%the robot

function map = initmap(array,pose)

[r,c] = size(array);

figure(11);

xmin = 0;

xmax = c;

ymin = 0;

ymax = r;

axis([xmin xmax ymin ymax]);

hold on;

drawrobot( [pose(1) pose(2) deg2rad(pose(3))],'k',3,1,1);

for x = xmin:xmax

l = arlinefeature(10,[0;x],0.001*eye(2));

draw(l,1,'k');

end

for y = ymin:ymax

l = arlinefeature(10,[pi/2;y],0.001*eye(2));

draw(l,1,'k');

end

for y = 1:r

for x = 1:c

if array(y,x) == 1

p1 = pointfeature(201,[x-0.5;r-y+0.5],0.003*eye(2));

map = draw(p1,0,'k');

end

if array(y,x) == 2

p1 = pointfeature(201,[x-0.5;r-y+0.5],0.003*eye(2));

map = draw(p1,0,'g');


end

end

end

end

4. Odometry Noise

%function to add noise to odoparams

function noisyparams = odonoise(odoparams)

a1 = 0.025; %effect of rotation on rotation

a2 = 0.002; %effect of translation on rotation

a3 = 5;%0.05; %effect of translation on translation

a4 = 0.25; %effect of rotation on translation

rot1 = odoparams(1);

trans = odoparams(2);

rot2 = odoparams(3);

noisyparams(1) = rot1 - samp( a1*rot1 + a2*trans );

noisyparams(2) = trans - samp( a3*trans + a4*( rot1 + rot2 ) );

noisyparams(3) = rot2 - samp( a1*rot2 + a2*trans );

%noisyparams = noisyparams';

end

5. Odometry Parameters

%function to calculate the odometry parameters (rot1, trans, rot2) between two poses

function params = odoparams ( istate , fstate )

x1 = istate(1);

y1 = istate(2);

theta1 = istate(3);

x2 = fstate(1);

y2 = fstate(2);

theta2 = fstate(3);

trans = sqrt( power( (x2-x1),2 ) + power( (y2-y1),2 ) );

rot1 = arctan2( (x2-x1),(y2-y1) ) - theta1;

rot2 = theta2 - rot1 - theta1;

params = [ rot1; trans; rot2 ];

end

6. Sampling

%function to sample from normal distribution


%mean is 0 and variance is b

function sample = samp (b)

%randn('seed',sum(100*clock));

sample = sqrt(b) * randn(1);

end

7. Motion Model

%function to implement the motion model for odometry

function plotstate = motionmodel( istate, fstate )

randn('seed',sum(100*clock));

figure(10);

hold on;

axis equal;

axis([-50 50 -50 50]);

x1 = 1000*istate(1);

y1 = 1000*istate(2);

theta1 = istate(3);

x2 = 1000*fstate(1);

y2 = 1000*fstate(2);

theta2 = fstate(3);

%calculate the odo parameters

trans = sqrt( power( (x2-x1),2 ) + power( (y2-y1),2 ) );

rot1 = arctan2( (x2-x1),(y2-y1) ) - theta1;

rot2 = theta2 - rot1 - theta1;

%generate the required number of particles

for x = 1:10

%add noise to the odometry parameters

a1 = 1.25; %effect of rotation on rotation

a2 = 0.002; %effect of translation on rotation

a3 = 20;%0.05; %effect of translation on translation

a4 = 0.25; %effect of rotation on translation

noisyrot1 = rot1 - samp( a1*rot1 + a2*trans );

noisytrans = trans - samp( a3*trans + a4*( rot1 + rot2 ) );

noisyrot2 = rot2 - samp( a1*rot2 + a2*trans );

%get the final pose of the particle

xf = x1 + noisytrans*cosd( theta1 + noisyrot1 );

yf = y1 + noisytrans*sind( theta1 + noisyrot1 );


thetaf = theta1 + noisyrot1 + noisyrot2;

drawrobot( [xf/1000 yf/1000 deg2rad(thetaf)],'g',3,1,1);

end

plotstate = 0;

end

8. Sensor Model

%the sensor model. Input a map, a scan and a pose; it'll return the

%probability of the scan, well not really a probability but it'll return a

%weight in any case

%sample map of 10x10 occupancy grid

%map = [0 0 0 0 0 0 0 0 0 0;0 0 1 1 1 1 1 1 0 0;0 0 1 0 0 0 1 0 0 0;...

%0 0 1 0 0 0 0 1 0 0;0 0 1 0 0 0 0 1 0 0;0 0 1 0 0 0 0 0 1 0;0 1 0 0 0 0 0 0 1 0;...

%0 1 0 0 0 0 0 0 1 0;1 0 0 0 0 0 0 0 0 1;1 0 0 0 0 0 0 0 0 1;]

function scanweight = sensormodel ( map, scan, pose )

var_hit = 1;

zmax = 8;

zhit = 1;

zrand = 0.5;

[r,c] = size(map);

theta_sense = [270;0];%[0; 45; 90; 135; 180];

x = pose(1);

y = pose(2);

theta = pose(3);

q = 1;

for k = 1:2

if scan(k) >= zmax

continue;

end

xz = x + scan(k)*cosd(theta+theta_sense(k));

yz = y + scan(k)*sind(theta+theta_sense(k));

dist2=0;

for b = 1:r


for a = 1:c

if map(b,a)==1

dist2 = ( power(xz-a+0.5,2) + power(yz-r+b-0.5,2));

break;

end

end

if dist2~=0

break;

end

end

for b = 1:r

for a = 1:c

if map(b,a)==1

temp = ( power(xz-a+0.5,2) + power(yz-r+b-0.5,2));

if temp<dist2

dist2 = temp;

end

end

end

end
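%dist2 now holds the squared distance from the beam end point to the nearest occupied cell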

phit = exp(-0.5*(power(dist2,2))/var_hit);

phit = phit/sqrt(2*pi*var_hit);

q = q*( zhit*phit + zrand/zmax);

end

scanweight = q;

end

9. State Update

%function to update the state using the odometry motion model and plot it

function newstate = stateupdate(state1,state2)

x1 = state1(1);

y1 = state1(2);

theta1 = state1(3);

params = odonoise(odoparams(state1,state2));

rot1 = params(1);

trans = params(2);

rot2 = params(3);


x2 = x1 + trans*cosd( theta1 + rot1 );

y2 = y1 + trans*sind( theta1 + rot1 );

theta2 = theta1 + rot1 + rot2;

newstate(1) = x2;

newstate(2) = y2;

newstate(3) = theta2;

figure(1);

%clf;

hold on;

axis equal;

axis([-50 50 -50 50]);

drawrobot( [x2/1000 y2/1000 deg2rad(theta2)],'g',3,1,1);

end


ANNEXURE C

Script for Wireless Settings of PeopleBot

#!/bin/bash

iwconfig eth1 essid "slam"

iwconfig eth1 mode ad-hoc

ifconfig eth0 down

ifconfig eth1 down

ifconfig eth1 up
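
If the script above is saved on the PeopleBot's onboard computer (here under the hypothetical name slam_wireless.sh), it would typically be made executable and run with root privileges; a minimal usage sketch:

chmod +x slam_wireless.sh
sudo ./slam_wireless.sh
iwconfig eth1    # verify that eth1 now reports ESSID "slam" in ad-hoc mode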