


DEGREE PROJECT IN COMPUTER ENGINEERING, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2018

Improving path planning of autonomous vacuum cleaners using obstacle classification

Förbättrad vägplanering för självgående robotdammsugare genom hinderklassificering

NOA SPELE

TOMAS ANDERSSON

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


Improving path planning of autonomous vacuum cleaners using obstacle classification

Förbättrad vägplanering för självgående robotdammsugare genom hinderklassificering

Noa Spele
Tomas Andersson

Degree project in Computer Science, DD142X
Supervisor: Jana Tumová
Examiner: Örjan Ekeberg

School of Electrical Engineering and Computer Science
KTH Royal Institute of Technology

Stockholm, Sweden
June 4, 2018


Abstract

To effectively plan their movement, robotic vacuum cleaners require information about their surrounding environment. This thesis presents a mapping algorithm which uses collision input from a moving robot to create a grid map of the environment. The algorithm also classifies each obstacle cell in the grid map as either mobile or immobile.

Methods for creating maps of unknown environments already exist. However, they often rely on information from expensive sensory equipment, and the maps do not include information about the mobility of the objects in the map. Hence, the aim of this thesis was to investigate if the map could be created using only limited information from more accessible sensors.

A testing environment was created using the Unity 3D game engine, in which the robot was represented by a circular object. The robot had access to perfect information about its current position in relation to its starting position, the direction in which it was heading and any incoming collisions. Three test scenes were created in the simulation environment, representing different common operating spaces for a robotic vacuum cleaner. The scenes contained different kinds of mobile and immobile obstacles that the robot collided with.

A series of tests was then conducted in the test scenes. The tests examined the performance of the created algorithm. The results indicated that the algorithm can create a grid map representation of an unknown environment and classify objects within it with an average correctness of around 80%. However, it is hard to say whether the algorithm would be effective in a real situation, due to the inconsistent results and unrealistic assumptions.


Sammanfattning

Autonomous robotic vacuum cleaners need information about the surrounding environment in order to plan their routes effectively. This report presents a mapping algorithm that uses collision information to create a grid map of the surrounding environment. The algorithm also classifies the obstacles in the grid map as either mobile or immobile.

Methods for mapping unknown environments already exist. However, many of these methods use information from more advanced and expensive sensors, and the maps contain no information about the mobility of the obstacles in the environment. The purpose of this report is therefore to investigate whether such a map could be created with the help of more accessible and simpler sensors.

A test environment was created using the Unity 3D game engine, and in this environment the robot was represented by a circular object. The robot had access to error-free information about its position in relation to its starting position, the direction in which it was moving, and all incoming collisions. Three different test rooms were created, representing common environments in which autonomous robotic vacuum cleaners are typically used. The rooms contained different types of mobile and immobile obstacles, which the robot collided with.

A set of tests was then conducted in these test rooms. The tests examined the performance of the created algorithm. The results showed that the algorithm can create a grid map representing an unknown environment and classify the mobility of the obstacles with an average correctness of around 80%. However, it is hard to determine whether the algorithm would perform as well in a real environment, due to the inconsistent results and the presumably unrealistic assumptions.


Contents

1 Introduction
    1.1 Purpose
    1.2 Problem statement
    1.3 Scope
    1.4 Outline

2 Background
    2.1 Global and local navigation
    2.2 Simple exploration algorithms
        2.2.1 Random motion
        2.2.2 Contour following
    2.3 Path planning
        2.3.1 Path planning in static environments
        2.3.2 Path planning in dynamic environments
        2.3.3 The A* and D* Algorithms
    2.4 Map building
        2.4.1 Mapping with ray-cast data (SLAM)

3 Method
    3.1 Simulation and testing environment
        3.1.1 Obstacle movement
        3.1.2 Test scenes
        3.1.3 Scene 1: regular room
        3.1.4 Scene 2: messy room
        3.1.5 Scene 3: hallway
    3.2 Robot sensory limitations
    3.3 Exploration strategy
    3.4 Mapping algorithm
        3.4.1 Pseudo-code
    3.5 Simulation parameters
    3.6 Verification

4 Results
    4.1 Algorithm visualization
    4.2 Result set 1: Classifying mobile and immobile obstacles
    4.3 Result set 2: Classifying specific obstacles

5 Discussion
    5.1 Result analysis
        5.1.1 Discretization problems
        5.1.2 Exploration effectiveness
        5.1.3 Mobility limit
    5.2 Known limitations
        5.2.1 Assumed perfect localization
        5.2.2 Reliance on chance
        5.2.3 Search algorithm for finding coherent obstacles
        5.2.4 Potentially unrealistic robot movement
        5.2.5 Mobility estimation outside the simulation
        5.2.6 Verification method

6 Conclusion

7 References

8 Appendix
    8.1 Program code
        8.1.1 DiscreteMap.cs
        8.1.2 MapGenerator.cs
        8.1.3 MapObstacle.cs
        8.1.4 MapPoint.cs
        8.1.5 ObstacleBehaviour.cs
        8.1.6 RandomMove.cs
        8.1.7 Verificator.cs


1 Introduction

The use of robotic vacuum cleaners, robotic lawn mowers, and other autonomous robots for performing basic everyday tasks is becoming more and more common in modern homes. In the field of autonomous robotics, a core problem is to determine and understand the unknown operating space in which the robot is working. This includes figuring out where the robot is in the environment and identifying the surrounding geometry. In order to find a time-efficient and space-optimized path, it is important for the robot to understand the different types of obstacles it can encounter. It can also be beneficial for the robot to be able to adjust to potential changes in the operating space, without losing track of its localization and the information it has gathered.

The complexity and pricing of robotic vacuum cleaners vary greatly depending on their sensory capabilities. Although advanced and effective coverage and mapping algorithms exist, many of them rely on expensive sensory input, such as sonar sensors or computer vision. While these methods work in theory and for some professional applications, their cost makes any robot using them too expensive for an average private household. As a result, there is a great disparity between the simple algorithms currently used by most private robotic vacuum cleaners and the much more sophisticated theoretical algorithms.

1.1 Purpose

The purpose of this report is to bridge the gap between current practical and theoretical coverage strategies. This will be done by investigating how simple sensors (such as bumper sensors) can be used to generate a map of a dynamic domestic environment. The idea is that this map could then be used for more advanced path planning algorithms.

1.2 Problem statement

The thesis will investigate the following:

• How can obstacles in a dynamic domestic environment be mapped, using the limited sensory capabilities of a simple robotic vacuum cleaner?

• Is it possible to accurately classify these obstacles based on their mobility?

1.3 Scope

Today’s robotic vacuum cleaners aimed at private households do not usually have access to advanced visual perception devices, because such devices would drastically increase the price of the unit. Most devices today have access to instruments which indicate whether the unit has collided with an obstacle, and an IR laser which can be used to follow objects and measure the distance to surrounding objects. These instruments are the sources of the information that we are going to use.


This study is therefore focused on analyzing the possibility of using the existing technology in autonomous robotic devices to map the surrounding environment. Because of this limitation we do not take into account the possibilities of more advanced sensors, and we are not trying to achieve the most precise and detailed performance; instead we focus on improving robotic vacuum cleaners with limited sensory capabilities. We do not intend to utilize advanced hardware or computer vision.

Also note that while some simple path planning algorithms will be presented and used in the report, the main focus of the study is to investigate environment mapping. Although the intended end purpose of the map is to be applied for more sophisticated path planning algorithms, this application is not within the scope of this report.

1.4 Outline

In section two, the most commonly used strategies for exploring unknown environments, path planning, and map creation will be presented.

In section three, the developed mapping algorithm will be presented and explained. Information about the testing environment and how the tests were conducted is also presented in this section.

In section four, the results from the conducted tests are presented. The following section, section five, is devoted to discussing the produced results.

In section six, conclusions are made based on the discussion in section five, and the problem statement is addressed.

Finally, the references used in the thesis are presented, followed by an appendix including the complete source code used to generate the results.


2 Background

In order to create an efficient and well-performing algorithm it is important to understand the present work in the field, as well as the target devices of the map which the algorithm produces. In this section, different common path-coverage strategies will be presented in order for the reader to be able to understand the main components of the algorithm. After that, different simple path planning algorithms will be presented, in order to create an understanding of where the algorithm will be used. The section ends with a subsection discussing existing strategies for map creation.

2.1 Global and local navigation

The navigation problem of autonomous robots concerns finding the position of the robot, finding the goal position we aim to move it to, and calculating a collision-free path from the robot to the goal position. Although the coverage problem is slightly different from finding a path from point A to point B, the navigation problem is still relevant for more advanced coverage algorithms. The navigation problem can be divided into two sub-problems: global navigation and local navigation [1]. In global navigation, the environment's geometry is known beforehand, and the problem revolves around finding a collision-free path from the robot's current position to a specific goal. In local navigation, the environment is not known beforehand and the aim is to utilize sensory input to detect obstacles and react to them.

2.2 Simple exploration algorithms

In order to create a map based on encountered collisions, the robot has to start by exploring the environment, using one or several simple exploration algorithms. To minimize the robot's hardware requirements, the algorithm may not require more sensory input than short-distance obstacle detectors, such as bumper sensors and IR lasers. As the robot collects information about the environment, it could potentially switch to using more sophisticated algorithms later on. However, this is not within the scope of this report. Some examples of common simple exploration and path-coverage algorithms are random motion and contour following. The details of their implementation may vary, but here follows a description of each algorithm as it is going to be implemented in this report:

• Random motion: The robot travels straight forward until it hits an obstacle. Any time it hits an obstacle it turns to a random direction away from the obstacle and resumes the forward motion.

• Contour following: The robot travels forward while turning slightly left until it hits an obstacle. It then stops, turns right and resumes the forward-left-turning motion. The aim of this motion is to follow the contour of a large obstacle, such as a wall, on the left side of the robot.


2.2.1 Random motion

As the name suggests, the random motion algorithm is based on randomization. The central idea of the algorithm is that if a robot is moving randomly in a closed environment, the robot will most likely cover the whole environment if it gets enough time.

The random motion based algorithm has three main advantages:

• It does not rely on any prior information about the environment, which suits real-world applications, where the environment is often initially unknown.

• It does not get stuck in a repeating pattern, where the robot only covers a few specific parts of the environment repeatedly.

• It is not dependent on any advanced sensors, and only requires the robot to have access to bumper sensors.

The random motion algorithm can be implemented in various ways, but the core structure of the algorithm is the same in each approach. The core structure can be described as [4]:

1. Move forward until a collision is detected.

2. When a collision is detected, analyze the information gathered from the bumper sensor regarding the collision.

3. From the collected data, determine an interval for the new direction which the robot is supposed to follow.

4. Generate a random number that determines which direction to choose from the interval.

5. Repeat the process until a certain time has elapsed, or another constraint is reached.

The benefit of using an implementation similar to the one described above is that the algorithm is computationally cheap, which means that the robot does not need an advanced CPU. Another benefit of using random motion is its unpredictable behavior, which can be utilized as a tool for exploration: the robot will eventually try areas that might otherwise be falsely marked as non-reachable. Figure 1 depicts the general motion of the random motion algorithm.


Figure 1: General motion of the random motion algorithm.
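To make the control flow above concrete, the following is a minimal Unity-style C# sketch of the random motion pattern. It is illustrative only and is not the RandomMove.cs script from the appendix; the class name, the random turning interval of ±80 degrees and the use of Unity's 2D physics callbacks are assumptions made for this sketch.

using UnityEngine;

// Minimal sketch of random motion: drive forward, and on each bumper
// contact pick a random new heading pointing away from the obstacle.
public class RandomMotionSketch : MonoBehaviour
{
    public float speed = 8.0f; // forward speed in distance units per second

    void Update()
    {
        // Step 1: move forward until a collision is detected.
        transform.position += transform.up * speed * Time.deltaTime;
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        // Steps 2-4: read the contact normal (the "bumper" data), then pick
        // a random heading within an interval centered on the away direction.
        Vector2 normal = collision.contacts[0].normal;
        float away = Mathf.Atan2(normal.y, normal.x) * Mathf.Rad2Deg;
        float heading = away + Random.Range(-80.0f, 80.0f);
        // transform.up points along the heading when the z-rotation is heading - 90.
        transform.rotation = Quaternion.Euler(0.0f, 0.0f, heading - 90.0f);
    }
}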

2.2.2 Contour following

The contour following algorithm, also known as the wall follow algorithm [4], is used in situations where it is important to follow a wall or go around an object in an environment. The main benefit of the contour following algorithm, compared to the random motion algorithm, is that it generates grouped collision points which can be treated as a coherent object. However, unlike the random motion algorithm, the contour following algorithm will only move in a repeated pattern around a single object or a cluster of objects. Like the random motion algorithm, the contour following algorithm does not need any knowledge about the environment in which the robot is supposed to operate, other than collision input. The general flow of the algorithm is as follows:

1. Go forward and turn left or right until a collision is detected.

2. Turn in the opposite direction until the robot faces α degrees away from the contact point.

3. Go to step 1.

Just like the random motion algorithm, the contour following algorithm is computationally cheap. The algorithm does not need to store any large data sets about the environment in which it is operating, and it is therefore suited for use on appliances with a low-capacity integrated CPU. The main benefit of this algorithm in the field of autonomous robotic vacuum cleaners is that much of the dust in a room is typically located around objects and along the walls of a room, which is covered by this algorithm. Figure 2 depicts the general motion of the contour following algorithm.


Figure 2: General motion of the contour following algorithm.
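For comparison, here is the same kind of illustrative sketch for the contour-following pattern, under the same assumptions as the random motion sketch above (invented names and sign conventions; not the project's actual script).

using UnityEngine;

// Minimal sketch of contour following: drift forward-left so the robot
// keeps hugging an obstacle on its left, and turn right on contact.
public class ContourFollowSketch : MonoBehaviour
{
    public float speed = 8.0f;      // forward speed
    public float turnRate = 140.0f; // left-turn rate in degrees per second
    public float alpha = 30.0f;     // degrees to face away from the contact point

    void Update()
    {
        // Step 1: travel forward while turning slightly left.
        transform.Rotate(0.0f, 0.0f, turnRate * Time.deltaTime);
        transform.position += transform.up * speed * Time.deltaTime;
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        // Step 2: snap to a heading alpha degrees away from the contact
        // normal, then Update resumes the forward-left motion (step 3).
        Vector2 normal = collision.contacts[0].normal;
        float away = Mathf.Atan2(normal.y, normal.x) * Mathf.Rad2Deg;
        transform.rotation = Quaternion.Euler(0.0f, 0.0f, away - 90.0f - alpha);
    }
}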

2.3 Path planning

Path planning, in contrast to the previously mentioned exploration algorithms, relies on information about the surroundings, and uses this information to find an optimal path, such that the whole area is visited during the traversal along the chosen path and the operating time is minimized. Path planning is used in order to reach a goal as fast as possible. More commonly, in the case of robotic vacuum cleaners, it is used to clean every part of a room as fast as possible.

2.3.1 Path planning in static environments

Static environments are the simplest abstraction of the real world geometry, where all obstacles are assumed to be static. This is useful since it allows the robot to proceed through the environment without the risk of any collisions.

2.3.2 Path planning in dynamic environments

Since objects often are moved around in homes, a static approach is often not suitable for a robotic vacuum cleaner operating in a generic household. A dynamic approach is often necessary in order to achieve sophisticated movement behaviour. The general idea of path planning in dynamic environments is to start with a simple static approach and, in the case of an obstacle in the chosen path, alter the path according to the information gathered from the collision with or detection of the obstructing object.


2.3.3 The A* and D* Algorithms

The A* algorithm is commonly used for path planning in autonomous robots, such as robotic vacuum cleaners. It is an information-based algorithm that relies on a static grid-composed representation of the environment in which the appliance is intended to travel. The grid-composed map consists of small parts called nodes; each node represents a part of the environment and is classified as either free space or an obstacle. The algorithm extends Dijkstra's algorithm with a heuristic in order to find a path from a start node to an end node. The path is chosen in such a way that it incurs the smallest cost, which can be based on time, length or another factor. The benefit of this algorithm is its performance and accuracy, but in order to achieve this the algorithm is required to have a complete map of the environment, which often means unnecessary data and large data sets.

The A* algorithm can be expanded to be used in a dynamic environment and is then often referred to as the D* algorithm. The algorithm can be implemented in multiple ways, but has a common core idea: find a path in the environment from a start node to an end node, where the environment is assumed to be static during this process in order to find a first path. The appliance then starts to move along the found path until any abnormality is detected. The initial static map is then updated with the new information and a new path is generated from the map. This process is repeated until the end node is reached or the algorithm determines that it is impossible to reach the end node. [3] and [10] are some examples of path planning algorithms which use the A*/D* algorithms.
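As an illustration of the grid-based search described above, the following is a compact, generic C# sketch of A* on a boolean occupancy grid with a Manhattan-distance heuristic. It is a textbook version for illustration, not code from this project; the grid representation, 4-connectivity and unit step costs are assumptions.

using System;
using System.Collections.Generic;

public static class AStarSketch
{
    // Returns the cost of the cheapest 4-connected path from start to goal,
    // or -1 if the goal is unreachable. blocked[x, y] marks obstacle nodes.
    public static int ShortestPath(bool[,] blocked, (int x, int y) start, (int x, int y) goal)
    {
        int w = blocked.GetLength(0), h = blocked.GetLength(1);
        int Heuristic(int x, int y) => Math.Abs(x - goal.x) + Math.Abs(y - goal.y);

        var g = new Dictionary<(int, int), int> { [start] = 0 };
        // Open set ordered by f = g + h; coordinates break ties.
        var open = new SortedSet<(int f, int x, int y)> { (Heuristic(start.x, start.y), start.x, start.y) };
        var dirs = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

        while (open.Count > 0)
        {
            var (_, cx, cy) = open.Min;
            open.Remove(open.Min);
            if ((cx, cy) == goal) return g[(cx, cy)];

            foreach (var (dx, dy) in dirs)
            {
                int nx = cx + dx, ny = cy + dy;
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || blocked[nx, ny]) continue;
                int cost = g[(cx, cy)] + 1; // uniform step cost of 1 per cell
                if (g.TryGetValue((nx, ny), out int old))
                {
                    if (cost >= old) continue;
                    open.Remove((old + Heuristic(nx, ny), nx, ny)); // drop the stale entry
                }
                g[(nx, ny)] = cost;
                open.Add((cost + Heuristic(nx, ny), nx, ny));
            }
        }
        return -1;
    }
}

A D* style planner would wrap a search like this in a loop: plan on the current map, move until a collision or detection contradicts the map, update the affected cells and replan.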

2.4 Map building

The central idea of mapping is to translate a complex and often large environment into a computer-readable file or data set. A common method is to store every object in the mapped environment as a set of vertices, where the line segments defined by these vertices form the outline of the object. In robotics the vertices are often extracted using on-board sensors and cameras during the exploration of the unknown environment. Another approach is to use this input to construct a discretized representation of the environment, which can improve the efficiency of some algorithms and allow faster access to geometry data at specified positions, at the cost of precision.

2.4.1 Mapping with ray-cast data (SLAM)

Simultaneous Localization and Mapping (SLAM) is one of the most common approaches for mapping an unknown environment with an autonomous robot. The core idea of the approach is to use a ray-cast scanning device in order to create an occupancy grid map of an unknown environment. The general structure of the approach is as follows:

1. Perform a 360-degree ray-cast scan from the current location.


2. Identify all frontier cells (explained below) using the information gathered from the ray-cast together with all earlier gathered information.

3. Find a path from the current location to one of the frontier cells.

4. Travel along the path to the frontier cell and repeat the process until there are no more frontier cells left in the occupancy grid map.

In the occupancy grid map, each node is marked as either free space, an obstacle, or as unexplored. Each node is initially marked as unexplored [2]. After each ray-cast scan, the distance traveled by the ray in each direction is measured, and all nodes which the rays traveled through are marked as free space. The nodes containing the points where the rays bounced back are marked as obstacles [11]. Frontier cells are nodes in the occupancy grid map marked as free space, with one or more adjacent nodes marked as unexplored. Which frontier cell to travel to in each iteration can be chosen at random or by trying to find the most beneficial one. [5], [9] and [8] are some examples of how this method can be used.
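To make the frontier-cell definition concrete, here is a small generic C# sketch that extracts the frontier cells from an occupancy grid. The cell-state enum and all names are assumptions made for this illustration.

using System.Collections.Generic;

public enum Cell { Unexplored, Free, Obstacle }

public static class FrontierSketch
{
    // A frontier cell is a free cell with at least one horizontally,
    // vertically or diagonally adjacent unexplored cell.
    public static List<(int x, int y)> FindFrontiers(Cell[,] grid)
    {
        int w = grid.GetLength(0), h = grid.GetLength(1);
        var frontiers = new List<(int x, int y)>();
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (grid[x, y] != Cell.Free) continue;
            bool isFrontier = false;
            for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
            {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && ny >= 0 && nx < w && ny < h && grid[nx, ny] == Cell.Unexplored)
                    isFrontier = true;
            }
            if (isFrontier) frontiers.Add((x, y));
        }
        return frontiers;
    }
}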


3 Method

In this section we will present a new mapping algorithm, constructed to solve the mapping and obstacle mobility classification problem.

A simple simulation environment was created in the Unity 3D game engine. The environment contained a mobile robot and obstacles which it collided with. The movement of the robot was simulated in real time. It explored the environment by switching between the previously mentioned random and contour-following exploration strategies.

During the simulation, incoming collision events were used to send input to a mapping algorithm. In the mapping algorithm, the environment space was discretized into small cells. In the end result of the algorithm, each cell in the grid map was classified as either obstacle, empty space or unexplored, depending on the recorded collision events at that cell. Additionally, any cell classified as obstacle was also classified as either mobile or immobile.

3.1 Simulation and testing environment

To investigate the problem we implemented a graphical simulation environment using Unity 3D version 2017.3.0f3. Documentation for this version of Unity can be found at [7]. Associated scripts were written in the C# programming language. The simulation environment consisted of multiple 2D scenarios depicting example domestic environments. Each scene contained multiple obstacles such as walls and furniture that triggered collisions with the robot.

3.1.1 Obstacle movement

Each obstacle in the scenes had an attached ObstacleBehaviour script. This script controlled the movement of the obstacle and contained methods for checking whether or not the obstacle is mobile. Each obstacle used one of the following three movement patterns:

• Stationary: The obstacle does not move around during the simulation.

• Random: Given two time intervals I1 and I2, and a radius r, move the obstacle to a new random location within r distance units of the obstacle's starting position. The movement takes I1 seconds and is repeated after an additional I2 seconds.

• Back-and-forth: Given two time intervals I1 and I2, and a vector v, move the obstacle to its starting position offset by v distance units over I1 seconds. After an additional I2 seconds, move back to the obstacle's starting position over I1 seconds. Wait for I2 seconds before repeating the pattern.

Obstacles using the Stationary movement pattern were considered immobile in the verification algorithm. Obstacles using any of the other patterns were considered mobile. Note that this correct set of classifications was only used to verify the results of the mapping algorithm. The mapping algorithm itself did not have any access to the ObstacleBehaviour class.
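As an illustration, the back-and-forth pattern could be written as a Unity coroutine along the following lines. This is a sketch, not the ObstacleBehaviour.cs script from the appendix; the linear interpolation and all names are assumptions.

using System.Collections;
using UnityEngine;

// Sketch of the back-and-forth pattern: move by offset v over I1 seconds,
// pause I2 seconds, move back over I1 seconds, pause I2 seconds, repeat.
public class BackAndForthSketch : MonoBehaviour
{
    public Vector3 offset = new Vector3(1.0f, 1.0f, 0.0f); // v
    public float moveTime = 1.0f;  // I1
    public float waitTime = 10.0f; // I2

    IEnumerator Start()
    {
        Vector3 origin = transform.position;
        while (true)
        {
            yield return StartCoroutine(MoveTo(origin + offset));
            yield return new WaitForSeconds(waitTime);
            yield return StartCoroutine(MoveTo(origin));
            yield return new WaitForSeconds(waitTime);
        }
    }

    IEnumerator MoveTo(Vector3 target)
    {
        Vector3 from = transform.position;
        for (float t = 0.0f; t < moveTime; t += Time.deltaTime)
        {
            transform.position = Vector3.Lerp(from, target, t / moveTime);
            yield return null; // wait one frame
        }
        transform.position = target;
    }
}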

3.1.2 Test scenes

3.1.3 Scene 1: regular room

The first scene consisted of a single room containing a chair, a very small stationary object and two large rectangular objects. There was a small wall at the top of the room. The purpose of the scene was to test the algorithm's performance in an environment where most of the obstacles were perfectly aligned with the map matrix's coordinates.

Figure 3: Overview of the regular room scene.

Obstacle mobility:

1. Wall up: stationary

2. Wall down: stationary

3. Wall left: stationary

4. Wall right: stationary

5. Wall mid: stationary

6. Small object: stationary

7. Rectangle: stationary

8. Moving chair: random movement (1.0 distance units over 1 second with 10 second breaks)

9. Moving rectangle: random movement (1.83 distance units over 1 second with 15 second breaks)


3.1.4 Scene 2: messy room

The second scene consisted of two rooms divided by a wall with an open doorway. It contained five chairs and two rectangular objects. The moving objects moved relatively rarely. The purpose of the scene was to test the algorithm's performance in an unorganized environment filled with many moving obstacles.

Figure 4: Overview of the messy room scene.

Obstacle mobility:

1. Wall up: stationary

2. Wall down: stationary

3. Wall left: stationary

4. Wall right: stationary

5. Wall mid 1: stationary

6. Wall mid 2: stationary

7. Chair 1: stationary

8. Chair 2: stationary

9. Cube 1: stationary

10. Cube 2: stationary

11. Moving chair 1: random movement (1.0 distance units over 1 second with 30 second breaks)


12. Moving chair 2: random movement (1.0 distance units over 1 second with 30 second breaks)

13. Moving chair 3: random movement (1.0 distance units over 1 second with 30 second breaks)

3.1.5 Scene 3: hallway

The third scene consisted of a tight hallway with two closed doorways. The hallway contained a chair and a moving rectangular object. The purpose of the scene was to test a common room layout, as well as testing the special case where an object moves along a stationary wall.

Figure 5: Overview of the hallway scene.

Obstacle mobility:

1. Wall up 1: stationary

2. Wall up 2: stationary

3. Wall up 3: stationary

4. Wall down 1: stationary

5. Wall down 2: stationary

6. Wall down 3: stationary

7. Wall left: stationary

8. Wall right: stationary

9. Moving chair: random movement (0.58 distance units over 1 second with 15 second breaks)


10. Moving rectangle: back-and-forth (1.414 distance units over 1 second with 10 second breaks)

3.2 Robot sensory limitations

During the simulation, the robot object had access to its current rotation and position in global plane coordinates. The robot was also able to perfectly detect any incoming collisions with obstacles in 2D space. From each collision event, the robot was able to extract the location of the collision's contact point (in global coordinates) as well as the colliding surface's normal vector.

3.3 Exploration strategy

The robot was placed in an unknown environment and did not have any initial information about the environment's geometry. The only information that was present at the start of the exploration was the robot's starting position and rotation in global coordinates. The robot used the previously mentioned random motion and contour following motion patterns to explore the environment.

The robot started by using the random motion mode. The robot switched from random motion to contour-following mode when it collided with an obstacle after running random motion mode for a certain duration. Similarly, the robot switched from contour-following to random motion mode after running contour-following mode for a certain duration. If the robot did not encounter any obstacles during this time, it switched algorithm sooner.

During the simulation, the position of the robot was continuously sent to the mapping algorithm. Also, any time a collision occurred, information about the position of the collision was sent to the mapping algorithm.
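The mode-switching rule could be sketched as follows. This is an illustrative fragment inferred from the description above; the timing fields mirror the parameters listed in section 3.5, and all names are assumed.

using UnityEngine;

// Illustrative mode switching: random motion switches to contour following
// on a collision after a minimum duration; contour following switches back
// after a maximum duration, or sooner if no collisions occur.
public class ModeSwitchSketch : MonoBehaviour
{
    enum Mode { RandomMotion, ContourFollowing }

    Mode mode = Mode.RandomMotion;
    float modeTime;                          // seconds spent in the current mode
    float sinceCollision;                    // seconds since the last collision
    public float minRandomTime = 2.0f;       // Minimum random-mode time
    public float maxContourTime = 3.0f;      // Maximum contour-mode time
    public float maxContourIdleTime = 1.0f;  // Maximum contour-mode time without collisions

    void Update()
    {
        modeTime += Time.deltaTime;
        sinceCollision += Time.deltaTime;
        if (mode == Mode.ContourFollowing &&
            (modeTime > maxContourTime || sinceCollision > maxContourIdleTime))
            SwitchTo(Mode.RandomMotion);
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        sinceCollision = 0.0f;
        if (mode == Mode.RandomMotion && modeTime > minRandomTime)
            SwitchTo(Mode.ContourFollowing);
    }

    void SwitchTo(Mode next)
    {
        mode = next;
        modeTime = 0.0f;
    }
}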

3.4 Mapping algorithm

The map generation program in the simulation used the following key classes:

• MapObstacle: Object representing what the mapping algorithm interprets as a coherent obstacle in the simulation environment.

• MapPoint: Object representing a single discretized cell in the generated grid map. The object stores the cell's current obstacle classification (obstacle, empty space or unexplored), the cell's current local mobility estimate, as well as a pointer to the MapObstacle it is considered to be part of.

• DiscreteMap: The generated collision-based grid map. It represents a discretized section of the global coordinate plane and stores a quadratic matrix of MapPoints.

The program stored information about previously encountered collisions as MapPoints in a quadratic matrix m in the DiscreteMap. The matrix was a discretized representation of a limited section of the global coordinate plane.


A collision event at the position (x, y) in continuous global space updated the matrix data at m[round(s · x)][round(s · y)], where s is the scaling of m relative to global coordinates and round(a) returns a rounded off to the closest integer.
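A direct C# rendering of this conversion, with the scaling applied before rounding, could look as follows. The names are illustrative; negative coordinates would additionally need an index offset into the matrix, which is omitted here.

using System;

public static class GridIndexSketch
{
    // Maps a continuous global position (x, y) to matrix indices, given the
    // map scale s (matrix cells per world distance unit).
    public static (int i, int j) ToCell(float x, float y, float s)
    {
        return ((int)Math.Round(s * x), (int)Math.Round(s * y));
    }
}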

Each MapPoint object in the matrix contained a queue of boolean values and a pointer to a MapObstacle. The queue stored a constant number of the most recent collision statuses at the point. Every time a new value was added to the queue, the oldest value was removed. When a collision was registered at the point, true was added to the queue. When the robot moved over the point without registering a collision, the queue was completely filled with false. A built-in timer prevented the queue from being modified more than once every few seconds, meaning that the robot had to return to the position at a later time to modify the collision data again. The local mobility estimate of the point was calculated as the percentage of values in the queue that were false.

After the robot had explored the environment for a specific duration, the generated map was post-processed. The program iterated through each MapPoint that was registered as an obstacle. From each MapPoint that had not been previously visited during this process, a breadth-first search (BFS) was performed to find a coherent obstacle containing that point. Edges in the BFS were drawn to any point that was registered as an obstacle and was horizontally, vertically or diagonally adjacent to the current point. For each search, a new MapObstacle was created and every MapPoint that was found by the search was assigned to that obstacle. Every MapObstacle that was created during this process was assigned a final mobility estimate equal to the average mobility estimate of the points in the obstacle. Any MapObstacle with a mobility estimate that exceeded a specific constant, referred to as MobilityLimit in this report, was considered mobile.

3.4.1 Pseudo-code

The following pseudo-code segments present the key components of the mapping algorithm. Note that the actual implementation was done using an object-oriented approach in C#. The program code can be found in the appendix. In the pseudo-code, the classes used have been simplified to only be containers of data. The class property x of an object o is denoted o.x.

A MapPoint-object has the following properties:

• type: specifies whether the MapPoint is currently recognized as an obstacle, empty space or if it is unexplored.

• lastHitTime: stores the last time the mobility estimate or type of the MapPoint was modified.

• group: the MapObstacle object pointer that the MapPoint is considered part of.

• history: queue structure of booleans which is used to store and calculate the MapPoint's local mobility estimate.


A MapObstacle-object only has a single property:

• mobilityEstimate: final mobility estimate of the MapObstacle, and thereby all MapPoints assigned to that obstacle.

MarkObstacle. MarkObstacle is called when the robot detects a collision. It marks the MapPoint at the collision's contact point (x, y) in the matrix P as an obstacle. If the MapPoint was already marked as obstacle, the mobility estimate of the MapPoint is decreased.

Input: MapPoint-matrix P, integers (x, y) within the boundaries of P, current time t.

procedure MarkObstacle(P, (x, y), t)
    if P[x][y].type = obstacle then
        AddHitValue(P[x][y], true, t)
    else
        P[x][y].type ← obstacle
        P[x][y].lastHitTime ← t

VisitCircle. VisitCircle is called regularly during exploration to mark any MapPoints under the robot as empty space. This is done by finding all MapPoints in P in a circle with radius r around the robot's position (x, y) and setting their type to empty.

Input: MapPoint-matrix P, integers (x, y) within the boundaries of P, radius r, current time t.

procedure VisitCircle(P, (x, y), r, t)
    for i ← x − r to x + r do
        for j ← y − r to y + r do
            d ← distance between (x, y) and (i, j)
            if d ≤ r then
                AddHitValue(P[i][j], false, t)
                P[i][j].type ← empty
                P[i][j].group ← none

AssignGroups. AssignGroups finds coherent obstacles in P by grouping together any adjacent MapPoints with the type obstacle. MapPoints in the same adjacency group are assigned the same MapObstacle. The mobility estimate of a MapObstacle is the average local mobility estimate of the MapPoints assigned to that MapObstacle. The final mobility estimate of a MapPoint is simply the mobility estimate of its MapObstacle.

Input: MapPoint-matrix P of size n × n.

procedure AssignGroups(P, n)
    R ← all MapPoints in P
    for i ← 0 to n − 1 do
        for j ← 0 to n − 1 do
            if P[i][j].type = obstacle and R contains P[i][j] then
                G ← MatrixBFS(P, (i, j))
                o ← new MapObstacle
                foreach p in G do
                    p.group ← o
                    o.mobilityEstimate ← o.mobilityEstimate + GetMobilityEstimate(p)/G.size
                    remove p from R

MatrixBFS. MatrixBFS uses a variant of BFS to find a coherent group of obstacle points and return it. Edges in the BFS consist of any points that are horizontally, vertically or diagonally adjacent to the current point.

Input: MapPoint-matrix P, integers (x_start, y_start) within the boundaries of P.

procedure MatrixBFS(P, (x_start, y_start))
    V ← {P[x_start][y_start]}
    Q ← queue containing (x_start, y_start)
    while Q is not empty do
        (cx, cy) ← Dequeue(Q)
        for i ← −1 to 1 do
            for j ← −1 to 1 do
                x ← cx + i
                y ← cy + j
                if P[x][y].type = obstacle and V does not contain P[x][y] then
                    Enqueue(Q, (x, y))
                    add P[x][y] to V
    return V

AddHitValue. AddHitValue is used to change the local mobility estimate of a MapPoint p. If a collision was triggered at p, b should be true. If p was visited without triggering a collision, b should be false. The function prevents the mobility estimate from being modified more than once every c seconds.

Input: MapPoint-instance p, boolean value b, current time t.

procedure AddHitValue(p, b, t)
    if t − p.lastHitTime > c then
        p.lastHitTime ← t
        if p.type = obstacle then
            Dequeue(p.history)
            Enqueue(p.history, b)
        else
            fill p.history with false

GetMobilityEstimate. GetMobilityEstimate calculates and returns the local mobility estimate of a MapPoint.

Input: MapPoint-instance p.

procedure GetMobilityEstimate(p)
    count ← 0
    foreach b in p.history do
        if b = false then
            count ← count + 1
    return count / p.history.size

3.5 Simulation parameters

This section contains a list of robot movement and environment mapping parameters that could be changed in the simulation. All results were generated using the same values for all parameters except for Mobility limit, where multiple values were tested. The values in parentheses after the parameter names are the values used to generate the data for the result section.

Robot movement parameters:

• Radius (0.32): Radius of the robot's circular collision component.

• Speed (8): Movement speed of the robot, in Unity distance units per second.

• Rotation speed (140): Angular speed of the robot when using the contour-following movement mode, in degrees per second.

• Contour-following angle (0): Angle outwards from the edge of the wall that the robot rotates to, upon colliding with an obstacle in the contour-following mode.

• Minimum random-mode time (2): Minimum time that the robot uses the random motion mode, before switching mode on the next collision.

• Maximum contour-mode time (3): Maximum time that the robot uses the contour-following mode before switching mode.

• Maximum contour-mode time without collisions (1): Maximum time that the robot uses the contour-following mode before switching mode, if no collisions are detected.


Mapping algorithm parameters:

• Rows and columns (128): Number of rows and columns in the mapping algorithm's quadratic MapPoint-matrix.

• Distance scaling (6): Map scale in relation to Unity distance units.

• Robot radius factor (0.72): Factor to scale the robot radius with when marking empty space under the robot. Note that the radius is also scaled with Distance scaling.

• History length (2): Number of values in each MapPoint's history queue.

• Minimum time between updates (3): The minimum time that has to pass between modifications of the mobility data of a MapPoint.

• Mobility limit (0.15, 0.3, 0.45, 0.6, 0.85): The limit where an obstacle cell is considered mobile. If the final mobility estimate of a cell exceeds this limit, the obstacle is classified as mobile. Otherwise, it is classified as immobile.

3.6 Verification

To verify the correctness of a set of obstacle classifications, the simulation program kept track of the actual last encountered obstacle object at each point. This was done by storing the obstacle's attached ObstacleBehaviour object each time a collision occurred.

The ObstacleBehaviour objects were stored in a matrix that corresponded to the DiscreteMap's MapPoint-matrix. The object's position in the matrix was based on the collision's position in global coordinates, and the same ObstacleBehaviour object pointer could be stored at multiple positions in the matrix. When the verification was executed, the verification program iterated through each MapPoint in the DiscreteMap that was considered an obstacle. It compared the mobility classification of the MapPoint's assigned MapObstacle to the actual obstacle's mobility classification by fetching the ObstacleBehaviour object at the same position in the verificator's ObstacleBehaviour-matrix. If these were equal, the point was deemed correctly classified. Note that the classification of an obstacle could be deemed partially correct if only some of the corresponding MapPoints were correctly classified.

To estimate the overall correctness of the algorithm, the average correctness of all obstacles was calculated. It was also useful to compile the average correctness for mobile and immobile obstacles separately, since the most common category could otherwise skew the data. Note that the verification process only took into account the average correctness of each obstacle classification. It did not account for different sizes of obstacles or how well explored each obstacle was.
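As a sketch of the bookkeeping described above, the per-category average correctness could be computed as follows. Names and the data layout are assumptions; the project's actual Verificator.cs is listed in the appendix.

using System.Collections.Generic;
using System.Linq;

public static class VerificationSketch
{
    // correctness: fraction of an obstacle's explored MapPoints whose mobility
    // classification matches the ground truth; isMobile: the ground truth from
    // the obstacle's attached ObstacleBehaviour.
    public static (double mobileAvg, double immobileAvg) AverageCorrectness(
        List<(double correctness, bool isMobile)> obstacles)
    {
        double Avg(IEnumerable<double> xs) => xs.DefaultIfEmpty(0.0).Average();
        return (Avg(obstacles.Where(o => o.isMobile).Select(o => o.correctness)),
                Avg(obstacles.Where(o => !o.isMobile).Select(o => o.correctness)));
    }
}

Note that, exactly as described above, every obstacle contributes equally to the average regardless of its size or how thoroughly it was explored.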


4 Results

The purpose of this section is to evaluate the effectiveness of the mapping algorithm presented in the previous section. The results were generated by running the simulation in real time and printing logs containing output from the previously mentioned verification algorithm. The tests were generated using different MobilityLimit-values, each of which specifies the limit at which an obstacle is considered mobile.

The result generation was divided into two parts:

• Result set 1 compared the accuracy of classifying mobile and immobile obstacles respectively in each scene, using different MobilityLimit-values.

• Result set 2 compared the accuracy of classifying the mobility of each specific obstacle in every scene, using different MobilityLimit-values.

4.1 Algorithm visualization

Figure 6 shows an example visualization of the obstacle point mobility estimation in scene 2. In the simulation GUI, points with a high mobility estimate are visualized with a bright pink color, while points with a low mobility estimate are blue. Points with a mobility estimate near 0.5 are purple. Note that the mobility estimates are not the final classifications; the mobility estimates vary between 0.0 and 1.0, and the obstacle point is classified as mobile if and only if its mobility estimate exceeds a specific MobilityLimit-value.

Figure 6: Screenshot showcasing the algorithm's obstacle mobility estimation in scene 2.


4.2 Result set 1: Classifying mobile and immobile obstacles

Result set 1 aimed to evaluate the accuracy of classifying mobile and immobile obstacles respectively, using different MobilityLimit-values. A low limit improves the accuracy of classifying mobile obstacles at the cost of accuracy classifying immobile obstacles, and vice versa. It is therefore relevant to find a balanced MobilityLimit-value that provides a high success rate classifying both types of obstacles.

The results were generated for each of the three test scenes, using MobilityLimit-values 0.15, 0.30, 0.45, 0.6 and 0.85. Figures 7, 9 and 11 show the average correct mobile and immobile classification respectively for each scene, using different limits. The y-axis is the average correctness using the specified limit and the x-axis is the verification iteration number. One verification was performed every 10 seconds, meaning that verification 30 occurred after 300 seconds.

The graphs were created using Gnuplot version 5.0 [6]. The curves were calculated using the built-in fit function in Gnuplot, which uses the nonlinear least-squares Marquardt-Levenberg algorithm. The graphs were fitted to a function of the form x/(x + a), chosen because this function goes through the origin and asymptotically approaches 1. The fit used a sample of 200 simulation values for each scene and MobilityLimit.

Figure 7: Results from scene 1. (a) Average correct immobile classification. (b) Average correct mobile classification.


Figure 8: Average correct classification of all obstacles in scene 1.

Figure 9: Results from scene 2. (a) Average correct immobile classification. (b) Average correct mobile classification.


Figure 10: Average correct classification of all obstacles in scene 2.

Figure 11: Results from scene 3. (a) Average correct immobile classification. (b) Average correct mobile classification.


Figure 12: Average correct classification of all obstacles in scene 3.

Figure 13: Results from all scenes together. (a) Average correct immobile classification. (b) Average correct mobile classification.


Figure 14: Average correct classification of all obstacles in all scenes together.

4.3 Result set 2: Classifying specific obstacles

Result set 2 aimed to evaluate the accuracy of classifying each specific obstacle in the scenes. This was useful to find what sorts of obstacles the algorithm had trouble classifying.

The results were generated for each of the three test scenes, using MobilityLimit-values 0.15, 0.30, 0.45, 0.6 and 0.85. Figures 15, 16 and 17 show the average correct classification for each specific obstacle in each scene after running the simulation for 300 seconds, using different limits. Each average value in the graphs was calculated from a sample of 100 simulation values.

Figure 15: Chart showing the obstacle classification performance of scene 1.


Figure 16: Chart showing the obstacle classification performance of scene 2.

Figure 17: Chart showing the obstacle classification performance of scene 3.


5 Discussion

In this section we will discuss the accuracy of the results and the problems they reveal. We will also discuss some known limitations that affect the validity of the results and the practical usefulness of the presented mapping algorithm.

5.1 Result analysis

5.1.1 Discretization problems

The results show that the algorithm, given enough time, could produce quite accurate mobility classifications of the obstacles in the given test scenes. However, there were special cases where the algorithm had problems classifying the obstacles. Most of these problems were side effects of using a discretization of world space to represent the map. For example, the results show that the algorithm had trouble classifying the obstacle Right wall in Scene 1 (see figure 15). This was likely because the wall was perfectly aligned with the y-axis of the discretized map, combined with the colliding edge being positioned midway between two discretized points on the x-axis. This caused the program to first trigger a collision at the points closest to the wall, marking them as obstacles, and then include those same points in the space visiting procedure, marking them as empty space. The result was that Right wall in many cases was classified as mobile. A similar issue occurred when exploring many angled obstacle geometries. However, that problem was reduced by setting the final mobility estimate of each point to the average local mobility estimate of all points in the same coherent obstacle.

A highly inaccurate obstacle classification can be found in figure 17, where Moving rectangle was mostly incorrectly classified for mobility limits above 0.15. This obstacle moved closely along a stationary wall. Due to its close proximity to the wall, the mapping algorithm sometimes considered Moving rectangle as a part of the same obstacle as the wall, which caused it to be incorrectly classified. Another contributing factor to this incorrect classification was that the obstacle moved such a short distance that it permanently overlapped some of the discretized cells. Consequently, these cells were categorized as immobile.

5.1.2 Exploration effectiveness

The algorithm correctly classified most walls, but had less success classifying small stationary objects in the middle of the scenes, as can be seen in figures 15 and 16. Examples include Rectangle in scene 1 and Chair 1, Chair 2, Cube 1 and Cube 2 in scene 2. This may be because the robot uses very simple exploration strategies that are better suited for finding and exploring walls. Random motion has a high chance of finding walls due to their large collision boundaries, and contour-following is well suited for both exploring walls and finding adjacent walls. The robot is significantly less likely to find a small isolated obstacle in the middle of the scene using any of these patterns. It is possible that the robot would be better at finding these obstacles if it also occasionally switched to spiral motion. This was an idea which we discarded as being more suitable for path coverage than for exploration, but the spiral motion could compensate for the lack of long-range sensors. For further studies it could be interesting to find out how different exploration strategies impact the effectiveness of the mapping algorithm.

5.1.3 Mobility limit

Looking at the average correct classification in figure 13, it seems that a very high MobilityLimit-value, such as 0.85, provides better classifications. However, that result is greatly influenced by the fact that the immobile obstacles in the example scenes greatly outnumber the mobile obstacles. Looking at the results from scene 2 (figure 9), where the obstacles move less frequently than in the other scenes, the accuracy of classifying mobile obstacles decreases significantly with higher limits. In that specific scene, a suitable mobility limit to achieve accurate classifications for both types of obstacles seems to be between 0.30 and 0.45. A suitable MobilityLimit for achieving a good average result for both categories in all scenes seems to be 0.6, as can be seen in figure 13. This results in an average correctness of around 80% for both categories.

It should also be noted that the accuracy of classifying immobile obstacles increased noticeably during the entire simulation runs. This is because newly detected obstacle points are classified as mobile by default, and their mobility estimates decrease when they are repeatedly bumped into. As a consequence, longer simulation runs provide better immobile classifications. Figure 13 (a) indicates that the average correct immobile classification was not yet close to a maximum when the simulations were terminated, especially for lower mobility limits. For further studies it would be interesting to find the average simulation time until the accuracy stops increasing.

5.2 Known limitations

5.2.1 Assumed perfect localization

One of the greatest limitations of the presented solution is that it is dependent on perfect localization of the robot. The algorithm requires information about the exact position of the robot in relation to its starting point. It also requires the exact positions of any collisions that are registered. One way that localization could be implemented is by continuously monitoring the distance the robot travels from its starting position and tracking its rotation. The problem with using that localization method for this algorithm is that the positional and rotational error would most likely grow during the exploration process, which would cause the mapping algorithm to detect small movements from all detected obstacles. This would significantly worsen the algorithm's ability to classify obstacle mobility.


5.2.2 Reliance on chance

While the mapping algorithm itself is deterministic, the movement of the robot in the simulation relied on random number generation. Since the robot movement is essential for the effectiveness of the mapping algorithm, this means that the generated results were greatly influenced by random variation. To reduce this problem, most of the results were presented using the average of at least 100 simulation result values. However, more reliable results could have been generated using a higher sample size.

5.2.3 Search algorithm for finding coherent obstacles

The algorithm uses a variant of breadth-first search to find coherent obstacles in the discretized point matrix. A problem with using this method to find coherent obstacles is that the minimum distance for two points to be considered connected becomes heavily dependent on the resolution of the discretized map. For example, if the matrix size 256 × 256 is used instead of 128 × 128, not only will the level of detail in the map geometry be greater, but some points that were previously interpreted as being part of the same obstacle will be split. This makes it very difficult for the algorithm to find coherent obstacles in high-resolution maps. Ideally it would be possible to configure a limit on how far apart two points can be and still be considered part of the same obstacle. However, this is not possible using the current BFS-based solution without making it significantly slower.

5.2.4 Potentially unrealistic robot movement

During the test runs, the simulation used a slightly unrealistic value for the contour-following angle parameter. This caused the robot to always follow the wall perfectly without really "bouncing off" the surface. It allowed the simulation robot to find contours very effectively, which greatly increased the speed of the map generation, compared to using a rougher contour-following pattern.

5.2.5 Mobility estimation outside the simulation

One of the main problems encountered when constructing the algorithm and test cases was which degrees of mobility the robot should be able to detect. Another uncertainty was over what period of time the map should be generated. In the end result of the algorithm, each discretized obstacle point is classified as either mobile or stationary. The result is verified by checking whether the obstacle in the simulation environment has a behavior that causes it to move. However, in a real-world situation, the limit where a human would perceive an object as mobile would most likely depend on the period of time over which it is observed. An object such as a chair may be immobile over the course of a minute or an hour, but mobile over the course of a day.


5.2.6 Verification method

Another limitation was that the verification algorithm did not keep track of how well an obstacle had been explored. If only one point of the obstacle had been explored, and that specific point was correctly classified, that obstacle was deemed 100% correctly classified. Since all newly encountered obstacles were classified as mobile by default, this can make the results a bit misleading. Also note that an obstacle's impact on the results in the verification algorithm was not weighted by its size. A small obstacle being incorrectly classified had the exact same impact on the average correctness as a large obstacle in the same room.


6 Conclusion

The results show that obstacles in a dynamic domestic environment can be mapped and classified by mobility with an average correctness of around 80%, using only localization data and collision detection data from a randomly moving circular robot as input. This can be done using a discretized map of the environment that records collision events at each cell. The accuracy of the mobility classification in this study varied and was tied to each obstacle's shape, its position in relation to other obstacles, and its alignment with the map's coordinate axes. Mobile obstacles that moved less frequently were generally slightly harder to classify accurately than frequently moving ones.



8 Appendix

8.1 Program code

8.1.1 DiscreteMap.cs

/*
The DiscreteMap class contains a matrix of MapPoints which represents a discretized section of the global
coordinate plane. It contains methods used for retrieving data and registering collision events at
specific positions in the map.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Text;
using System;
using System.Linq;

public class DiscreteMap
{
    private MapPoint[][] points;
    public int side { get; private set; } // Note: the matrix is quadratic; columns = rows
    public float scale { get; private set; } // The map is [scale] times larger than world space coordinates.

    public DiscreteMap(int rows, float scale)
    {
        this.side = rows;
        this.scale = scale;
        points = new MapPoint[rows][];
        for (int i = 0; i < points.Length; i++)
        {
            points[i] = new MapPoint[rows];
        }
        for (int i = 0; i < rows; i++)
        {
            for (int j = 0; j < rows; j++)
            {
                points[i][j] = new MapPoint();
            }
        }
    }

    /// Get the MapPoint data at a specific position.
    public MapPoint this[int x, int y]
    {
        get
        {
            return points[x][y];
        }
    }

    /// Mark points under the robot as empty space.
    public void VisitCircle(Vector2Int center, float scaledRadius, float time)
    {
        var dRadius = Round(scaledRadius);
        for (int x = center.x - dRadius; x <= center.x + dRadius; x++)
        {
            for (int y = center.y - dRadius; y <= center.y + dRadius; y++)
            {
                if (IsWithinBounds(x, y))
                {
                    float xDiff = x - center.x;
                    float yDiff = y - center.y;
                    // If point is within circle, mark as empty space.
                    float distance = Mathf.Sqrt(xDiff * xDiff + yDiff * yDiff);
                    if (distance <= scaledRadius)
                    {
                        points[x][y].AddHitValue(false, time);
                        points[x][y].type = MapPoint.PointType.empty;
                        points[x][y].group = null;
                    }
                }
            }
        }
    }

    /// Register a collision at a position.
    public void MarkObstacle(Vector2Int position, float time)
    {
        if (IsWithinBounds(position.x, position.y))
        {
            if (points[position.x][position.y].IsObstacle())
            {
                points[position.x][position.y].AddHitValue(true, time);
            }
            else
            {
                points[position.x][position.y].type = MapPoint.PointType.obstacle;
                points[position.x][position.y].lastHitTime = time;
                points[position.x][position.y].RefreshData();
            }
        }
    }

    /// Check if a discretized position is within the map bounds.
    public bool IsWithinBounds(int x, int y)
    {
        return x >= 0 && y >= 0 && x < side && y < side;
    }

    /// Find a coherent obstacle, starting from points[x][y], using BFS. Assumes that (x, y) is an obstacle point.
    List<MapPoint> MatrixBFS(int x, int y)
    {
        Queue<Vector2Int> q = new Queue<Vector2Int>();
        List<MapPoint> visited = new List<MapPoint>();
        visited.Add(points[x][y]);
        q.Enqueue(new Vector2Int(x, y));
        while (q.Count > 0)
        {
            var current = q.Dequeue();
            for (int i = -1; i <= 1; i++)
            {
                for (int j = -1; j <= 1; j++)
                {
                    var cx = current.x + i;
                    var cy = current.y + j;
                    if (IsWithinBounds(cx, cy) && points[cx][cy].IsObstacle() && !visited.Contains(points[cx][cy]))
                    {
                        q.Enqueue(new Vector2Int(cx, cy));
                        visited.Add(points[cx][cy]);
                    }
                }
            }
        }
        return visited;
    }

    /// AssignGroups modifies the map by finding coherent obstacles and grouping together any
    /// adjacent MapPoints with the type "obstacle".
    public void AssignGroups()
    {
        List<MapPoint> remainingPoints = new List<MapPoint>();
        for (int i = 0; i < side; i++)
        {
            remainingPoints.AddRange(points[i]);
        }
        for (int i = 0; i < side; i++)
        {
            for (int j = 0; j < side; j++)
            {
                if (points[i][j].IsObstacle() && remainingPoints.Contains(points[i][j]))
                {
                    var group = MatrixBFS(i, j);
                    MapObstacle obstacle = new MapObstacle();
                    foreach (var item in group)
                    {
                        item.group = obstacle;
                        obstacle.mobilityEstimate += item.GetMobilityEstimate() / group.Count;
                        remainingPoints.Remove(item);
                    }
                }
            }
        }
        for (int i = 0; i < side; i++)
        {
            for (int j = 0; j < side; j++)
            {
                points[i][j].RefreshData();
            }
        }
    }

    public static int Round(float value)
    {
        return (int)Mathf.Round(value);
    }

    public Vector2Int VectorToMapCoordinates(Vector2 value)
    {
        return Round(value * scale);
    }

    public static Vector2Int Round(Vector2 value)
    {
        return new Vector2Int(Round(value.x), Round(value.y));
    }
}

8.1.2 MapGenerator.cs

/*
MapGenerator is a MonoBehaviour which should be attached to the robot object in the scene.
It is used to pass data from the robot to the DiscreteMap and the Verificator.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

public class MapGenerator : MonoBehaviour
{
    enum ResultMode { result1, result2 }

    [SerializeField]
    ResultMode resultMode;
    float mobilityLimitSingle = 0.3f;
    float[] mobilityLimitMultiple = new float[] { 0.15f, 0.3f, 0.45f, 0.6f, 0.85f };
    [SerializeField]
    bool autoVerify;
    [SerializeField]
    float autoVerifyInterval = 10;
    [SerializeField]
    bool runApplicationInBackground;
    [SerializeField]
    int verificationsPerRun = 10;
    int currentVerification;
    public DiscreteMap dMap = new DiscreteMap(128, 6);
    public Verificator verificator;
    float timer;

    /// Start is called at the beginning of the simulation.
    void Start()
    {
        verificator = new Verificator(dMap);
        Application.runInBackground = runApplicationInBackground;
    }

    /// Update is called every frame during the simulation.
    void Update()
    {
        // Check input. Mainly used for debugging.
        if (Input.GetKeyDown("r"))
        {
            dMap.AssignGroups();
        }
        if (Input.GetKeyDown("v"))
        {
            verificator.RunVerification(mobilityLimitSingle);
        }
        if (Input.GetKeyDown("a"))
        {
            dMap.AssignGroups();
            verificator.RunVerification(mobilityLimitSingle);
        }
        if (autoVerify && autoVerifyInterval >= 5)
        {
            timer += Time.deltaTime;
            if (timer >= autoVerifyInterval)
            {
                timer = 0;
                dMap.AssignGroups();
                if (resultMode == ResultMode.result1)
                {
                    verificator.RunVerification(mobilityLimitMultiple);
                    verificator.PrintResult1();
                    currentVerification++;
                    if (currentVerification >= verificationsPerRun)
                    {
                        SceneManager.LoadScene(SceneManager.GetActiveScene().name);
                    }
                }
                else if (resultMode == ResultMode.result2)
                {
                    verificator.RunVerification(mobilityLimitMultiple);
                    currentVerification++;
                    if (currentVerification >= verificationsPerRun)
                    {
                        verificator.PrintResult2(verificationsPerRun * autoVerifyInterval);
                        SceneManager.LoadScene(SceneManager.GetActiveScene().name);
                    }
                }
            }
        }
    }

    public void VisitPosition(Vector2 center, float radius)
    {
        dMap.VisitCircle(dMap.VectorToMapCoordinates(center), radius * dMap.scale, Time.deltaTime);
    }

    public void TriggerCollision(ContactPoint2D point)
    {
        dMap.MarkObstacle(dMap.VectorToMapCoordinates(point.point), Time.time);
        verificator.MarkObstacle(dMap.VectorToMapCoordinates(point.point), point.collider.gameObject);
    }
}

8.1.3 MapObstacle.cs

/*
MapObstacle represents a coherent obstacle in the DiscreteMap. Note that
a MapObstacle does not contain pointers to the MapPoints it consists of.
Instead, each MapPoint has a pointer to its MapObstacle.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MapObstacle
{
    public float mobilityEstimate = 0;

    public bool IsMobile(float mobilityLimit)
    {
        return mobilityEstimate > mobilityLimit;
    }
}

8.1.4 MapPoint.cs

/*
MapPoint represents a discretized location point in the DiscreteMap.
Each MapPoint has a local, temporary mobility estimate and is given its
final mobility estimate by its MapObstacle.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;

public class MapPoint
{
    public enum PointType { unexplored, empty, obstacle }

    public PointType type = PointType.unexplored;
    const int historyLength = 2;
    const bool immobileByDefault = false;
    const float minimumTimeBetweenUpdates = 3;
    public event Action OnUpdateData;
    Queue<bool> history = new Queue<bool>();
    public float lastHitTime;
    private MapObstacle _group;

    public MapObstacle group
    {
        get
        {
            return _group;
        }
        set
        {
            _group = value;
            if (OnUpdateData != null)
            {
                OnUpdateData();
            }
        }
    }

    public MapPoint(MapObstacle group = null)
    {
        this.group = group;
        for (int i = 0; i < historyLength; i++)
        {
            history.Enqueue(immobileByDefault);
        }
    }

    public bool IsObstacle()
    {
        return type == PointType.obstacle;
    }

    public bool IsEmptySpace()
    {
        return type == PointType.empty;
    }

    public bool IsUnexplored()
    {
        return type == PointType.unexplored;
    }

    /// AddHitValue is used to change the local mobility estimate of a MapPoint.
    /// If an obstacle was detected at the point, use isObstacle = true. If the
    /// point was visited without triggering a collision, use isObstacle = false.
    public void AddHitValue(bool isObstacle, float currentTime)
    {
        if (currentTime - lastHitTime > minimumTimeBetweenUpdates)
        {
            lastHitTime = currentTime;
            if (isObstacle)
            {
                history.Enqueue(isObstacle);
                if (history.Count > historyLength)
                {
                    history.Dequeue();
                }
            }
            else
            {
                FillHistory(false);
            }
            // Null-conditional invocation avoids a NullReferenceException
            // when no listener is subscribed to the event.
            OnUpdateData?.Invoke();
        }
    }

    void FillHistory(bool value)
    {
        for (int i = 0; i < historyLength; i++)
        {
            history.Enqueue(value);
            history.Dequeue();
        }
    }

    /// Returns the point's local mobility estimate. This is not the final mobility
    /// estimate, which should be retrieved from the MapObstacle.
    public float GetMobilityEstimate()
    {
        float value = 0;
        if (history.Count > 0)
        {
            foreach (var item in history)
            {
                if (item == false)
                {
                    value++;
                }
            }
            return value / history.Count;
        }
        return 0;
    }

    /// Trigger a data update event.
    public void RefreshData()
    {
        OnUpdateData?.Invoke();
    }
}

8.1.5 ObstacleBehaviour.cs

/*
ObstacleBehaviour is a MonoBehaviour which should be attached to any obstacles
in the scene. It contains methods for moving the obstacle and checking whether or
not it is mobile.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ObstacleBehaviour : MonoBehaviour
{
    enum MovementPattern { stationary, random, twoPoint }

    [SerializeField]
    MovementPattern movementPattern;
    [SerializeField]
    Vector2 twoPointMoveDistance;
    [SerializeField]
    float randomMoveRadius;
    [SerializeField]
    float interval = 1;
    [SerializeField]
    float moveTime = 1;
    float timer;
    bool moving = false;
    int direction = 1;
    Vector3 originalPosition;
    Vector3 targetPosition;

    /// Start is called at the beginning of the simulation.
    void Start()
    {
        originalPosition = transform.position;
    }

    /// Update is called every frame during the simulation.
    void Update()
    {
        if (IsMobile())
        {
            timer += Time.deltaTime;
            if (moving)
            {
                transform.position += (targetPosition - transform.position) / moveTime * Time.deltaTime;
                if (timer > moveTime)
                {
                    timer = 0;
                    moving = false;
                    direction *= -1;
                }
            }
            else
            {
                if (timer > interval)
                {
                    timer = 0;
                    moving = true;
                    if (movementPattern == MovementPattern.random)
                    {
                        var angle = Random.Range(0, 2 * Mathf.PI);
                        var x = randomMoveRadius * Mathf.Cos(angle);
                        var y = randomMoveRadius * Mathf.Sin(angle);
                        targetPosition = originalPosition + new Vector3(x, y, 0);
                    }
                    else if (movementPattern == MovementPattern.twoPoint)
                    {
                        targetPosition = transform.position + (Vector3)twoPointMoveDistance * direction;
                    }
                }
            }
        }
    }

    /// Returns true if the obstacle has a mobile movement pattern.
    public bool IsMobile()
    {
        return movementPattern != MovementPattern.stationary;
    }
}

8.1.6 RandomMove.cs

/*
RandomMove is a MonoBehaviour which should be attached to the robot object in the scene.
It is responsible for moving the robot, detecting input and sending that input to the
MapGenerator.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Text;

public class RandomMove : MonoBehaviour
{
    enum MovementPattern { random, contourFollow }

    [SerializeField]
    MovementPattern pattern = MovementPattern.random;
    const float speed = 8;
    const float rotationSpeed = 140;
    const float contourFollowAngle = 0;
    const float collisionRotationRange = 89;
    const float radiusFactor = 0.72f;
    float timeSinceLastCollision;
    float timeSincePatternSwitch;
    Rigidbody2D body;
    List<Collision2D> collisions = new List<Collision2D>();
    MapGenerator mapGenerator;
    const float MaxRandomTime = 2;
    const float MaxContourTime = 3;
    const float MaxContourTimeWithoutCollision = 1;
    int contourFollowDirection = 1;
    float radius;

    /// Start is called at the beginning of the simulation.
    void Start()
    {
        body = GetComponent<Rigidbody2D>();
        mapGenerator = GetComponent<MapGenerator>();
        radius = transform.lossyScale.y * GetComponent<CircleCollider2D>().radius;
    }

    /// FixedUpdate is called 50 times per second during the simulation.
    void FixedUpdate()
    {
        timeSinceLastCollision += Time.fixedDeltaTime;
        timeSincePatternSwitch += Time.fixedDeltaTime;
        mapGenerator.VisitPosition(transform.position, radius * radiusFactor);
        if (pattern == MovementPattern.random)
        {
            RandomMoveUpdate();
        }
        if (pattern == MovementPattern.contourFollow)
        {
            ContourFollowUpdate();
        }
        if (collisions.Count > 0)
        {
            timeSinceLastCollision = 0;
        }
        collisions.Clear();
    }

    void RandomMoveUpdate()
    {
        body.velocity = speed * transform.right;
        body.angularVelocity = 0;
        if (collisions.Count != 0)
        {
            float lowerBound = 0;
            float upperBound = float.MaxValue;
            foreach (var item in collisions)
            {
                var a = VectorToAngle(item.contacts[0].normal);
                if (a - collisionRotationRange > lowerBound)
                {
                    lowerBound = a - collisionRotationRange;
                }
                if (a + collisionRotationRange < upperBound)
                {
                    upperBound = a + collisionRotationRange;
                }
                if (upperBound < lowerBound)
                {
                    upperBound += 360;
                }
            }
            var angle = Random.Range(lowerBound, upperBound);
            transform.rotation = Quaternion.Euler(0, 0, angle);
            if (timeSincePatternSwitch > MaxRandomTime)
            {
                if (Random.Range(0, 1f) < 0.3f)
                {
                    contourFollowDirection *= -1;
                }
                pattern = MovementPattern.contourFollow;
                timeSincePatternSwitch = 0;
            }
        }
    }

    void ContourFollowUpdate()
    {
        body.velocity = speed * transform.right;
        body.angularVelocity = 0;
        if (collisions.Count != 0)
        {
            float a = LimitAngle(VectorToAngle(collisions[0].contacts[0].normal));
            if (contourFollowDirection == -1)
            {
                foreach (var item in collisions)
                {
                    float b = LimitAngle(VectorToAngle(item.contacts[0].normal));
                    if ((b > a && b < a + 180) || (b + 360 > a && b + 360 < a + 180))
                    {
                        a = b;
                    }
                }
            }
            else
            {
                foreach (var item in collisions)
                {
                    float b = LimitAngle(VectorToAngle(item.contacts[0].normal));
                    if (!((b > a && b < a + 180) || (b + 360 > a && b + 360 < a + 180)))
                    {
                        a = b;
                    }
                }
            }
            a += (90 - contourFollowAngle) * contourFollowDirection;
            transform.rotation = Quaternion.Euler(0, 0, a);
        }
        else
        {
            transform.rotation = Quaternion.Euler(0, 0, transform.rotation.eulerAngles.z
                + rotationSpeed * Time.fixedDeltaTime * speed * contourFollowDirection);
        }
        if (timeSinceLastCollision > MaxContourTimeWithoutCollision || timeSincePatternSwitch > MaxContourTime)
        {
            pattern = MovementPattern.random;
            timeSincePatternSwitch = 0;
        }
    }

    float LimitAngle(float angle)
    {
        while (angle < 0)
        {
            angle += 360;
        }
        return angle % 360;
    }

    void PrintCollisions()
    {
        StringBuilder stringBuilder = new StringBuilder();
        foreach (var item in collisions)
        {
            stringBuilder.Append("Object: " + item.gameObject + ", Normal: " + item.contacts[0].normal + "\n");
        }
        if (stringBuilder.Length != 0)
        {
            print(stringBuilder.ToString());
        }
    }

    void OnCollisionStay2D(Collision2D other)
    {
        CollisionTriggered(other);
    }

    void OnCollisionEnter2D(Collision2D other)
    {
        CollisionTriggered(other);
    }

    void CollisionTriggered(Collision2D other)
    {
        if (!collisions.Contains(other))
        {
            collisions.Add(other);
            mapGenerator.TriggerCollision(other.contacts[0]);
        }
    }

    float VectorToAngle(Vector2 v)
    {
        float angle = Mathf.Asin(v.y);
        if (v.x < 0)
        {
            angle = Mathf.PI - angle;
        }
        angle *= Mathf.Rad2Deg;
        return LimitAngle(angle);
    }

    public Vector2 Rotate(Vector2 v, float degrees)
    {
        float radians = degrees * Mathf.Deg2Rad;
        float sin = Mathf.Sin(radians);
        float cos = Mathf.Cos(radians);
        float tx = v.x;
        float ty = v.y;
        return new Vector2(cos * tx - sin * ty, sin * tx + cos * ty);
    }
}

8.1.7 Verificator.cs

/*
Verificator is responsible for verifying that the mapping algorithm's categorizations
are correct, as well as producing the output used in the Result section of the thesis.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;
using System.Text;
using System.IO;
using System;
using UnityEngine.SceneManagement;

public class Verificator
{
    class ObstacleResult
    {
        public ObstacleBehaviour obstacle;
        public float mobilitySum;
        public int numberOfPoints;
        public float correctness;
    }

    struct VerificationResult
    {
        public List<ObstacleResult> results;
        public float averageCorrectImmobile;
        public float averageCorrectMobile;
        public float averageCorrectAll;
        public float mobilityLimit;
        public float time;
    }

    VerificationResult lastVerification;
    /// Contains the results using different mobility limit values.
    List<VerificationResult> lastVerificationList = new List<VerificationResult>();
    string outputPath;
    ObstacleBehaviour[][] obstacleMap;
    DiscreteMap map;
    DateTime startDate;
    string fileName;
    ObstacleBehaviour[] allObstacles;
    int currentVerification = 0;
    bool enableLogging = true;

    public Verificator(DiscreteMap map)
    {
        outputPath = System.Environment.GetFolderPath(System.Environment.SpecialFolder.MyDocuments) + "/RVC Results/";
        this.map = map;
        obstacleMap = new ObstacleBehaviour[map.side][];
        for (int i = 0; i < map.side; i++)
        {
            obstacleMap[i] = new ObstacleBehaviour[map.side];
        }
        allObstacles = MonoBehaviour.FindObjectsOfType<ObstacleBehaviour>();
        Array.Sort(allObstacles, (x, y) => x.gameObject.name.CompareTo(y.gameObject.name));
        startDate = DateTime.Now;
        string sceneName = SceneManager.GetActiveScene().name;
        fileName = sceneName + " " + startDate.ToString("yyyy-MM-dd HH.mm.ss") + ".txt";
        if (enableLogging)
        {
            // Ensure the log directory exists before writing to it.
            Directory.CreateDirectory(outputPath + "Logs/");
            StreamWriter writer = new StreamWriter(outputPath + "Logs/" + fileName, true);
            writer.WriteLine("Starting new test run...");
            writer.WriteLine("Start time: " + DateTime.Now.ToString("HH:mm:ss"));
            writer.WriteLine("Scene: " + sceneName);
            writer.WriteLine();
            writer.Close();
        }
    }

    public void RunVerification(float[] limits)
    {
        lastVerificationList.Clear();
        StringBuilder stringBuilder = new StringBuilder();
        currentVerification++;
        stringBuilder.AppendLine("\tVerification " + currentVerification + ", " + Mathf.RoundToInt(Time.timeSinceLevelLoad) + " seconds in:");
        foreach (var mobilityLimit in limits)
        {
            AppendVerification(mobilityLimit, stringBuilder);
        }
        if (enableLogging)
        {
            StreamWriter writer = new StreamWriter(outputPath + "Logs/" + fileName, true);
            writer.WriteLine(stringBuilder.ToString());
            writer.Close();
        }
    }

    public float RunVerification(float mobilityLimit)
    {
        lastVerificationList.Clear();
        StringBuilder stringBuilder = new StringBuilder();
        currentVerification++;
        stringBuilder.AppendLine("\tVerification " + currentVerification + ", " + Mathf.RoundToInt(Time.timeSinceLevelLoad) + " seconds in:");
        float result = AppendVerification(mobilityLimit, stringBuilder);
        if (enableLogging)
        {
            StreamWriter writer = new StreamWriter(outputPath + "Logs/" + fileName, true);
            writer.WriteLine(stringBuilder.ToString());
            writer.Close();
        }
        return result;
    }

    float AppendVerification(float mobilityLimit, StringBuilder stringBuilder)
    {
        List<ObstacleResult> results = ObtainResults(mobilityLimit);
        stringBuilder.AppendLine("\t\tUsing mobility limit " + mobilityLimit + ":");
        float correctnessSum = 0;
        float mobileCorrectnessSum = 0;
        float immobileCorrectnessSum = 0;
        int numMobile = 0;
        int numImmobile = 0;
        foreach (var item in allObstacles)
        {
            var result = results.Find(x => x.obstacle == item);
            stringBuilder.Append("\t\t\t" + item.gameObject.name + ": ");
            if (result != null)
            {
                if (item.IsMobile())
                {
                    numMobile++;
                    result.correctness = result.mobilitySum / result.numberOfPoints;
                    stringBuilder.Append("Expected mobile, " + Mathf.RoundToInt(result.correctness * 100) + " % correct");
                    mobileCorrectnessSum += result.correctness;
                }
                else
                {
                    numImmobile++;
                    result.correctness = 1f - result.mobilitySum / result.numberOfPoints;
                    stringBuilder.Append("Expected immobile, " + Mathf.RoundToInt(result.correctness * 100) + " % correct");
                    immobileCorrectnessSum += result.correctness;
                }
                correctnessSum += result.correctness;
            }
            else
            {
                if (item.IsMobile())
                {
                    numMobile++;
                    stringBuilder.Append("Expected mobile, Not found - 0 % correct");
                }
                else
                {
                    numImmobile++;
                    stringBuilder.Append("Expected immobile, Not found - 0 % correct");
                }
            }
            stringBuilder.AppendLine();
        }
        stringBuilder.AppendLine("\t\tAverage correctness: " + Mathf.RoundToInt(correctnessSum / allObstacles.Length * 100) + " %");
        stringBuilder.AppendLine("\t\tAverage mobile correctness: " + Mathf.RoundToInt(mobileCorrectnessSum / numMobile * 100) + " %");
        stringBuilder.AppendLine("\t\tAverage immobile correctness: " + Mathf.RoundToInt(immobileCorrectnessSum / numImmobile * 100) + " %");
        stringBuilder.AppendLine();
        lastVerification = new VerificationResult();
        lastVerification.results = results;
        lastVerification.mobilityLimit = mobilityLimit;
        lastVerification.averageCorrectAll = correctnessSum / allObstacles.Length;
        lastVerification.averageCorrectMobile = mobileCorrectnessSum / numMobile;
        lastVerification.averageCorrectImmobile = immobileCorrectnessSum / numImmobile;
        lastVerification.time = Time.timeSinceLevelLoad;
        lastVerificationList.Add(lastVerification);
        return correctnessSum / allObstacles.Length;
    }

    List<ObstacleResult> ObtainResults(float mobilityLimit)
    {
        List<ObstacleResult> results = new List<ObstacleResult>();
        for (int i = 0; i < map.side; i++)
        {
            for (int j = 0; j < map.side; j++)
            {
                if (map[i, j].IsObstacle())
                {
                    var res = results.Find(x => x.obstacle == obstacleMap[i][j]);
                    if (res == null)
                    {
                        res = new ObstacleResult();
                        results.Add(res);
                    }
                    res.obstacle = obstacleMap[i][j];
                    res.mobilitySum += map[i, j].group.IsMobile(mobilityLimit) ? 1 : 0;
                    res.numberOfPoints++;
                }
            }
        }
        return results;
    }

    public void PrintResult1()
    {
        StreamWriter writer = new StreamWriter(outputPath + "Result set 2.txt", true);
        // Format:
        // scene mobility_limit verification_nr time avg_correct_mobile avg_correct_immobile avg_correct
        foreach (var verification in lastVerificationList)
        {
            string line = SceneManager.GetActiveScene().name + "\t" + verification.mobilityLimit + "\t" + currentVerification + "\t" +
                Time.timeSinceLevelLoad.ToString("0.0000") + "\t" + verification.averageCorrectMobile.ToString("0.0000") + "\t" +
                verification.averageCorrectImmobile.ToString("0.0000") + "\t" + verification.averageCorrectAll.ToString("0.0000");
            writer.WriteLine(line);
        }
        writer.Close();
    }

    public void PrintResult2(float targetTime)
    {
        StreamWriter writer = new StreamWriter(outputPath + "Result set 3.txt", true);
        // Format:
        // scene time mobility_limit obstacle is_actually_mobile correctness
        foreach (var verification in lastVerificationList)
        {
            foreach (var result in verification.results)
            {
                string line = SceneManager.GetActiveScene().name + "\t" + targetTime.ToString("0") + "\t" + verification.mobilityLimit + "\t" +
                    result.obstacle.gameObject.name.Replace(" ", "_") + "\t" + (result.obstacle.IsMobile() ? 1 : 0) + "\t" +
                    result.correctness.ToString("0.000");
                writer.WriteLine(line);
            }
        }
        writer.Close();
    }

    public void MarkObstacle(Vector2Int position, GameObject obstacle)
    {
        var behaviour = obstacle.GetComponent<ObstacleBehaviour>();
        if (behaviour == null)
        {
            behaviour = obstacle.GetComponentInParent<ObstacleBehaviour>();
        }
        if (map.IsWithinBounds(position.x, position.y))
        {
            obstacleMap[position.x][position.y] = behaviour;
        }
    }
}
