
SLAM Robot
MECHENG 706

David Browne
Ariah-Bella Peters
Abigail Birkin-Hall
Jonathon Beardmore

PROJECT IN MECHATRONICS ENGINEERING

Department of Mechanical Engineering
The University of Auckland

7 June 2015


Table of Contents

Abbreviations

1 Introduction
2 Project Specifications
3 Solution Overview
4 SLAM Theory
5 System Breakdown
  5.1 Arduino Mega 2560
  5.2 Infrared Sensors
  5.3 Ultrasonic Range Sensor
  5.4 MPU
    5.4.1 Overflow
    5.4.2 Position Estimation
    5.4.3 Orientation Estimation
    5.4.4 Final Implementation
  5.5 Sensor Configuration
    5.5.1 Initial Robot Sensor Configuration
    5.5.2 Final Robot Sensor Configuration
  5.6 Data Acquisition and Sensor Processing
    5.6.1 Data Acquisition
    5.6.2 1D Kalman Filter
    5.6.3 Normalisation of Sensors to Robot Coordinates
  5.7 Mecanum Wheels
6 Drive Controller
  6.1 Drive Controller
    6.1.1 Modelling of Drive System
    6.1.2 Gyroscopic Yaw Correction
    6.1.3 Gyroscopic Rotation Control
  6.2 Wall Alignment
7 Control Overview
  7.1 Wall Detection and Orientation
    7.1.1 Gradient Based Corner Detection
    7.1.2 RANSAC
    7.1.3 Obstacle Detection
  7.2 Initialisation
  7.3 Obstacle Avoidance and Trajectory Planning
    7.3.1 Finite State Machine Design
  7.4 Occupancy Grid
8 Graphical Interface
  8.1 Initialise Connection
  8.2 Position Tracking
  8.3 Coverage
9 Difficulties
  9.1 Hardware Faults
    9.1.1 Blown Regulator in Board
    9.1.2 MPU Crashing
  9.2 Original Obstacle Avoidance Approach
10 Results
11 Conclusions and Recommendations

References

Appendix A IR Sensor Calibration
  A.1 Testing of Raw Data
  A.2 Effect of a Capacitor
  A.3 Effect of Kalman Filter
Appendix B Finite State Machine


List of Figures

Figure 1 Roomba Robot
Figure 2 Task Breakdown
Figure 3 SLAM Flow Diagram
Figure 4 SLAM Overview
Figure 5 1D Kalman Filter Example
Figure 6 Accelerometer Test
Figure 7 Sensor Arrangement
Figure 8 Mecanum Wheel Kinematics
Figure 9 Wall Alignment
Figure 10 Corner Detection Example
Figure 11 Object Identification Trial
Figure 12 Visualisation of Collision Detection Process
Figure 13 Expected Robot Path
Figure 14 Grid Layout
Figure 15 Position Visualisation
Figure 16 Grid Visualisation
Figure 17 Robot Path
Figure A1 Raw Sensor Data
Figure A2 Effect of Capacitor
Figure A3 Effect of Kalman Filter
Figure B1 Summary FSM
Figure B2 Detailed FSM

List of Tables

Table 1 Sensor Ranges


Abbreviations

ADC Analog to Digital Converter

DOF Degrees Of Freedom

EKF Extended Kalman Filter

FIFO First In First Out

IR Infra-Red

MPU Motion Processing Unit

MUX Multiplexer

PWM Pulse Width Modulation

RANSAC Random Sample Consensus

SLAM Simultaneous Localisation And Mapping


1. Introduction

Simultaneous Localisation and Mapping (SLAM) can be used to enable motorised autonomous robots to navigate and map an unknown area. SLAM is regarded as a difficult problem in automation and its implementation has only become popular in recent years. SLAM now underpins applications such as automated surveying of areas inhospitable to humans, self-driving cars, and autonomous area coverage.

The SLAM problem is challenging to solve due to the inherent uncertainties in the data used to estimate both the robot's position and information about the surrounding environment. Solutions to the SLAM problem usually utilise a fusion algorithm (such as the Extended Kalman Filter) to combine sensor readings and odometry data, simultaneously providing a more accurate estimate of both the robot's pose and the features of the surrounding environment. By performing pose prediction and feature detection simultaneously, the robot is able to navigate and map an area without any prior knowledge of the environment.

Figure 1 Roomba Robot

The iRobot Roomba is an automated system that uses SLAM to achieve complete coverage of an area in order to perform a vacuum cleaning task. Using a chassis assembled from stock VEX components, an Arduino Mega 2560 for control duties, and a variety of range sensors, a similar system has been implemented to achieve fully autonomous coverage of an unknown area without colliding with any obstacles in that area. This report details the full design and implementation process for this SLAM robot.

2. Project Specifications

This project required that a mobile intelligent robot be developed in the context of performing an autonomous vacuum cleaning task. The resulting robot was expected to completely cover the floor area within a rectangular arena of unspecified dimensions. The robot needed to achieve full coverage in the shortest time possible whilst avoiding several obstacles located randomly within the arena. The obstacles were to be circular in shape and large enough to be easily detected by IR sensors. It was to be assumed that no prior knowledge of the environment would be available. The implemented design was built to allow for flexibility in the dimensions of the arena, limited only by the onboard memory of the Arduino Mega 2560 microcontroller board.

3. Solution Overview

The project was broken up into a number of smaller, more attainable tasks which could then be combined into a final solution. This approach led to a number of modular components, each of which performed a specific task and could be easily modified and debugged. An overview of the task breakdown is shown in Figure 2.

Figure 2 Task Breakdown

The sensors available for use included infrared range finders, a motion processing unit, and an ultrasonic range finder. These were strategically placed on the robot in order to minimise blind spots and reduce sensor error. This was vital to the robot's functioning, as incorrect readings or blind spots could lead to unexpected behaviour and collisions.

A gridded map is used to store previous locations of the robot as well as the locations of obstacles and walls as they are detected. This allows the robot to plan its trajectory based on which parts of the grid it has already covered. The desired coverage pattern was an inwards spiral, as this pattern covers the whole area efficiently while allowing the robot to remain parallel to walls.


Obstacle avoidance is the process of identifying obstacles, calculating the best path around them based on their position relative to the robot, and returning to the original path once an evasive manoeuvre has been completed. This takes a higher priority than the area coverage task.

Trajectory planning is used to determine the best direction for the robot to move in based on the uncovered ground within the map. This is designed to optimise the robot's path of travel such that maximum area is covered in the shortest time possible.

Motion control takes the required movement of the robot and translates this into PWM signals to drive the motors. It also uses the motor output to gather basic odometry measurements, which can be used to estimate the robot's position when little sensor data is available.

Wall and object detection is required for orientation of the robot during operation. Knowing the robot's bearing relative to a straight wall allows the robot to determine its current location more accurately. Wall detection requires the robot to be able to distinguish between walls and cylindrical obstacles, so that it knows whether to turn around the object or reorient against a wall.

Wall alignment is an extension of this, which makes use of a PID controller to orient the robot parallel to a wall. The accuracy of this step is important in avoiding collisions with walls.

The integration of these system components through a finite state machine gives rise to a responsive yet stable overall control scheme.

4. SLAM Theory

For an autonomous robot to perform a mapping exercise, the robot must be aware of its position relative to its surrounding environment. Additionally, in order to know its relative position, knowledge of the surrounding area is needed. Hence the two critical pieces of information required to solve the SLAM problem are deeply interdependent and must somehow be acquired simultaneously. SLAM is the process of building a map of the area surrounding a mobile robot while also navigating this environment [1]. This section gives a brief introduction to the SLAM problem and some of the well-established solutions, and details its implementation on the mobile robot.

The full SLAM process consists of a number of steps, as shown in Figure 3 [1]. In a mobile robot, odometry is often unreliable due to slip, backlash, and uneven surfaces. This means that it cannot be relied on alone to give accurate positional data. Sensor data is used in SLAM to correct this odometry error.

Multiple approaches can be used in SLAM; however, the Extended Kalman Filter approach was recommended for this project. An EKF is responsible for updating the robot's estimated position based upon the combination of sensor data and odometry. Landmarks found using sensors are stored so that, as the robot moves, it can attempt to associate previously seen landmarks with current observations. The EKF also keeps track of an estimate of the uncertainty in the robot's position. The four major components of the SLAM system are: x, the position; z, the feature observations; u, the control inputs; and m, the map. This model can be seen in Figure 4 [2, 3].

The EKF assumes linear models for motion, and that the uncertainty probability is Gaussian. The complete EKF equation system is given by Equation set 1, where X_t is the believed position of the robot at time t, u_t is the control signal at time t, z_t is the observation matrix at time t, A and B are state transition matrices, Q and R are the control and sensor noise respectively, H is the sensor model, K is the Kalman gain, and P is the uncertainty matrix [4].


Figure 3 SLAM Flow Diagram

Figure 4 SLAM Overview

$$
\begin{aligned}
X_t &= A X_{t-1} + B u_t \\
P &= A P A^{T} + Q \\
K &= P H^{T} (H P H^{T} + R)^{-1} \\
X_t &= X_t + K (z_t - H X_t) \\
P &= (I - K H) P
\end{aligned}
\qquad (1)
$$

The first step in the EKF is the prediction step and the second is the correction step. The prediction step calculates an estimate for the position at time t from the previous position and the known control inputs, while the correction step uses the weighted mean of the two Gaussian models to obtain a more accurate estimate for x_t. The effect of the correction step is shown in Figure 5: the correction, shown in blue, is much closer to the measurement, shown in green, than the prediction, shown in red.

These calculations are performed at every time step to maintain an up-to-date record of position.

As the brief introduction above shows, SLAM is a very complex problem whose components take considerable time to implement. In order to reduce computation, and because of the limited sensors available, full SLAM was not used on the mobile robot.


Figure 5 1D Kalman Filter Example

The mean position model shown in Equation 1 has been used to predict robot position. The model used was tested numerous times to reduce the effect of noise. This reduced the need for the EKF, as the model is assumed to be accurate within the dimensions of the expected arena.

By removing the feature prediction and the entire EKF equation set, the computation time required to predict the current position is heavily reduced.

5. System Breakdown

The provided robot consisted of four 2-wire motors attached to mecanum wheels, with an Arduino Mega board as controller. IR and sonar sensors were added to the robot to provide distance measurements in various directions. A 9-DOF MPU was also added in order to increase the accuracy of the pose estimate obtained from odometry. This section details the set-up and placement of the various components that make up the intelligent device.

5.1 Arduino Mega 2560

An Arduino Mega 2560 microcontroller board is used to perform processing and control of the robot. The Mega 2560 has an integrated ADC with a 16-to-1 MUX, allowing for up to 16 analog inputs to be connected. On the completed robot, analog inputs were utilised for battery monitoring and data acquisition from the analog sensors. The drive motors and 2D scanner were controlled using 5 of the board's 15 PWM outputs. Additionally, the serial communication capabilities of the board were used to control and read data from the MPU via I2C. An external Bluetooth transmitter was also connected to the Mega 2560 via UART, allowing for wireless communication with an external computer. This is used for visualisation of the occupancy grid, as explained in Section 8, and for debugging purposes. The Mega 2560 has 256 KB of program memory and 8 KB of SRAM, of which 11% and 54% respectively were used by the final control code.
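For context, a minimal bring-up sketch consistent with this description might look as follows. This is an illustration, not the report's code: the pin wiring is left to the board defaults, and only the Bluetooth baud rate (115200, per Section 8.1) is taken from the report.

#include <Wire.h>

void setup()
{
    Wire.begin();            // I2C master for the MPU
    Serial.begin(115200);    // USB serial for debugging
    Serial1.begin(115200);   // Bluetooth transmitter on UART Serial1
}

void loop() {}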

5.2 Infrared Sensors

Four analog IR sensors are used by the robot to detect the location of nearby objects. Three different models of IR sensor with varying specifications were used, as detailed in Table 1.

Table 1 Sensor Ranges

Sensor       Measurement Range
IR Long      20 - 150 cm
IR Medium    10 - 80 cm
IR Short     4 - 30 cm
Sonar        6 - 255 in

During testing it was found that the IR sensors produced a significant amount of noise when detecting an object. This was because the current drawn when each sensor fired would cause the main voltage rail to drop slightly below the required 5 V, resulting in skewed readings with large jumps in output. A 2200 µF capacitor was placed in parallel with the sensors on the voltage supply rail in an attempt to reduce this noise. While this was largely effective, the addition of a 1D Kalman filter resulted in the best performance in terms of both noise rejection and stability.

5.3 Ultrasonic Range Sensor

One ultrasonic range sensor was used to provide another source of displacement detection. The sonar sensor provided continuous long-distance measurements with a resolution of 2.5 cm. The overall range of the sonar is shown in Table 1. Measurements below the lower end of the measurement range are read as a distance of 15 cm.

The provided sonar yields a controlled, wide beam of coverage. Readings from the sonar give the displacement of an object that could be anywhere within the field of vision; consequently, values from the sonar could refer to a range of positions in front of the sonar.

It was intended for the sonar to point behind the robot in the unlikely event that the robot had to reverse. As explained in Section 9.1.1, the sensor was not included in the final design. This was not a large setback because of the relatively small input it was providing at the time.

5.4 MPU

An external MPU-9150 Motion Processing Unit was used to accurately detect the orientation and acceleration of the robot. The MPU was connected to the main controller (Mega 2560) via an I2C serial interface, acting as the slave device while the Mega 2560 acted as master. Configuration of the MPU was performed during system initialisation, in which the Mega 2560 wrote configuration data to the appropriate control registers. Initialisation included tasks such as enabling the built-in Digital Motion Processor and enabling the external interrupt functionality of the MPU. The MPU consists of a 3-DOF accelerometer, a 3-DOF gyroscope, and a 3-DOF magnetometer, and it simultaneously reads analog data from each of these sensors at a fixed rate. This data is then stored in a FIFO buffer register located on the MPU. The Mega 2560 performed read operations from this buffer via the I2C interface in order to access the latest sensor data.

5.4.1 Overflow

During conversion, the MPU would constantly write sensor data to the FIFO buffer. If this information were not read from the buffer at the same rate at which it was written, the data would build up until the buffer was full and overflow occurred. Buffer overflow caused the data stored in the buffer to fall out of sync, resulting in the Mega 2560 receiving incorrect data when it read from the FIFO buffer. The ideal solution to the FIFO overflow issue would be to perform read operations at a high enough frequency that overflow never occurred. In the context of the robot control program this approach was not feasible, as the program was required to execute large amounts of code for the rest of the robot's functionality. It could therefore not be guaranteed that polling would occur at a sufficient rate, giving rise to the possibility of buffer overflow occurring at any point during the robot's operation.

In an attempt to avoid overflow, different methods were tested. One approach involved bypassing the buffer and reading directly from the MPU sensor registers; however, this inexplicably introduced a large delay between physically manipulating the MPU and the resulting changes in sensor values propagating to the Mega 2560. It was speculated that by bypassing the FIFO buffer, and therefore allowing it to constantly overflow, the overall operation of the device was inadvertently compromised. A second, more successful approach utilised the external interrupt generation functionality of the MPU to notify the Mega 2560 when a new data conversion had occurred. A corresponding interrupt routine was set up in the control code for the Mega 2560 which triggered a read operation of the FIFO buffer. This ensured that the FIFO buffer was always read in a timely manner and reduced the occurrence of overflow. While this approach solved a large portion of the issues, the FIFO buffer would still occasionally fall out of sync, resulting in a single false read occurring. This was mitigated via implementation of a spike detection algorithm on the Mega 2560 to reject obviously false readings. This solution successfully rejected all false readings from the MPU and allowed consistent data to be read from the MPU continuously.
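A minimal sketch of this interrupt-driven read, with a simple spike-rejection helper, is given below. The register addresses follow the MPU-9150/MPU-6050 register map, and the I2C address and interrupt pin wiring are assumptions, not details taken from the report's code.

#include <Wire.h>
#include <math.h>

const uint8_t MPU_ADDR    = 0x68;   // default MPU-9150 I2C address (assumed)
const uint8_t FIFO_COUNTH = 0x72;   // FIFO byte-count register (high byte)
const uint8_t FIFO_R_W    = 0x74;   // FIFO read/write register

volatile bool mpu_data_ready = false;

// Fires on the MPU's data-ready interrupt line
void mpuInterrupt() { mpu_data_ready = true; }

void setup()
{
    Wire.begin();
    // Assumes the MPU INT pin is wired to digital pin 2
    attachInterrupt(digitalPinToInterrupt(2), mpuInterrupt, RISING);
}

// Read one packet from the FIFO once the interrupt has fired
bool readFifoPacket(uint8_t *packet, uint8_t packet_size)
{
    if (!mpu_data_ready) return false;
    mpu_data_ready = false;

    // Number of bytes currently held in the FIFO
    Wire.beginTransmission(MPU_ADDR);
    Wire.write(FIFO_COUNTH);
    Wire.endTransmission(false);
    Wire.requestFrom(MPU_ADDR, (uint8_t)2);
    uint16_t count = (uint16_t)Wire.read() << 8;
    count |= Wire.read();
    if (count < packet_size) return false;

    // Read one packet's worth of bytes out of the FIFO
    Wire.beginTransmission(MPU_ADDR);
    Wire.write(FIFO_R_W);
    Wire.endTransmission(false);
    Wire.requestFrom(MPU_ADDR, packet_size);
    for (uint8_t i = 0; i < packet_size; i++) packet[i] = Wire.read();
    return true;
}

// Spike rejection: discard a sample that jumps further from the previous
// value than is physically possible in one sample period
float rejectSpike(float current, float previous, float max_step)
{
    return (fabs(current - previous) > max_step) ? previous : current;
}

void loop() {}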

5.4.2 Position Estimation

An attempt was made to determine the robot position in the x-y plane using data acquired from the accelerometer. In order to ensure that variation in the mounting orientation of the MPU would not affect this data, an initial setup procedure measured the orientation of the gravitational force vector detected by the MPU. This was then used to transform subsequent readings of acceleration in the x-y plane, ensuring that errors were not introduced by gravitational acceleration. Acceleration data was integrated twice to obtain a value for the position of the robot. While this is conceptually feasible given an ideal signal, the double integration process is extremely sensitive to signal noise.
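The rejected approach amounts to the following discrete double integration (a minimal sketch with illustrative names, not the report's code). Any bias or noise in the acceleration estimate accumulates in the velocity term and grows quadratically in the position estimate, which is why the method fails on a noisy signal:

float vel_x = 0.0f, pos_x = 0.0f;

void integrateAcceleration(float acc_x, float dt)
{
    vel_x += acc_x * dt;   // first integration: velocity
    pos_x += vel_x * dt;   // second integration: position
}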

After studying the accelerometer output, it was determined that the noise measured by the accelerometer due to the vibration of the robot while driving exceeded the amplitude of the actual signal by orders of magnitude. This is shown in Figure 6, which shows the accelerometer output in both the x and y axes while the robot first accelerated and then decelerated in the x direction. Both axes measure almost zero while the robot is stationary, but become very noisy during movement. Although the magnitude of the noise in the axis of movement was greater than in the other axis, the only discernible information is whether the robot is in motion. Application of a 1D Kalman filter in an attempt to remove this noise did not improve the accuracy to a satisfactory level. After trying various other noise rejection and smoothing techniques to improve accuracy, this concept was eventually rejected.

Figure 6 Accelerometer Test (Driving Forwards): raw accelerometer outputs x_raw and y_raw, acceleration (mm s^-2) plotted against time.

5.4.3 Orientation Estimation

Robot orientation was estimated using sensor data provided by the 3-DOF gyroscope incorporated into the MPU. Raw gyroscope data was converted to a roll-pitch-yaw vector, of which only the yaw information was utilised. This allowed for accurate determination of the robot's relative orientation during operation. This data was used by the robot's drive control module to improve the accuracy of pose estimation and to provide closed-loop feedback to the module's yaw PID controller.
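For illustration, relative yaw can be obtained by integrating the gyroscope's z-axis rate over time. The sketch below is not the report's code (which used the MPU's motion-processing output), and the scale factor assumes the gyroscope's +/-250 deg/s range:

float yaw_deg = 0.0f;

void updateYaw(int16_t gyro_z_raw, float dt)
{
    // At the +/-250 deg/s range the nominal sensitivity is 131 LSB per deg/s
    // (this depends on the configured full-scale range)
    float rate_dps = gyro_z_raw / 131.0f;
    yaw_deg += rate_dps * dt;   // accumulate relative yaw
}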

5.4.4 Final Implementation

Despite extensive troubleshooting efforts, the MPU unit still exhibited serious stability issues, resulting in failure to respond to commands from the Mega 2560. This locking behaviour had the potential to freeze the entire control program if it occurred at specific points of program execution. The result was that the robot was unable to consistently operate for the period required to complete a full run of the demonstration arena. A last minute decision was made to disable the MPU for the live demonstration of the robot. The functionality it provided was replaced by fallback odometry-based methods for pose estimation and control. Additionally, a slower but more reliable method was used to correct odometry error by utilising the IR range sensors mounted on the robot. This implementation is detailed further in Section 6.2.


5.5 Sensor Configuration

5.5.1 Initial Robot Sensor Configuration

To maximise sensor coverage, the sensors were initially placed in the layout shown in Figure 7. This configuration allowed the medium IR sensors to monitor the distance to any obstacle or wall as the robot passed close to it. The scanning unit is mounted on the front of the robot to provide a sweeping view of what is in front of the robot.

The scanning unit consists of a long range and a short range IR sensor mounted on top of the servo motor. The servo motor rotates through 160°, which allows the IR sensors to gather distance information from anywhere within that range. Sensors with different ranges were used to increase the effective range of the scanner to 3-150 cm from the individual ranges listed in Table 1.

The sonar sensor was mounted facing the rear of the robot so that it could detect obstacles when the robot was reversing. Through testing it was found that the distance values from the sonar sensor were far less accurate than those obtained from the IR sensors, so it could not be used for wall detection or obstacle detection where a precise distance was required. However, by elevating the sonar sensor and positioning it so that it pointed down behind the robot, the sensor was accurate enough to detect when an object was too close to the back of the robot.

5.5.2 Final Robot Sensor Configuration

The design in the previous section had reduced visibility in close proximity due to the sensors' lower effective range. The original design was altered to account for this deadband by arranging the sensors as shown in Figure 7.

Figure 7 Sensor Arrangement: Original Layout (left) and Final Layout (right), with sensor deadbands indicated.

The medium sensors are placed on the opposite side of the robot to allow their deadband to encompass the centre of the robot. This allows the effective range to start not far beyond the edge of the robot. The same logic was applied when moving the scanner unit back further into the centre. The MPU has been mounted onto a vertical support in the centre of the robot in order to get more accurate readings from the accelerometer.

5.6 Data Acquisition and Sensor Processing

5.6.1 Data Acquisition

Data acquisition from the analog sensors was performed using the 10-bit onboard ADC of the Mega 2560. The ADC input is multiplexed, allowing for fast switching between each of the sensors. Reading from the ADC using the standard Arduino functionality comes with an associated overhead of waiting for the conversion to complete before the program can continue. A regular ADC conversion takes 13 ADC clock cycles to complete (roughly 104 µs at the Arduino default 125 kHz ADC clock). While this overhead is small, it was incurred extremely frequently during the operation of the robot. A custom solution was implemented to utilise the internal interrupt generated by the ADC to notify the main program when a conversion had completed. The converted value was stored in memory upon completion, allowing it to be read in a non-blocking manner when required. This procedure allowed the ADC to run continuously, cycling through all 4 analog sensor inputs and ensuring up-to-date sensor readings are available to the rest of the program.

Program 1 Interrupt Service Routine for ADC Conversion

ISR(ADC_vect)
{
    // Read ADC registers
    volatile uint16_t reading = ADC;
    // Save reading to the sensor object
    AnalogSensor::update_current(reading);
    // Start ADC conversion on the next sensor in the collection
    AnalogSensor::read_next();
}
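The ISR above assumes the ADC has been configured in interrupt mode. A hypothetical setup companion (not from the report) using the standard AVR registers might look like this: ADMUX selects the AVcc reference and starting channel, and ADCSRA enables the ADC (ADEN) and its interrupt (ADIE), sets the /128 prescaler, and starts the first conversion (ADSC). Channels above 7 on the Mega 2560 additionally require the MUX5 bit in ADCSRB.

#include <avr/io.h>

void adc_setup(uint8_t first_channel)
{
    ADMUX  = (1 << REFS0) | (first_channel & 0x07);
    ADCSRA = (1 << ADEN) | (1 << ADIE) | (1 << ADSC)
           | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);
}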

5.6.2 1D Kalman Filter

The principle of the Kalman filter is to make educated updates to the sensor readings based on the current raw data, previous readings, and the standard deviation of each sensor. This makes it ideal for processing signals which are prone to high process and signal noise. The Kalman filter algorithm consists of two main update stages, as detailed below.

1. Update the reading based on control values and add the uncertainty in the control values to the uncertainty in the sensor reading. Due to fast reading times in our code, the change in value due to robot speed between each sensor reading is assumed to be small. The uncertainty in these readings is reasonably large to allow the filter to compensate for sudden movements.

2. Update the Kalman gain based on the current uncertainty value and the standard deviation of sensor readings, update the reading based on the raw input, and update the uncertainty based on the Kalman gain. This is the step which intelligently combines the previous and current data points based on the weighted certainty of each.

Program 2 1D Kalman Filter

float AnalogSensor::apply_filter(float input)
{
    // Prediction update
    _kalman_state.p = _kalman_state.p + _kalman_state.q;

    // Measurement update
    _kalman_state.k = _kalman_state.p / (_kalman_state.p + _kalman_state.r);
    _kalman_state.x = _kalman_state.x + _kalman_state.k * (input - _kalman_state.x);
    _kalman_state.p = (1 - _kalman_state.k) * _kalman_state.p;

    return _kalman_state.x;
}

5.6.3 Normalisation of Sensors to Robot Coordinates

In order to perform feature and obstacle detection, data from all of the sensors needed to be normalised and fused. This was straightforward for the side-mounted IR sensors, as a simple translation could be performed. For the scanner, data from the long and short range sensors needed to be fused as well as normalised. Fusion was achieved by taking readings from only the long range sensor when the short range sensor was outside its effective zone, and vice versa. Where both sensors were within range, an average of the two was taken.
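This fusion rule can be sketched as follows, using the effective ranges from Table 1 (function and variable names are illustrative, not taken from the report's code):

float fuseScannerReadings(float short_cm, float long_cm)
{
    bool short_valid = (short_cm >= 4.0f && short_cm <= 30.0f);
    bool long_valid  = (long_cm >= 20.0f && long_cm <= 150.0f);

    if (short_valid && long_valid)
        return 0.5f * (short_cm + long_cm);   // overlap region: average
    if (short_valid)
        return short_cm;
    if (long_valid)
        return long_cm;
    return -1.0f;                             // out of range for both sensors
}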

Normalisation of the readings to the robot coordinate system was achieved by first determining the relationship between servo position and absolute angle, and then using this angle to decompose the distance readings into x and y coordinates. These readings, relative to the centre of the servo, could then be translated into the coordinate system about the centre of the robot.

Program 3 Position Normalisation

Point AnalogSensor::transform_to_robot_origin(float value)
{
    return (Point){
        value * cos(_position.theta) + _position.x, // x coordinate
        value * sin(_position.theta) + _position.y  // y coordinate
    };
}

5.7 Mecanum Wheels

The mecanum wheels use tilted rollers positioned around the edge of each wheel to move the robot in any 2D direction by controlling the rotation of the motors. Figure 8 shows the velocity vector that results from a positive rotation about the y axis of each wheel [5].

Equation 2 is derived by analysing how each wheel affects the overall movement of the robot; it gives the resulting movement of the robot for a given set of wheel rotations. Here θ_a is the angular velocity of wheel a, v_j is the velocity of the robot in the j direction, and ω_z is the angular velocity about the z-axis.

$$
\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}
= \frac{R}{4}
\begin{bmatrix}
1 & 1 & 1 & 1 \\
-1 & 1 & 1 & -1 \\
\frac{-1}{l_1+l_2} & \frac{1}{l_1+l_2} & \frac{-1}{l_1+l_2} & \frac{1}{l_1+l_2}
\end{bmatrix}
\begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \end{bmatrix}
\qquad (2)
$$

A VEX Motor Controller 29 is used to set the input to the 2-wire motors attached to each mecanum wheel. The motor controller converts a standard PWM signal into a voltage for driving a motor. Refer to Section 6 for more details.
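For illustration, commanding a motor through the MC29 uses the standard RC-servo pulse range, with roughly 1000 µs as full reverse, 1500 µs as neutral, and 2000 µs as full forward. The pin assignment below is an assumption, not the report's wiring:

#include <Servo.h>

Servo front_left;

void setup()
{
    front_left.attach(6);   // PWM pin wired to the motor controller
}

// speed in [-1, 1]; maps to the MC29's 1000-2000 us pulse range
void setWheel(float speed)
{
    int pulse_us = 1500 + (int)(speed * 500.0f);
    front_left.writeMicroseconds(pulse_us);
}

void loop() {}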


Figure 8 Mecanum Wheel Kinematics

6. Drive Controller

6.1 Drive Controller

6.1.1 Modelling of Drive System

A set of drive actuation commands was determined based on the inverse kinematics for mecanum wheels, as described by Equation 3. Control commands were derived to perform forwards, reverse, left and right strafing, and rotational movements. This allowed the higher level control mechanisms to make full use of the flexibility provided by the mecanum wheels without needing to be concerned with the implementation.

$$
\begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix}
= \frac{1}{R}
\begin{bmatrix}
1 & 1 & -(l_1+l_2) \\
1 & -1 & l_1+l_2 \\
1 & -1 & -(l_1+l_2) \\
1 & 1 & l_1+l_2
\end{bmatrix}
\cdot
\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}
\qquad (3)
$$
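Expressed in code, Equation 3 expands to the following wheel-speed computation (a sketch with illustrative names; the wheel radius R and geometry term L = l1 + l2 are assumed values, not the robot's measured dimensions):

const float R = 0.03f;   // wheel radius in metres (assumed)
const float L = 0.20f;   // l1 + l2 (assumed)

void inverseKinematics(float vx, float vy, float wz, float w[4])
{
    w[0] = (vx + vy - L * wz) / R;
    w[1] = (vx - vy + L * wz) / R;
    w[2] = (vx - vy - L * wz) / R;
    w[3] = (vx + vy + L * wz) / R;
}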

To obtain an accurate estimate of the robot pose during operation, the forward kinematic model for the mecanum wheeled robot (Equation 2) was employed to determine the instantaneous velocity of the robot. A discrete integration was implemented to measure the linear and angular displacement of the robot based on this instantaneous velocity. It was found via experimentation that, while the kinematics are idealised by Equation 2, an amount of slip occurs during strafing and rotation that is not accounted for by this model. A slip coefficient was determined experimentally for each of the motion types and implemented as shown in Program 4.

Program 4 Implementation of Forward Kinematics for Mecanum Wheels

// Convert to x, y, and theta velocities; computation time is reduced by using
// the pre-computed constant Rw_DIV_4 = (wheel radius)/4
float x = time_delta_s * Rw_DIV_4 * (w[0] + w[1] + w[2] + w[3]);
float y = Y_AXIS_CORRECTION * time_delta_s * Rw_DIV_4 * (w[0] - w[1] - w[2] + w[3]);
float theta = THETA_CORRECTION * time_delta_s * Rw_DIV_4 *
              ((w[0] * L_RATIO) - (w[1] * L_RATIO) + (w[2] * L_RATIO) - (w[3] * L_RATIO));

Additionally, it was noted that when the stop command was sent to the motors, the robot would coast for a period rather than actively braking. This resulted in a slow drift in odometry accuracy over the period of robot operation. It was corrected by assuming that the robot undergoes a linear deceleration during a braking period after the stop command is sent. This linear deceleration was determined experimentally for each of the motion types (forward/reverse, left/right, and rotation) and implemented as shown in Program 5. This additional modelling resulted in much greater accuracy of the odometry-based pose estimation, allowing the robot to function to an acceptable level of accuracy without the need for more complex SLAM implementations such as the Extended Kalman Filter.

Program 5 Modelling of Coasting Behaviour of Robot During Braking

if (_previous_state.direction == FORWARDS || _previous_state.direction == REVERSE)
    deceleration = BRAKING_X;
else if (_previous_state.direction == LEFT || _previous_state.direction == RIGHT)
    deceleration = BRAKING_Y;
else
    deceleration = BRAKING_THETA;

// Calculate correction to displacements based on a linear deceleration ramp.
// This logic accounts for this method being called at any point in time after
// the stop command has been given, and incrementally applies the correction
// based on elapsed time.
float correction = stopping_time_delta *
    (prev_w - (deceleration * (2 * stopping_time + stopping_time_delta) / 2));

6.1.2 Gyroscopic Yaw Correction

Due to manufacturing variations in the drive system, it was found that the robot had a tendency to veer off to one side and had difficulty maintaining a constant orientation. This was particularly noticeable when operating at low speeds. To ensure that the robot would maintain the desired yaw orientation, a PID controller was implemented, using the gyroscope data acquired from the MPU to form a closed-loop control system. The controller operated at a fixed sampling rate via a software timer interrupt on the Mega 2560. This allowed the controller to operate continuously in the background without the need to explicitly initiate control. The final tuned PID controller was extremely effective and resulted in much improved straight-line tracking. Unfortunately, due to the stability issues encountered with the MPU, the PID controller was not operational during the official demonstration. Instead, a fallback was implemented that applied fixed offset adjustments to each motor based on experimentally measured differences in angular velocity.
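A minimal sketch of such a yaw loop is given below; the gains and names are illustrative, not the tuned values used on the robot. Called at a fixed rate from the timer interrupt, it returns a differential correction to be applied across the left and right motors:

float yaw_pid(float setpoint_deg, float measured_deg, float dt)
{
    static float integral = 0.0f, prev_error = 0.0f;
    const float Kp = 2.0f, Ki = 0.1f, Kd = 0.05f;   // assumed gains

    float error = setpoint_deg - measured_deg;
    integral += error * dt;
    float derivative = (error - prev_error) / dt;
    prev_error = error;

    return Kp * error + Ki * integral + Kd * derivative;
}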


6.1.3 Gyroscopic Rotation Control

Accurate angular position control was obtained by repurposing the same PID controller discussed above, offsetting the setpoint angle by the desired rotation of the robot. This allowed the robot to operate precisely within the arena without the need to continually re-reference against the walls to verify angular position. Again, this feature had to be disabled for the demonstration due to MPU stability issues. A fallback was implemented that required the robot to manually re-align to the walls of the arena at each orientation change. This was accurate and effective; however, it incurred a large time cost.

6.2 Wall Alignment

Once the IR sensors had been calibrated and normalised, a function was written to allow the robot to align to a wall based on the difference between the scanner and side IR readings. As shown in Figure 9, once the scanner data has been decomposed into its X and Y components, these can be compared directly to measurements given by the left side sensor. A PID controller was implemented to achieve an accurate and time-efficient response.

Figure 9 Wall Alignment

7. Control Overview

The control system used a combination of reactive and deliberative control to achieve coverage and obstacle avoidance.


7.1 Wall Detection and Orientation

7.1.1 Gradient Based Corner Detection

Corner detection was initially implemented by performing numerical differentiation on scan data. From this, points at the pinnacle of steep changes in gradient could be identified. Threshold values were set experimentally to differentiate between arena corners and noise in sensor readings. Pseudocode for this corner detection method can be seen in Program 6 below.

Program 6 Find Corner Using Gradient Change

// Find the scan index with the sharpest gradient change (corner candidate).
// Assumes a 160-sample scan (one reading per degree over the 160 degree sweep).
int FindCorner(float readings[])
{
    int cornerPos = -1;
    float cornerPosValue = 0;

    // Corner extraction: sum gradient changes over a six-sample window
    for (int i = 3; i < 157; i++)
    {
        float diff = (readings[i-2] - readings[i-3]) + (readings[i-1] - readings[i-2])
                   + (readings[i]   - readings[i-1]) + (readings[i]   - readings[i+1])
                   + (readings[i+1] - readings[i+2]) + (readings[i+2] - readings[i+3]);
        if (diff > cornerPosValue)
        {
            cornerPos = i;
            cornerPosValue = diff;
        }
    }
    return cornerPos;
}

While fast and somewhat effective, this method was found to be highly susceptible to detecting false corners due to both sensor noise and angles created by objects adjacent to walls. Although this problem could have been somewhat alleviated by adding special cases and filtering the input data further, it was felt that a more robust system was necessary.

7.1.2 RANSAC

A lightweight version of the RANSAC algorithm was the final solution to identifying walls and corners from the data points returned by the scanner. A set of points and various thresholding values are input to the function, and the equation for a wall relative to the centre of the robot is output. The function may be run multiple times on the same set of data, allowing the discovery of multiple walls from a single scan. Flags within each point structure indicate whether the point has already been associated with a previously discovered wall, preventing the algorithm from finding the same wall multiple times. These wall equations can then be solved simultaneously to give the coordinates of any corners within the scanned area. The algorithm operates based on the pseudocode outlined below; a code sketch follows the list.

1. Check that the number of available, unused points is greater than the minimum number required to identify a wall. This saves time by exiting the function early with an error code if there are not enough points available.

2. Sample random points until two unused points have been identified.

3. Form the equation for a line between these points.

4. Cycle through all points, calculating the distance between each unused point and the proposed line. Count the total number of unused points close enough to be considered inliers, and calculate the average distance between the line and the associated inliers.

5. If the number of inliers for the proposed line is greater than the set minimum, and the average inlier distance is less than that of the best line previously proposed, this line becomes the best line.

6. Repeat the process from step 2 for the specified number of iterations.

7. Mark all points associated with the best found wall as used, and return the equation for this wall.
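The following sketch illustrates steps 2 to 6. The structure names, implicit-line representation, and parameter choices are illustrative, not the report's code, and step 7 (marking the winning inliers as used) is omitted for brevity:

#include <stdlib.h>
#include <math.h>

struct ScanPoint { float x, y; bool used; };

// Fit a wall ax + by + c = 0 to the unused points; returns true on success
bool ransacWall(ScanPoint pts[], int n, int min_inliers, float inlier_dist,
                int iterations, float &a, float &b, float &c)
{
    bool found = false;
    float best_avg = 1e9f;

    for (int it = 0; it < iterations; it++)
    {
        // Sample two distinct unused points
        int i = rand() % n, j = rand() % n;
        if (i == j || pts[i].used || pts[j].used) continue;

        // Line through the two points in implicit form
        float la = pts[j].y - pts[i].y;
        float lb = pts[i].x - pts[j].x;
        float lc = -(la * pts[i].x + lb * pts[i].y);
        float norm = sqrtf(la * la + lb * lb);
        if (norm == 0) continue;

        // Count unused inliers and accumulate their distances
        int inliers = 0;
        float dist_sum = 0;
        for (int k = 0; k < n; k++)
        {
            if (pts[k].used) continue;
            float d = fabsf(la * pts[k].x + lb * pts[k].y + lc) / norm;
            if (d < inlier_dist) { inliers++; dist_sum += d; }
        }

        // Keep the candidate with enough inliers and the lowest average distance
        if (inliers >= min_inliers && dist_sum / inliers < best_avg)
        {
            best_avg = dist_sum / inliers;
            a = la; b = lb; c = lc;
            found = true;
        }
    }
    return found;
}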

With correctly tuned parameters, this function was able to detect multiple walls within a single scan, even when there were objects between the robot and the wall. A separate function was defined to find corners when given the equations for multiple walls. The completed output of a wall and corner identification sequence can be seen in Figure 10. Here, the blue points represent the data returned by the infrared scan, the red lines show the identified walls, and the yellow point shows the calculated position of the corner. Further outputs were also implemented to give information about the endpoints of the wall segment seen by the robot.

Figure 10 Corner Detection Example: (a) Robot Position, (b) RANSAC Output

This system was robust enough to allow for actions such as calculating the angle of the robot relative to the wall and turning to be parallel to it, or finding the minimum distance from the robot to the wall, even when this point was out of the scanner's field of view.

7.1.3 Obstacle Detection

An adapted version of the Spike object detection algorithm described in [1] was implemented to detect obstacles within the scan field. This worked on the basis of finding large jumps in the scan values, with a corresponding change back to the original value at an appropriate distance from the first. One of the trials of this algorithm can be seen in Figure 11, with the (incorrectly) identified object centre marked in yellow. While this algorithm was adequate for identifying the approximate position and size of obstacles, it was deemed unnecessary in the final implementation of the control system.

7.2 Initialisation

In order to perform an inwards spiralling sweep of the area to be cleaned, it was decided that the robot should find the perimeter of the arena after being placed at a random starting position. This was achieved by scanning for a wall and finding the angle of this wall relative to the robot.


Figure 11 Object Identification Trial

This information could then be used to turn and drive the robot towards the wall. When it reached a specified distance from the wall, the robot turned by 90° and aligned itself parallel with the wall. If a wall was not found in the first scan of the area, the robot would rotate and perform another scan.

7.3 Obstacle Avoidance and Trajectory Planning

7.3.1 Finite State Machine Design

An intelligent controller in the form of a Finite State Machine (FSM) was implemented to ensure maximum area coverage was achieved while also performing obstacle avoidance duties. The use of an FSM for control resulted in predictable, deterministic behaviour of the robot. This greatly reduced development time and simplified the debugging process. The FSM design paradigm also forces modularity and reduces coupling between components of the program. The effect of this was increased stability, which reduced the occurrence of unexpected side effects from tweaking the behaviour of the robot.

A series of states was devised to describe the behaviour of the robot as it explored the unknown environment. The one condition required for the controller to perform properly was that control must be initiated with the robot located parallel and close to one of the arena walls. This condition was satisfied during the robot initialisation procedure described in Section 7.2. A UML finite state diagram describing the function of each state and the state transition conditions is located in Appendix B.

Roaming

The roaming state controls the behaviour of the robot as it travels towards a given target coordinate. When the controller is in the roaming state, the pose of the robot is continuously inspected and compared with the position of the target. Based on this information, the controller determines the appropriate drive commands required to transport the robot to the target and executes these commands.

Additionally, a continuous sweep of the area in front of the robot is performed by the 2D scanner. In the event that an obstacle is detected by the scanner in the window in front of the robot, calculations based on the current robot velocity are performed to determine when the collision between robot and obstacle is expected to occur. Using this time to expected collision, an avoidance maneuver is scheduled and the scanner remains continuously searching for additional obstacles. This implementation gives the robot a short term memory of predicted collisions, meaning that an object does not need to remain in the scanner's field of view for it to be avoided. This allows the robot to approach multiple objects head on and compute which of these is of highest priority based on its distance from the robot. This is demonstrated in Figure 12. Once the controller determines that a scheduled collision is due to occur, the robot comes to a halt and a transition to the 'observing' state is made in order to perform further analysis on the obstacle.

Figure 12 Visualisation of Collision Detection Process: (a) time = t, an obstacle is detected within the robot's collision window and an avoidance maneuver is scheduled; (b) time = t + n, a second obstacle is detected with a greater priority and the avoidance maneuver is rescheduled.
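The scheduling logic amounts to converting each sighting into a predicted time of collision and keeping only the most urgent one; a minimal sketch (illustrative names, not the report's code):

float scheduled_collision_time = 1e9f;   // sentinel: no collision scheduled

void scheduleCollision(float obstacle_distance_cm, float robot_speed_cms,
                       float now_s)
{
    if (robot_speed_cms <= 0) return;
    float eta = now_s + obstacle_distance_cm / robot_speed_cms;
    // A nearer obstacle produces an earlier ETA and so takes priority
    if (eta < scheduled_collision_time)
        scheduled_collision_time = eta;
}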

Observing

The ‘observing’ state is used to perform a detailed inspection of an obstacle once it has been detected. While the robot is in this state, it performs a full scan of the area directly in front of it. The scan data is then processed using the previously described RANSAC algorithm to determine whether the detected object is a wall or an obstacle. If a wall is detected, the robot transitions to the wall aligning state and performs reorientation. Otherwise, the controller determines the dimensions of the object as measured by the scanner and then transitions to the ‘avoiding’ state to execute an avoidance maneuver.

Avoiding

The purpose of the ‘avoiding’ state is to determine the most appropriate maneuver for avoiding a detected object and then execute that maneuver. Whilst executing in the avoiding state, the robot calculates the distance and direction required to strafe around the object detected during the ‘observing’ state. A check is performed using the laterally mounted IR range sensors to ensure that it is safe to perform the maneuver. Finally, the strafe is executed and the controller returns to the ‘observing’ state to verify that the path in front of the robot is now clear.

Passing

In order to ensure that the robot has fully passed an obstacle before attempting to return to the target path, the ‘passing’ state was implemented. During this state, similar collision detection logic is included as is present in the ‘roaming’ state. However, additional functionality checks the laterally facing sensors to ensure that the robot does not attempt to return to its target path before completely clearing the obstacle. Once the robot has passed an obstacle, the controller returns to the ‘roaming’ state.

Wall Aligning

During execution in the ‘wall aligning’ state, the controller reorients the robot parallel to the wall detected during the previous ‘observing’ state. Once a rough alignment has been executed using an odometry-based estimate, the controller utilises the wall alignment PID described in Section 6.2 to perform fine adjustments to the robot's alignment. Upon exiting this state, the robot transitions to the ‘observing’ state to ensure the path is clear before continuing.

Changing Wall

To travel the desired trajectory as determined by the trajectory planning module, it is often required that the robot performs a 90° turn and ends up parallel to one of the arena walls. This is particularly challenging when the robot nears the centre of the arena, as scanned data of the wall is likely not available to determine the correct turning angle. The ‘changing wall’ state causes the robot to perform an odometry-based 90° turn, which is then further corrected using sensor data if the necessary sensors are in range. Upon completion, a transition is made to the ‘observing’ state to ensure the path is clear before continuing.


Trajectory Planning

Figure 13 Expected Robot Path (inwards spiral from START to FINISH)

Full area coverage was ensured by dynamically creating an inwards spiral trajectory plan for the robot. An example ideal path generated by this algorithm is given in Figure 13. The algorithm uses the onboard odometry calculations to measure the length and width of the arena during the first two stages of navigation, where a navigation stage is defined as the process of navigating parallel to a single wall of the arena. In Figure 13, the first two stages refer to the paths in which the robot travels directly alongside Wall A and then Wall B. Using the measured dimensions of the arena, a target (x, y) coordinate was then generated for each subsequent navigation stage. The spiral pattern was generated by increasing the distance of the target coordinate from the edge of the arena by one robot width upon the completion of each circuit of the arena. A secondary finite state machine was implemented for the purpose of transitioning between stages and automatically generating the spiral trajectory. A code snippet detailing the state transitions is given in Program 7.

Program 7 Secondary FSM For Spiral Generation

static int circuits = 0;
if (_stage == WALL_A)
{
    // We want to maintain the current x position and travel along the y direction
    _target_x = pose[0];

    // If we don't yet know the width of the arena, set the target large enough
    // that the robot will encounter a wall before it reaches the target.
    if (_arena_width == 0)
        _target_y = 99999;
    else
        _target_y = _arena_width - SPIRAL_OFFSET * circuits;
    // Transition to WALL_B
    _stage = WALL_B;
}
else if (_stage == WALL_B)
{
    _target_y = pose[1];
    _target_x = SPIRAL_OFFSET * circuits;
    _stage = WALL_C;
}
else if (_stage == WALL_C)
{
    circuits++;
    _target_x = pose[0];
    _target_y = SPIRAL_OFFSET * circuits;
    _stage = WALL_D;
}
else if (_stage == WALL_D)
{
    _target_x = _arena_length - SPIRAL_OFFSET * circuits;
    _target_y = pose[1];
    _stage = WALL_A;
}

// Here we initialise arena length and width using odometry data
if (_arena_length == 0)
    _arena_length = pose[0];
else if (_arena_width == 0)
    _arena_width = pose[1];

7.4 Occupancy Grid

An occupancy grid was implemented to keep a continuous log of the robot's position with respect to world coordinates. The grid consisted of a dynamic array of unsigned 8-bit characters, where each set of two bits was used to represent a 60×60mm square in the physical world. Using two bits to represent a physical area instead of one byte in the array reduced the memory needed to store the grid and allowed for a higher grid resolution. Figure 14 shows how each grid space consists of two bits: the first bit indicates whether the grid position contains an object, and the second bit indicates whether the robot has covered that space. The robot is 200×200mm and so is represented by a 3×3 square of grid blocks.

The grid was initialised by setting the size of the grid and the starting position of the robot at coordinate (0, 0). During operation, the robot's current position is updated on the grid. In the case of a discontinuous path being generated as a result of infrequent position updates, the grid interpolates the available position data to create a continuous path between the previous and current positions of the robot.
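The interpolation step can be sketched as a standard line-drawing pass over the grid. The following Bresenham-style loop is illustrative only; markCovered() is a hypothetical setter, not the project's actual function.

#include <stdlib.h>

// Sketch only: mark every grid cell on the straight line between the previous
// and current grid positions, so that infrequent pose updates still leave a
// continuous covered path behind the robot.
void markCovered(int x, int y);   // assumed helper that sets a cell's covered bit

void interpolatePath(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;                        // Bresenham line algorithm
    while (true)
    {
        markCovered(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}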

By using the grid to store information regarding obstacle locations, it becomes possible to avoid obstacles that are no longer in the field of view of the robot's sensors.

The status of any grid square relative to the centre of the robot can be accessed at any time. If the grid space being checked contains the location of an object then 2 is returned; if the robot has travelled through the space then 1 is returned; otherwise 0 is returned. Figure 16 shows a grid that has been visualised using the graphical interface.
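The two-bit packing and the 0/1/2 query can be illustrated as follows. This is a sketch under assumed names and bit layout; the report specifies only that four cells pack into each byte, with one object bit and one covered bit per cell.

#include <stdint.h>
#include <stdlib.h>

// Sketch only: 2-bits-per-cell occupancy grid. Within each cell, bit 0 flags
// an object and bit 1 flags coverage; four cells are packed into each byte.
static uint8_t *grid;      // dynamic array, four cells per byte
static int gridWidth;      // number of cells per row

void initGrid(int width, int height)
{
    gridWidth = width;
    grid = (uint8_t *)calloc(((long)width * height + 3) / 4, 1);
}

void setCell(int x, int y, uint8_t objectBit, uint8_t coveredBit)
{
    long cell = (long)y * gridWidth + x;
    uint8_t shift = (cell % 4) * 2;                // position within the byte
    grid[cell / 4] &= ~(uint8_t)(0x3 << shift);    // clear the cell's two bits
    grid[cell / 4] |= (uint8_t)(((coveredBit << 1) | objectBit) << shift);
}

// Returns 2 if the cell contains an object, 1 if the robot has covered it,
// otherwise 0: the query semantics described above.
uint8_t queryCell(int x, int y)
{
    long cell = (long)y * gridWidth + x;
    uint8_t bits = (grid[cell / 4] >> ((cell % 4) * 2)) & 0x3;
    if (bits & 0x1) return 2;    // object bit set
    if (bits & 0x2) return 1;    // covered bit set
    return 0;
}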

Figure 14 Grid Layout (detected objects are marked by the bit pairs {1,0} and {1,1})

From Figure 14 it is easy to see how the robot knows what area it has covered. Reading from the grid squares lets the robot see where it has yet to cover and plan its trajectory accordingly.

8. Graphical Interface

Bluetooth provides a wireless serial connection between the robot and an external computer. The robot is completely independent and so cannot receive information from the computer; however, the computer can read data from this connection to create a visualisation of the robot's position.

The robot keeps a running record of its current position, as well as a grid containing information about the total explored area. This information can be transferred over the Bluetooth connection and used to create a plot of the robot's location or the total explored area.

MATLAB was used to read from the robot and manipulate the incoming information, as it can grow arrays dynamically while simultaneously producing clear plots of the incoming data.

8.1 Initialise Connection

To pass data via Bluetooth, the mobile device must send the data via Serial1 at a baud rate of 115200. The data sent over Serial1 is stored in a FIFO input buffer. The information in the input buffer can be read from an external computer in the order in which it was written.
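On the robot side, transmission reduces to printing over Serial1. The sketch below shows how pose updates might be written; the "POS" indicator string, the comma-separated layout and the update rate are assumptions for illustration, not the project's exact protocol.

// Sketch only: robot-side transmission of pose updates over the Bluetooth
// serial link (Serial1 at 115200 baud). The "POS" indicator string and the
// field layout are illustrative assumptions.
float pose[2];                  // global (x, y) maintained by the odometry

void setup()
{
    Serial1.begin(115200);      // Bluetooth module on Serial1
}

void sendPose()
{
    Serial1.println("POS");     // indicator string checked by the reader
    Serial1.print(pose[0]);     // global x coordinate
    Serial1.print(",");
    Serial1.println(pose[1]);   // global y coordinate
}

void loop()
{
    // ...odometry updates pose[] elsewhere...
    sendPose();                 // transmit whenever the position is updated
    delay(100);                 // illustrative update rate
}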

MATLAB Instrument Control Toolbox contains functions to check the availability of devices and create a Bluetooth object for a specific device based on its status. Once a Bluetooth device object has been created, communications can be opened whenever the device is available. Once communications are open, data can be read from the input buffer using the fscanf command.

If no data has been placed in the input buffer, fscanf will produce an error when run. The BytesAvailable property is therefore used to wait for information to be available before MATLAB attempts to read it.

8.2 Position Tracking

The robot keeps track of its current position in terms of global x and y coordinates. These positions are sent over Bluetooth to MATLAB as soon as they are updated. MATLAB adds the x and y coordinates to an array of values and can then plot the values dynamically.

The data read is first compared with an indicator string. When the comparison succeeds, the next data transferred over Serial1 will be the robot's next position. The drawnow function is used to update the plot with every iteration. A plot of the robot's position at two points during a practice run, with start and end points added as a visual aid, is shown in Figure 15.

Figure 15 Position Visualisation ((a) and (b) show the plotted position at two points during the run)

8.3 Coverage

During the robot's operation it is very difficult to check the status of the grid: printing the entire grid over the standard Serial connection produces a large array of numbers in which it is hard to see exactly what is happening. Bluetooth was therefore used to visualise the grid.
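As with the pose data, the grid itself must be streamed over Serial1 before it can be plotted. A possible robot-side dump is sketched below; the "GRID" indicator, the row-per-line layout and the queryCell() helper are assumptions carried over from the earlier sketch.

// Sketch only: stream the coverage grid over Bluetooth, one row per line,
// so that the receiving side can rebuild it as a matrix for plotting.
uint8_t queryCell(int x, int y);    // assumed helper returning 0, 1 or 2

void sendGrid(int width, int height)
{
    Serial1.println("GRID");        // indicator string marking a grid dump
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            Serial1.print(queryCell(x, y));
            if (x < width - 1) Serial1.print(",");
        }
        Serial1.println();          // end of one grid row
    }
}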

Using the surf command in MATLAB, an accurate representation of the grid is plotted. Figure 16 shows the grid status for a practice run of the robot. Points with a height of one are positions that the robot has visited; the unexplored area is represented by a height of zero.

From Figure 16 the path of the robot is more apparent, making it easier to find the reasons why the robot turned when there was no apparent cause.

9. Difficulties

9.1 Hardware Faults

9.1.1 Blown Regulator in Board

During testing it was found that all IR sensors were giving incorrect values. After hours of re-configuration and troubleshooting it was found that the voltage regulator on the Arduino board had blown, resulting in the 5V rail outputting only 2V. This caused major errors in the sensor readings.

This resulted in a severe reduction in productivity the day before the project demonstration. It is believed that the fault in the board was the result of connecting the sonar sensor to the Arduino board incorrectly. The sonar was subsequently removed to avoid further faults, as it played a very minor part in the project code.

Figure 16 Grid Visualisation

9.1.2 MPU Crashing

Testing of the final software design revealed that the MPU would sometimes crash during operation. This froze the robot's functionality, as it would wait for MPU data that would never come. The MPU failure was unpredictable: sometimes it occurred very close to the beginning of the robot's run, and sometimes the MPU did not crash at all during an entire run.

Restricted by the small amount of time left before the project deadline, the fault causing this crash could not be debugged; instead, the MPU was removed from operation. Odometry was then used to track the angular position entirely, which led to a gradual drift in position. In an attempt to reduce the error from the odometry, the robot performed wall alignment at every turn to make sure it was in the correct position. This heavily increased the operation time but prevented collisions with the edge of the arena.

9.2 Original Obstacle Avoidance Approach

The idea behind this approach was to implement obstacle avoidance first and then add in trajectory planning based on a map of areas of the arena that the robot had already traversed. The obstacle avoidance was implemented as an individual module without any consideration of which trajectory planning method would be used. Once obstacle avoidance was successfully achieved, it came time to choose a method of trajectory planning.

Several different trajectory planning methods were considered, including optimal view path planning and an A* algorithm [6, 7]. The “spiral inwards” method was chosen because the existing obstacle avoidance algorithm could easily be adapted to include it, and it has been shown to be an effective method [8].

However, problems were encountered when trying to adapt the obstacle avoidance algorithm to include trajectory planning. The main reason for these problems was that the algorithm had no way of differentiating between objects and walls.


The obstacle avoidance algorithm utilised the IR sensors placed on the front and sides of the robot. It was designed to make the robot traverse the arena in a clockwise direction without colliding with any objects it encountered. When an object was encountered, the robot would turn clockwise until the object was no longer in front of it and then continue travelling forwards.

This was achieved by setting a trigger distance for each of the sensors. If the reading returned from one or more of the IR sensors was less than the specified trigger distance, the program would run through a series of cases and decide on what action to perform. Which action was chosen depended on the combination of sensors that had been triggered.
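In outline, the decision logic amounted to a chain of cases over the triggered sensors, along the lines of the following sketch. The sensor helpers, trigger values and action functions are illustrative assumptions, not the project's actual code.

// Sketch only: trigger-distance decision cases. The helpers and trigger
// values below are assumptions for illustration.
float readFrontIR(); float readLeftIR(); float readRightIR();   // distances (cm)
void turnClockwise(); void strafeLeft(); void strafeRight(); void driveForward();

const float FRONT_TRIGGER = 20.0f;   // cm
const float SIDE_TRIGGER  = 10.0f;   // cm

void avoidanceStep()
{
    bool front = readFrontIR() < FRONT_TRIGGER;
    bool left  = readLeftIR()  < SIDE_TRIGGER;
    bool right = readRightIR() < SIDE_TRIGGER;

    if (front)
        turnClockwise();     // turn until the object is no longer ahead
    else if (left)
        strafeRight();       // ease away from an object on the left
    else if (right)
        strafeLeft();        // ease away from an object on the right
    else
        driveForward();      // path clear: continue forwards
}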

The front IR sensors were mounted on a servo motor to enable a wider range of sight ahead of the robot. The angle of the servo motor was changed every loop iteration so that the front sensors were constantly scanning from side to side in search of potential obstacles.

If an obstacle was detected, the angle of the servo motor was altered so that the front sensors tracked the object as the robot moved away from it. This enabled the robot to ensure that it had successfully avoided the object before it continued scanning for other objects.
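The sweep-and-track behaviour can be sketched with the standard Arduino Servo library; the pin number, sweep limits and step size are assumptions rather than the project's values.

#include <Servo.h>

// Sketch only: sweep the front-sensor servo each loop iteration, or hold it
// on the last angle at which an obstacle was seen while tracking.
Servo frontServo;
int angle = 90;           // current servo angle, degrees
int sweepStep = 5;        // increment per loop iteration
bool tracking = false;    // true while an obstacle is being followed
int trackAngle = 90;      // angle at which the obstacle was last detected

void setup()
{
    frontServo.attach(9);                 // assumed servo pin
}

void loop()
{
    if (tracking)
    {
        frontServo.write(trackAngle);     // hold the sensors on the obstacle
    }
    else
    {
        angle += sweepStep;
        if (angle >= 150 || angle <= 30)  // reverse at the sweep limits
            sweepStep = -sweepStep;
        frontServo.write(angle);          // continue side-to-side scanning
    }
    delay(50);                            // allow the servo to settle
    // ...read the front IR sensors at this angle and update `tracking`...
}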

In addition to this, if an obstacle was detected by the front sensor while it was positioned to the left, the robot would scan for objects to the right before moving. This prevented the robot from colliding with one object while moving away from another.

In order to make the robot cover the entire arena by spiralling inwards, the code explained above had to be modified so that instead of just turning clockwise to avoid an obstacle, the robot would go around the obstacle, return to the wall and then continue forwards. The occupancy grid described in Section 7.4 would then be used to record areas where the robot had already been, and these areas would be treated as walls.

This is where problems were encountered. Because all decisions in the code were made using only trigger distances, it was almost impossible to make the robot react differently to obstacles and walls.

It was apparent that this approach would not be successful, and as a consequence the approach was changed to one which implemented obstacle avoidance and trajectory planning together.

Since changing the approach, it has become clear that even if the problem of differentiating between walls and obstacles using trigger distances alone could have been solved, further problems would still have been encountered. This is because the above approach would have relied heavily on odometry, both to populate the map with where the robot had been and to accurately estimate the robot's current position in the arena. In many cases, the obstacle avoidance code made the robot perform short, sharp movements at varying speeds, which would have made the odometry readings far less accurate.

10. Results

Figure 17 shows the coverage of the robot during its final run. The red circles show the positions of the obstacles; the black circles show the centre of a tracking paper placed on the back of the robot during the run. The robot started in the centre of the environment and finished in a similar position.

False readings from the IR sensors caused strafing during operation, slowing down the run time. This can be seen in the top left corner of Figure 17.


Figure 17 Robot Path

Aligning to a wall after a change in orientation took up the most time during the run. The time taken for the alignment to reach an acceptable error depended heavily on the initial error: a large error was easily resolved, whilst a small error had to wait for the increasing integral component to trigger movement.

Figure 17 does not give an accurate representation of the number of collisions: no collisions occurred during the final run. This was expected given the scanning range and accuracy of the device.

Overall, the mobile robot performed with above-average results, achieving no collisions and acceptable coverage. Aside from a relatively slow completion time, this was considered a successful robot design.


11. Conclusions and Recommendations

Reflecting on the project outcomes, it is apparent that the completed system was negatively impacted by attempting to solve problems that were outside of the project scope. Valuable time was spent implementing environment mapping and performing advanced trajectory planning in a venture aiming to solve the full SLAM problem. While the project was presented as if a full SLAM implementation was required for success, this was not the case. Consequently, much time was devoted to these aspects without a corresponding improvement in robot performance during the presentation. Full mapping and continuous, accurate localisation are important aspects of full SLAM projects, which may operate in entirely unknown, often variable and hazardous environments. However, it is not necessary for a robot operating with simple obstacle avoidance requirements in mostly known conditions to perform such complex processing routines.

Faster and more complete coverage could have been achieved if purely reactive control had been implemented, as this would have allowed for a greater focus on tuning the robot parameters to the given field. Alternate coverage routines, such as a straight back-and-forth movement, could also have been advantageous, as this would have allowed more opportunities for the robot to realign with the field perimeter throughout the later stages of its run.

Overall robot performance would benefit from the use of additional hardware. An example of this is the lack of odometry accuracy and the resulting workarounds that were required. This issue could be largely mitigated by the introduction of rotary encoders on the drive shafts to provide closed-loop feedback on wheel rotation.

This being said, the designed system met the basic requirements for both coverage and obstacle avoidance. The control system provided the basis for a robust and well-designed area coverage robot. With further development, the overall cleaning system could be extended into a reliable and adaptable product with wide market appeal.


References

[1] Riisgaard, S., and Blas, M. R. (2003). SLAM for Dummies: A Tutorial Approach to Simultaneous Localization and Mapping, 22(1-127), p. 126.

[2] Siciliano, B., and Khatib, O. (2008). Springer Handbook of Robotics. Springer Science & Business Media.

[3] Thrun, S., and Leonard, J. J. (2008). “Simultaneous localization and mapping”. In Springer Handbook of Robotics. Springer, pp. 871–889.

[4] Kohanbash, D. (2014). Kalman Filtering: A Practical Implementation Guide. Retrieved June 07, 2015 from: http://robotsforroboticists.com/kalman-filtering/.

[5] Doroftei, I., Grosu, V., and Spinu, V. (2007). Omnidirectional Mobile Robot: Design and Implementation. INTECH Open Access Publisher.

[6] Haner, S., and Heyden, A. (2011). “Optimal view path planning for visual SLAM”. In Image Analysis. Springer, pp. 370–380.

[7] Duchon, F., Babinec, A., Kajan, M., Beno, P., Florek, M., Fico, T., and Jurišica, L. (2014). Path Planning with Modified a Star Algorithm for a Mobile Robot. Procedia Engineering, 96, pp. 59–69.

[8] Waanders, M. (2011). “Coverage Path Planning for Mobile Cleaning Robots”, 15th Twente Student Conference on IT, The Netherlands.


Appendix A IR Sensor Calibration

A.1 Testing of Raw Data

Each sensor was placed a fixed distance from a stationary object and a set of unfiltered readings was taken. The results for each sensor are shown in Figure A1.

Figure A1 Raw Sensor Data: (a) Long IR Sensor, (b) Medium IR Sensor, (c) Short IR Sensor. Each panel plots measured distance (cm) against reading number, comparing the instantaneous sensor reading with the reference position.

A.2 Effect of a Capacitor

A medium IR sensor was placed a fixed distance from a stationary object. A 100µF capacitor was added to the sensor and sets of unfiltered and filtered readings were taken. The smoothing effect of the capacitor is shown in Figure A2.

Figure A2 Effect of Capacitor: (a) No Capacitor, (b) 100µF, (c) 100µF Direct. Each panel plots measured distance (cm) against reading number, comparing Kalman filtered data and raw data with the reference position.

A.3 Effect of Kalman Filter

A medium IR sensor with a 2200µF capacitor attached was placed at a distance from a stationary object. The sensor was then moved further from the object, held there for a short time, and then moved back.

The filter was tuned by changing the values for sensor and control variance in order to achieve the best compromise between smoothing and swiftness of response. While a large value for sensor variance resulted in good noise rejection, it caused a significant delay between movement and sensor response; this effect is shown in Figure A3a. When this variance was reduced and combined with an appropriate control variance, the output shown in Figure A3b was obtained.
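For reference, a scalar Kalman filter of the kind being tuned here can be sketched as follows. The names, structure and values are illustrative assumptions; this is not a reproduction of the project's Section 5.6.2 implementation.

// Sketch only: a 1D Kalman filter for a single distance sensor. processVar
// plays the role of the "control variance" and sensorVar the "sensor
// variance" discussed above; the values are illustrative.
struct Kalman1D
{
    float estimate;     // current filtered distance (cm)
    float errorCov;     // estimate error covariance
    float processVar;   // trust in the motion model (control variance)
    float sensorVar;    // expected sensor noise (sensor variance)
};

float kalmanUpdate(Kalman1D &k, float measurement)
{
    // Predict: carry the estimate forward; uncertainty grows by processVar.
    k.errorCov += k.processVar;

    // Update: blend prediction and measurement using the Kalman gain.
    float gain = k.errorCov / (k.errorCov + k.sensorVar);
    k.estimate += gain * (measurement - k.estimate);
    k.errorCov *= (1.0f - gain);
    return k.estimate;
}

// A large sensorVar relative to processVar smooths heavily but lags behind
// real movement (Figure A3a); reducing it restores responsiveness (Figure A3b).
Kalman1D filter = {50.0f, 1.0f, 0.05f, 4.0f};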

Figure A3 Effect of Kalman Filter: (a) Initial Implementation, (b) Tuned Filter. Each panel plots measured distance (cm) against reading number, comparing Kalman filtered data with raw data.

Appendix B Finite State Machine

Figure B1 Summary FSM (modules: SLAM Robot, Grid, Sensor, Drive, Avoidance and Trajectory, MPU, Scanner, RANSAC, Initialisation, Battery)

The detailed FSM comprises the following states and actions:

Observing
do / Scan surroundings; perform RANSAC to check for walls; check for [non-wall] obstacles

Initial Roaming
do / Incrementally scan area in front of robot; check if scanned point is in window of interest; schedule avoidance maneuver; check if in line with target position (x, y); send control command to drivetrain

Avoiding
entry / Check obstacle position and side sensors; decide on avoidance strategy; calculate distance to strafe around object
do / Check side sensors; perform avoidance strafe until desired distance reached

Passing Obstacle
entry / Calculate initial estimate of distance to pass obstacle
do / Continuously scan area in front of robot for obstacles; check side sensors for obstacles; if an obstacle is found, re-calculate distance required to pass

Wall Aligning
entry / Turn to wall angle using odometry estimation (rough alignment); use PID control with sensors to perform fine alignment
do / Strafe out to desired distance from wall

Change Wall (at distance from the wall)
do / Turn 90 degrees using odometry estimation; check if sensors are in range with a wall; attempt to perform PID alignment to wall if sensors are in range; update target coordinate for next wall

Finished

Transitions between states include: no obstacle or wall found; obstacle observed; wall detected; collision detected; robot has reached centre of arena; obstacle detected in front of robot; obstacle successfully passed; target coordinate reached; wall alignment complete; reorientation of robot complete; obstacle observed to left or right of robot; avoidance maneuver completed.

Figure B2 Detailed FSM