

Computer Vision applied to autonomous robots using Raspberry Pi

Daniel Jimenez Mendoza
Universitat Autonoma de Barcelona

Index Terms—Computer vision, Robotics, Remote sensing, Motion, Image Processing, Raspberry Pi


Abstract—The aim of this project is to discover new techniques for combining software and hardware components in an autonomous robot. The appearance of cheap computational modules like the Raspberry Pi, together with cheap high-resolution cameras, makes it possible to combine both for computer vision. This goal has been achieved by exploring ways of performing object detection and navigation control using computer vision and low-cost computing devices, and by sending the resulting path over a fast wireless network. This would favor the adoption of new low-cost autonomous robots in people's everyday life. The system I propose in this paper is based on three hardware devices: an actuated robot with a Raspberry Pi, a second device equipped with another Raspberry Pi and a web camera, and a smartphone running a user control app. The result is a fully working navigation system. This project was developed in the context of an educational final project and is also intended to be reused as practicum material for future students.

1 INTRODUCTION

THROUGHOUT the 20th century, the use of autonomous robots has been increasing.

These devices attempt to analyze the environment and interact with it to achieve a concrete action, usually assigned by a human. To model and digitize the environment, they are equipped with different types of sensors, depending on the use. Computer vision, on the other hand, tries to analyze the environment through image processing, using only image capture. The increase in processing power and the appearance of new technologies are making it possible for this technique to be applied to autonomous robots.

These robots are making headway today with the appearance, for example, of autonomous vehicles, drones and security systems which implement advanced computer vision to improve people's lives. Armies and other large organizations have already announced their interest in this technology, with the intention of using it in their missions and in long-term research as well.

Given the high price of the most common sensors such as radar, IR or laser, the aim of this project is to create an autonomous robot using computer vision as the only sensor of the system. The high cost of these sensors and devices currently allows only very few organizations, agencies and institutions to access them, hindering research and delaying their incorporation into society. The appearance of low-cost computational modules such as the Raspberry Pi is perfect for this kind of project. The combination of hardware and software in this device is designed to be adaptable and really easy to use. Using this device, production costs would be lower, allowing such robots to be manufactured massively. This would result in devices able to improve human-machine interaction using artificial intelligence, for example guiding robots for blind people or managing the traffic of autonomous vehicles at city intersections. The action robot itself will be sensorless; to cover the absence of traditional sensors, there will be an independent vision module located in the scene that will be responsible for calculating, coordinating and submitting the movements of the robots.


The Computer Vision Center (CVC) of the Universitat Autonoma de Barcelona is in charge of research projects related to robotics, computer vision and the computational field in general. This project is part of a final degree project, and it is designed to be highly modular so it can be reused as practicum support material for the Robotics subject of the engineering school at the UAB. In particular, this project consists of three hardware modules (a robot, an image capturing device and a smartphone) and four software modules.

This is the structure of the document: first I explain the system architecture, the work diagram, the methodology of the project and the different modules involved. Then I describe the results and conclusions. Lastly, I end with the acknowledgments and bibliography.

2 METHODOLOGY

2.1 System Architecture

As outlined above, this autonomous navigation system is able to generate a route between two points (Pstart and Pgoal). Figure 1 shows the system scheme.

For the realization of this project three devices will be used: the first one (Pluto from now on) will be equipped with only two wheels (provided by the CVC) and a Raspberry Pi. The second one (Zeus from now on) is equipped with a webcam and another Raspberry Pi. The third device will be an Android phone running an application to control the entire system.

These hardware devices will work together in this way: Zeus will capture the image of the scene, trace a route from Pstart to Pgoal using computer vision, and send it wirelessly to Pluto. I decided to do it this way because of the scalability of the entire system: a second robot can be added without modifying the entire scheme, for example.

To implement this scheme I propose the creation of four software modules:

• Action Module: This module will be responsible for managing the control actuators of Pluto (its wheels and odometry).

• Vision and computing module: This module will be responsible for capturing the scene, analyzing it by detecting the obstacles using image processing, and calculating the path.

• Communication module: This module will be responsible for communicating the three devices together wirelessly. This project will be an offline system: after capturing the image and processing it once, Zeus ceases to have any practical effect on the system.

• User Control module: This module will be responsible for performing the overall control of the system. It will be implemented as an Android app connected to Zeus.

Due to the modularity of the system, I could work on the modules independently without coupling risk. This made the development faster. There is a final step, which is to check every module independently to verify its proper operation, and then connect them together and ensure the proper operation of the whole system.

Fig. 1. General system scheme: (A) Zeus, equipped with a camera; (B) Pluto, connected to Zeus wirelessly; (C) Markers to define the workspace; (D) Obstacles to avoid; (E) Android device.


Fig. 2. Flowchart between the Android device, Zeus and Pluto

Fig. 3. Main diagram of the system


2.2 Action Module

This module is the one responsible for giving movement to the autonomous robot. It is an embedded hardware system which corresponds to Figure 4. It comprises an MD25 kit [2], which includes two wheels, encoders and a board to process all the odometry. It is also equipped with a Raspberry Pi to make the wireless communication with Zeus possible. Due to the reduced size of all the components, I installed all of them inside a Tupperware container (making it a "tupperbot"), resulting in a lightweight robot.

Fig. 4. Pluto diagram: (A) Wheels; (B) MD25 wheel controller board; (C) Raspberry Pi; (D) Non-actuated wheel.

I added a third, non-actuated wheel at the center to give more stability to the robot. Both wheels can go forwards or backwards independently, and the robot can rotate about its origin. This allows the robot to follow a piecewise linear path.

2.2.1 Sending commands to the wheels

The MD25 board is responsible for managing the control of the motors. The board lets you select which type of communication to use with the wheels, I2C or Serial, so I selected the simpler one, I2C. This digital communication is very common in this kind of device. The advantages of I2C over Serial are that it is faster and that it can send or receive data to specific devices when more than one is connected. In order to connect the Raspberry Pi with the MD25 control board, I wired them as shown in Figure 5.
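
As a rough illustration of this idea in software, the following sketch uses the standard Python smbus module (which the Adafruit library builds on) to set the two motor speeds over I2C. It is a minimal sketch, not the project's library: the register addresses and the default address come from the MD25 documentation [2] and should be checked against the datasheet for your board revision.

# Minimal sketch of driving the MD25 over I2C from a Raspberry Pi.
import smbus
import time

MD25_ADDRESS = 0x58   # default 7-bit I2C address of the MD25
REG_SPEED1   = 0x00   # speed register for motor 1
REG_SPEED2   = 0x01   # speed register for motor 2

bus = smbus.SMBus(1)  # use SMBus(0) on very early Raspberry Pi revisions

def set_speeds(left, right):
    # In the default mode, speeds are 0..255: 128 = stop,
    # 255 = full forward, 0 = full reverse.
    bus.write_byte_data(MD25_ADDRESS, REG_SPEED1, left)
    bus.write_byte_data(MD25_ADDRESS, REG_SPEED2, right)

# Drive forward for one second, then stop.
set_speeds(180, 180)
time.sleep(1.0)
set_speeds(128, 128)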

I started by using the Raspberry Pi's GPIO ports (its input and output pins). I found a library [8] by Adafruit (a well-known Raspberry Pi distributor) for working with the I2C


Fig. 5. Connection between the Raspberry Pi and the MD25 board.

protocol, so it was not difficult to get them working together. I had to calibrate the encoders to make the robot go forward and turn exactly as I wanted. To do that, I first commanded the robot to go forward by a predetermined value. Then I measured how far the robot had actually moved. This gave me a ratio that I can now apply whenever I want the robot to go forward a specific distance. This calibration depends on the physical features of the robot (mainly the wheel diameter and the distance between the wheels), but it can be extrapolated to robots with similar characteristics. For this purpose, I wrote a Python library (built on Adafruit's I2C library) to control the MD25 board from a Raspberry Pi.
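
The calibration step can be summarized with a small sketch; the numbers below are placeholders for illustration, not the values actually measured for Pluto.

# Calibration idea: command a known number of encoder counts, measure
# the real displacement, then reuse the resulting counts-per-cm ratio.
COMMANDED_COUNTS = 1000       # counts sent during the calibration run
MEASURED_DISTANCE_CM = 36.5   # distance measured with a ruler (example value)

COUNTS_PER_CM = COMMANDED_COUNTS / MEASURED_DISTANCE_CM

def counts_for_distance(distance_cm):
    # Encoder counts needed to travel a given distance in cm.
    return int(round(distance_cm * COUNTS_PER_CM))

print(counts_for_distance(50))  # counts for a 50 cm straight segment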

2.3 Vision Module

This module is completely embedded in Zeus and consists of a Raspberry Pi and a compatible webcam. Much of the processing of the system lies in this module. It is responsible for interpreting the scene from an image, detecting the obstacles and generating a path from Pstart to Pgoal. This module can be further decomposed into a few sub-modules:

• Image Acquisition: This module will capture the image of the scene.

• Object Detection: Responsible for detecting the obstacles in the scene.

• Markers Detection: Responsible for detecting the black markers on the ground and establishing the work area.

Fig. 6. Detailed diagram of the vision module

• Pstart and Pgoal Detection: Responsible for detecting the start and end points of the path in the scene.

• Image Composition: After all the detections, this module combines them to set up the workspace.

• Path-Finding: Responsible for tracing a path from point A to point B.

• Encoding Module: Encodes the path resulting from the PRM so it can be sent to Pluto.

2.3.1 Object Detection

I only consider the gray-level image. This reduces the number of layers to analyze without losing much information. For this reason, the background is white and the foreground objects are black. We apply a threshold to the image to separate them. The threshold level was chosen by testing different values and selecting the one that provided the best result.


After this first thresholding there were remaining artifacts (elements which were not obstacles but still remained in the image), mainly the wheels of the robot (which were also black) and scene furniture. To remove them I used two image-processing techniques called opening and closing [7]. Opening is a morphological operation which reduces the noise of the image given a structuring element: it removes small objects from the foreground by placing them in the background. The result of the opening is the same image without the artifacts, but it makes the objects smaller. Closing is another morphological operation which, in this case, restores the original size of the obstacles. By doing this, I was able to keep only the obstacles to avoid. To account for the physical size of the robot and prevent Pluto from crashing into obstacles while executing the path, I had to use another image-processing technique called dilation. This makes the detected obstacles appear bigger than the real obstacles. With this operation I am defining the Qfree space (all available configurations of the robot). To define how much the objects must be oversized, I must know the physical dimensions of the robot. Measuring it, the minimum radius is 15 cm, so the objects must be oversized by 15 cm. To express these 15 cm in pixels I used the ratio calculated when detecting the markers (explained in the next subsection); this ratio is then applied to the dilation kernel.
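
A minimal OpenCV sketch of this pipeline is shown below. The threshold value, the file name and the pixels-per-cm ratio are illustrative placeholders; the obstacles are put into the foreground (white) of the binary mask so that the morphological operations act on them.

# Sketch of the obstacle-detection pipeline using OpenCV.
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical captured frame

# Binarize: obstacles are dark on a white background, so invert the threshold.
_, binary = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)

# Remove small artifacts (opening), then restore the obstacle size (closing).
kernel = np.ones((7, 7), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

# Oversize obstacles by the robot radius (15 cm) to obtain the Qfree space.
px_per_cm = 3.0                          # placeholder from the marker calibration
radius_px = int(round(15 * px_per_cm))
dilate_kernel = np.ones((radius_px, radius_px), np.uint8)
obstacles = cv2.dilate(closed, dilate_kernel, iterations=1)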

2.3.2 Markers Detection

To determine the robot's operating area, I used two black square (2x2 cm) cardboard markers located in opposite corners. To recognize them, I used a technique called template matching [4]. Given a marker template, the function returns all the (x, y) points where it detects the template. Usually this function returns many points (not only two), so I had to use a clustering method to clean it up. I chose a simple k-means algorithm I found on the internet [11]. After this step, I had the two correct points. The known physical diagonal distance between the markers is then set in the code and used to calculate the cm/px ratio.
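
The sketch below illustrates this combination of template matching and clustering with OpenCV. The template file, the match threshold and the physical diagonal distance are assumptions made for the example, and OpenCV's own cv2.kmeans is used here for brevity instead of the external k-means implementation [11] mentioned above.

# Marker detection sketch: template matching + 2-cluster k-means.
import cv2
import numpy as np

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("marker_square.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)               # all strong matches
points = np.float32(np.column_stack((xs, ys)))

# Collapse the many candidate points into the two marker positions.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, _, centers = cv2.kmeans(points, 2, None, criteria, 10,
                           cv2.KMEANS_RANDOM_CENTERS)
(m1, m2) = centers                              # the two opposite corners

# cm/px ratio from the known physical diagonal between the markers.
DIAGONAL_CM = 200.0                             # placeholder distance
diagonal_px = float(np.linalg.norm(m1 - m2))
cm_per_px = DIAGONAL_CM / diagonal_px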

2.3.3 Points Detection

To determine the starting and ending points of the scene I used black markers with different shapes. The starting point is a black triangle located on top of the robot and the end marker is a circle located on the floor. I used the same method as explained before (template matching); the only thing I had to change was the template used. I passed the image of a triangle or a circle to the function, and the function returned the starting or ending point.

2.3.4 Image Composition

After detecting all the individual parts of the scene I had to combine them into one single image. I set the black areas as working space and the white areas as obstacles. As can be seen in Figure 10, only the black area represents where the robot can move.

Fig. 7. Raw image
Fig. 8. Limit detection

Fig. 9. Object detection
Fig. 10. Final composition

2.3.5 Path-finding

Once I had the working space, I had to calculate the path from point A to point B. One way to achieve this is to scale the image down to 50x30 pixels, apply an A* algorithm and then rescale the result to its original size. This ends with a low-resolution path, and there were problems when I tried to map it to the physical context due to the pixelation of the path. Also, every time we want to calculate a new path, the entire algorithm has to be executed again.


Fig. 11. Path generated by the PRM using the Lagart algorithm.

Another way to accomplish this is to use a PRM (Probabilistic Roadmap) algorithm. This consists of two phases: the first one (training) spreads random points and connects them, building a graph; the second phase applies a path-finding algorithm between two nodes of the graph (from the node which represents point A to the node which represents point B). With this method the initial computing cost is higher, but once you have the graph you can recalculate as many paths as you want at a lower cost.
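
A sketch of the training phase under these ideas is given below. The grid-based collision test and the parameter defaults (70 nodes, 8 connections, as reported in Section 4) are assumptions of the sketch, not the project's exact implementation.

# PRM training sketch: scatter random collision-free points and connect
# each one to its k nearest neighbours whose connecting segment is free.
import random
import math

def segment_is_free(p, q, obstacles, steps=50):
    # obstacles[y][x] is True where the robot may not go.
    for i in range(steps + 1):
        t = i / steps
        x = int(round(p[0] + t * (q[0] - p[0])))
        y = int(round(p[1] + t * (q[1] - p[1])))
        if obstacles[y][x]:
            return False
    return True

def build_prm(obstacles, width, height, n_nodes=70, k=8):
    nodes = []
    while len(nodes) < n_nodes:
        p = (random.randrange(width), random.randrange(height))
        if not obstacles[p[1]][p[0]] and p not in nodes:
            nodes.append(p)
    edges = {p: [] for p in nodes}
    for p in nodes:
        others = sorted((q for q in nodes if q != p),
                        key=lambda q: math.dist(p, q))
        for q in others[:k]:
            if segment_is_free(p, q, obstacles):
                edges[p].append(q)
    return nodes, edges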

2.3.6 Encoding Module

In this context, we can define a route as a set of actions to go from Pstart to Pgoal using the nodes obtained in the PRM module. Encoding this path so it can be sent to Pluto is not trivial, mainly because the path calculated by Zeus is in 2D (x, y), while Pluto lives in a 3D configuration space (x, y, theta). We can simplify the route by dividing it into a sequence of turn-and-move segments. This way, we can represent Pluto's trajectory as an array of (theta, delta) pairs, where theta represents how much the robot must turn and delta how far it must move. Of course, the robot first needs to turn to aim at the next node and then go forward. When Zeus calculates the route, it breaks it down into the nodes that the route contains and sends

them to Pluto as a series of combinations ofturning and moving.
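
The conversion can be sketched as follows. The initial heading of 0 degrees (facing along the x axis) is an illustrative assumption, and in image coordinates the y axis grows downwards, so the sign convention of the turns must match the robot's.

# Encode a list of (x, y) path nodes as (theta, delta) pairs:
# theta = relative turn in degrees, delta = straight distance in cm.
import math

def encode_path(nodes, cm_per_px):
    heading = 0.0                 # assumed starting orientation, in degrees
    commands = []
    for (x0, y0), (x1, y1) in zip(nodes, nodes[1:]):
        target = math.degrees(math.atan2(y1 - y0, x1 - x0))
        theta = (target - heading + 180.0) % 360.0 - 180.0  # shortest turn
        delta = math.hypot(x1 - x0, y1 - y0) * cm_per_px
        commands.append((theta, delta))
        heading = target
    return commands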

2.4 Communication module

The communication module is responsible for managing the communication between the Android device, Zeus and Pluto.

2.4.1 Architecture and protocol

First I had to choose which communication protocol the system would use. I considered Bluetooth first, but connecting three devices at the same time would have been difficult. In the end I chose TCP/IP, because there are already Python-ready implementations [12] that I could use on the Raspberry Pi. I needed to give WiFi capability to both devices, so I bought two compatible WiFi USB adapters (dongles), one for Pluto and one for Zeus, and plugged one into each Raspberry Pi. As these devices are plug-and-play, I did not need to configure anything to start working.

I decided that Zeus would act as a server, offering the wireless connection, while Pluto would be the client. I chose this because in future work more robots like Pluto can be added to the system; they must all connect to the same point, Zeus, and this is the only way it can be achieved. To do it, I used Python sockets. Zeus keeps listening for the commands that the Android device sends and then establishes a parallel connection with Pluto to be able to send it the path. To test this module, I first built a simple chat application between the two Raspberry Pis.
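
A minimal sketch of the server side on Zeus, using plain Python sockets as in [12], could look like the following. The port number and the message format are placeholders, not the project's actual protocol.

# Zeus acting as a TCP server for the Android app and Pluto.
import socket

HOST, PORT = "", 5005            # listen on all interfaces (placeholder port)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(2)                 # one Android client, one Pluto client

while True:
    conn, addr = server.accept()
    data = conn.recv(1024).decode()
    if data.strip() == "START":
        # ...run the vision pipeline and the path-finding here...
        conn.sendall(b"PATH 30.0 25.5;-45.0 40.0")  # example encoded path
    conn.close()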

2.4.2 Android communication

To be able to control the entire system and to see what Zeus is watching, I decided to build an Android application (taking advantage of the fact that almost all smartphones have WiFi). This application also acts as a client (like Pluto) and must be connected to Zeus via WiFi, as if it were an internet access point. The purpose of the application is to connect to the system using TCP/IP sockets (this time in Java) and request what the webcam is seeing. As the image is a matrix of values and we can only send strings over the sockets, I had to find a way


Fig. 12. Android app interface

to convert the image into a string. I achieved this by encoding the image on the server with the Base64 algorithm, sending it as a string, and decoding it again with Base64 on the Android side. There are other methods to encode and decode an image as a string, but this one is already implemented in Java, so it is easier to use with Android. I also put a button in the app that sends a START command to Zeus, which launches the path-finding algorithm, and a capture button to see what Zeus is watching. With this application, the user can interact with the system without having to connect any external devices such as a screen, a keyboard or a mouse to control the two Raspberry Pis.
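
The server-side half of this exchange can be sketched as below, assuming conn is an already-accepted socket connection. The length prefix is an illustrative framing choice; the Android side would perform the mirror-image Base64 decode in Java.

# Send one OpenCV frame to a connected client as a Base64 string.
import base64
import cv2

def send_frame(conn, frame):
    ok, jpeg = cv2.imencode(".jpg", frame)          # compress before encoding
    encoded = base64.b64encode(jpeg.tobytes())
    conn.sendall(len(encoded).to_bytes(4, "big"))   # tell the client how much to read
    conn.sendall(encoded)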

3 EXPERIMENTAL SETUP

For the realization of this project, this is my experimental setup. As shown in the figures, I use white paper for the background, black cardboard boxes as obstacles (this makes the detection easier) and black paper markers on the ground to identify the Pstart and Pgoal points. For the implementation of the action module I used a Tupperware container to hold all the robot elements (a Raspberry Pi and the

MD25 control board), because it is made of plastic and is very easy to work with. For the implementation of the vision module I mounted a webcam on the ceiling of the room, connected to a Raspberry Pi on a table, so that it could capture the entire scene. For the user control module I used an HTC One smartphone with the control application installed on it.

4 RESULTS

For the implementation of the PRM algorithm, I tried to build the graph with several different configurations, but I was interested in having fewer nodes and more connections, rather than more nodes and fewer connections (the fewer nodes we have, the faster the path-finding algorithm will be). My best configuration was 70 nodes with 8 connections each. I tried the two most common path-finding algorithms, A* and Dijkstra. Due to the reduced computing resources of a Raspberry Pi, I decided to create my own path-finding algorithm by mixing them both. The result is the Lagart algorithm, which is considerably faster than both of them for this case. Lagart's pseudo-code is attached in Appendix A.

For the template matching I used the cv2.matchTemplate function of the OpenCV library; you need to pass the template you are looking for and the detection method. For the opening transformation I used cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel), for closing cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel) and for dilation cv2.dilate(img, kernel, iterations=1). In all cases I used a 7x7 pixel square kernel; this size was chosen after trying different values.

For this project, the height at which Zeus is located is not critical. The higher it is placed, the lower the resolution per cm we get, so it is best to put it as close as possible while still being able to see the markers. A good place to locate it is the ceiling. I also had to select the resolution of the captured image: a bigger resolution means more pixels to iterate over, which leads to more processing time, but also more resolution per cm. I chose a 640x480 px image resolution.



After analyzing the methodology of all the separate modules and combining them, here are the results. Combining all the modules, I accomplished the objective of this project: creating an autonomous robot guided only by computer vision. This robot, equipped with three wheels, is able to avoid predetermined physical obstacles and go from a point A to a point B without using any sensor except the motor encoders, which report how much the robot has moved or turned. I applied the computer vision, algorithm optimization and artificial intelligence knowledge learned at the Universitat Autonoma de Barcelona to the computing module Zeus, making it possible to control all the robots available (one in this case). I created an algorithm that, for this case, is faster than the most-used path-finding algorithms, reducing the execution time on the limited resources of a low-cost computational module such as the Raspberry Pi; it is true that this new algorithm may reduce the execution time only for this particular case. I created a Python library to ease the connection between a Raspberry Pi and an MD25 motor controller board using Adafruit's I2C library. I also created an Android application able to control the whole system. Figure 13 shows how much less processing time the Lagart algorithm takes for this particular case compared to two of the most-used path-finding algorithms, A* and Dijkstra.

Fig. 13. Comparison of path-finding execution times (in ms): A* 1255, Dijkstra 1167, Lagart 11.

5 CONCLUSIONS

After working in the fields of robotics and computer vision for months and being in direct contact with this kind of technology, I can draw some conclusions. First, from my point of view, computer vision is fully applicable to the field of autonomous robots. With current technology (growing computing power, high camera resolutions, etc.), applying computer vision in real time is not an obstacle for autonomous robots. Another conclusion is that whenever you work with computer vision, the project should be done in a controlled environment, which can become an inconvenience when you want to work in real-world situations. The light changes position, casting shadows behind the obstacles and objects, and these can be very difficult to handle, since it is not obvious how to distinguish the real object from its shadow. There are many uncontrolled physical conditions that make the probability of failure very high. As this project has been done in controlled conditions (white background, fixed light, black objects, markers, etc.), I can say that the result is successful and the system is functional in all the cases tested.

Thanks to the appearance of fast wireless network technology and the speed of WiFi networks (a very widespread technology), it is nowadays possible to remove the main computation from the autonomous robots themselves and centralize it in a remote place, communicating over wireless networks, as I have demonstrated in this project. To conclude, this technology is getting closer to the consumer every day and the applications where autonomous robots can be used keep increasing in all fields.

6 APPLICATIONS

In this section I explain possible commercial uses of this technology extrapolated to the real world. For example, this project could be applied to help blind people: one robot could guide the person on the street, and a quadcopter carrying a webcam could follow and analyze the scene. With the help of this robot, blind people


would not need to use guide dogs, which are usually really hard to train and of which there are not enough for so many blind people. Another application could be to use this system to manage autonomous vehicles at smart-city intersections. Positioning one camera (or more, to make the system more robust), the system could communicate with the cars via the wireless network and improve the interaction between them at a conflictive intersection. The system could also take pedestrian movement into account to avoid accidents. The last example I can imagine is applying the system to security surveillance: one or more cameras could watch the scene and one or more robots could act if a robber is detected.

This project also has a high educational value as teaching support. As the project is highly modular, it will be used as practicum support for some related subjects at the Universitat Autonoma de Barcelona, such as Robotics or Computer Vision. From this project, a student can acquire knowledge of robotics, computer vision, image recognition, communication protocols, Android development and hardware development combined into a single project, building up an entire fully working system that one can interact with and see working in real life. One main contribution of this project is the definition of small modules that can be used independently. These are:

• Action Module: Uses the Python library I made for connecting the Raspberry Pi and the MD25 board. It also uses the Adafruit I2C library. It acts as a client of the communication module.

• Vision module: Runs the PRM and the Lagart path-finding algorithm I made to generate the path. It also acts as a server for the communication module.

• Communication Module: It is split across three devices: the Android phone, Pluto and Zeus. It is based on a client-server model using TCP/IP sockets.

You can download the source code of the entireproject here: mv.cvc.uab.es/robotrp

7 FUTURE WORK

To improve the system, a high-capacity battery could be added to Pluto; at the moment it must be plugged in and cannot move very far due to the length of the power cable. Zeus could also be mounted on a quadcopter or another UAV flying over the scene, removing the need to place it in a fixed location (usually the ceiling). In this case, the quadcopter could follow the robot (Pluto in this case) and we would not need the delimiter markers or a fixed workspace.

A further possibility would be to add another action module (another Pluto, for instance) and make them interact among themselves to achieve a concrete action that could only be done by cooperating. The system could also become online: Zeus would send only the first part of the calculated path (from node a to node a+1) and, when the robot arrives at the last node, recalculate the path and send it the next part. This would improve the precision of the robot and also allow it to avoid obstacles if the conditions change (for example, if another obstacle is added to the scene while the robot is moving). The last improvement I would make is to turn the threshold of the object detection module into a dynamic one, choosing its value depending on the conditions of the scene.

APPENDIX A
LAGART ALGORITHM

I developed this path-finding algorithm because of the limited computing resources of the Raspberry Pi. I built it by mixing the two most-used path-finding algorithms, A* and Dijkstra. The idea of the algorithm is as follows. We keep a single node list representing the current path. At the beginning this list only contains node A (where the path starts). Then a loop iterates until the list contains node B (the node we want to reach). On every iteration, we take the last node we visited, take its neighbours and sort them. The heuristic I used to sort them is the Euclidean distance from the neighbour to the goal node. By doing this, the algorithm always selects the neighbour which is closest to the end node.


The algorithm keeps only one path list. This makes it faster than A*, which maintains more lists and sorts them on every iteration of the loop. My algorithm does not guarantee the shortest path, but it is considerably faster, a trade-off that is acceptable for this project. Here is the pseudo-code of the Lagart algorithm:

Lagart’s pseudo-code

path ← [startNode]
while goalNode not in path do
    currentNode ← path[last]
    distances ← []
    for i = 0 to len(currentNode.neighbours) do
        neighbour ← currentNode.neighbours[i]
        distance ← euclideanDistance(neighbour, goalNode)
        distances.append((distance, neighbour))
    end for
    sortedList ← sort(distances, ascending)
    selectedNeighbour ← selectFirst(sortedList)
    path.append(selectedNeighbour)
end while
return path
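
For reference, a direct Python rendering of this pseudo-code could look as follows, assuming the PRM graph is a dictionary mapping each (x, y) node to its list of neighbours. The visited set and the dead-end check are additions of this sketch, not part of the original pseudo-code, added so that the greedy loop cannot revisit nodes.

# Python sketch of the Lagart greedy path-finding algorithm.
import math

def lagart(graph, start, goal):
    path = [start]
    visited = {start}
    while path[-1] != goal:
        current = path[-1]
        candidates = [n for n in graph[current] if n not in visited]
        if not candidates:
            return None                      # dead end: no path found
        # Pick the neighbour closest (Euclidean distance) to the goal.
        nearest = min(candidates, key=lambda n: math.dist(n, goal))
        path.append(nearest)
        visited.add(nearest)
    return path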

ACKNOWLEDGMENTS

I would first like to thank Fernando Vilarino, who has been my tutor and who gave me some of the ideas for this project; he guided me so the project could be successful. I would also like to thank the Computer Vision Center (CVC) for all the resources provided. Thanks also to Jordi Pars, who provided me with some of the materials for Pluto, to Centre Medic Martorell for providing materials for the setup, and to Dani Torrescusa for the programming help given. I would like to thank Felipe Lumbreras for helping me repair the robot and Metze for correcting this text. Lastly, I would like to thank all my family and friends who have also been involved in the project, helping me from the beginning.

REFERENCES

[1] OpenCV dev team. "Introduction to OpenCV". http://docs.opencv.org/trunk/doc/py_tutorials/py_tutorials.html (Accessed June 25, 2014)

[2] MD25 - Dual 12 Volt 2.8 Amp H-Bridge Motor Drive documentation. http://www.robot-electronics.co.uk/htm/md25tech.htm

[3] F. Mattias. Holonomic versus nonholonomic constraints. Faculty of Technology and Science, Karlstads universitet, Karlstad, 2012.

[4] Cole, L. Visual Object Recognition using Template Matching. Australian National University, Australia.

[5] Walker, H. M. Degrees of freedom. Journal of Educational Psychology, Vol 31(4), Apr 1940.

[6] M. Peter. Are we making real progress in computer vision today? Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854, USA.

[7] Gonzalez, R. C., Woods, R. E. Digital Image Processing.

[8] Adafruit team. Adafruit I2C Python library. https://github.com/adafruit/Adafruit-Raspberry-Pi-Python-Code (Accessed June 25, 2014)

[9] Elinux team. Raspberry Pi compatible USB webcams. http://elinux.org/RPi_USB_Webcams (Accessed June 25, 2014)

[10] Choset, H., et al. Principles of Robot Motion: Theory, Algorithms and Implementation. MIT Press, 2005.

[11] Python implementation of the K-means clustering algorithm. http://pandoricweb.tumblr.com/post/8646701677/python-implementation-of-the-k-means-clustering

[12] TCP/IP Client and Server. http://pymotw.com/2/socket/tcp.html (Accessed June 25, 2014)