Low-cost Wireless Mobile Ad-Hoc Network Robotic Testbed

Christopher G. Wilson and Thaddeus Roppel
Electrical and Computer Engineering Department

Auburn University, AL 36849-5201, USA
Phone: (334) 844-1814, Fax: (334) 844-1809

Email: [email protected] and [email protected]

Abstract—Collaborating mobile robots equipped with WiFi transceivers are configured as a mobile ad-hoc network. Algorithms are developed to take advantage of the distributed processing capability inherent to multi-agent systems. The focus of this study will be to determine the optimal amount of communication which allows the robots to share a sufficiently detailed global map, while keeping their processing time and energy usage to a minimum. A hardware testbed is described, which will be used to examine these trade-offs in an indoor, laboratory-scale test area. Single-robot test results are presented.

I. INTRODUCTION

Several research groups have developed, or are working toward implementing, testbeds for studying wireless communication among collaborating robots. Mobile Emulab at the University of Utah allows remote access to robots carrying motes and single-board computers through a fixed indoor field of sensor-equipped motes, all running the user's selected software. In real time, interactively or driven by a script, remote users can position the robots, control all the computers and network interfaces, run arbitrary programs, and log data [1]. The Oklahoma State University Multi-Vehicle Coordination Testbed implements a group of robots for testing and implementing hybrid control approaches for formation control, multi-agent coordination, multi-robot learning, and real-time wireless networked control system design [2]. A testbed implementation at the City College of the City University of New York includes a total of 22 nodes emulating real-life mobile robots, where two nodes are FPGA boards and the remaining ones are laptop and desktop PCs [3].

These are just a few of the many examples that can be found in the literature. A review is provided in [4]. Each of these implementations offers a unique approach to mobile robot communications. However, in each case, a group of mobile vehicles, either real or emulated, is provided, along with a means of RF communication and a method to determine and control position.

The testbed reported here is similar in many respects to those just described. However, a unique objective is to study the trade-off between the amount of communication and the ability to find a target, while employing sensor fusion to aid in mapping and navigation. This requires many runs to be conducted repetitively and repeatably. In order to support this effort, a large amount of the path planning and navigation is off-loaded to a centralized server, without compromising the real-time, real-hardware aspects of the experiment. The character of the system as a mobile ad-hoc network is largely maintained, while the speed of testing is greatly increased compared with performing all processing on-board the robots.

Fig. 1. Experiment field with one robot (blue object near top center).

II. EXPERIMENTAL SETUP

The experimental setup is designed to test the robots' ability to cooperatively navigate and simultaneously map an environment. An additional constraint is to construct and operate the testbed inexpensively. To keep costs down, the present testbed is in an indoor, laboratory-controlled environment.

A. The Field

The experiment is housed in an 8 ft. x 16 ft. field designed to mimic an indoor environment consisting of a hallway with three rooms on either side. The walls are 12 in. tall, and the top is open. The field is shown in Figure 1.

B. The Robots

1) Hardware: The Connex 400xm board from Gumstix, Inc. was selected as the primary mobile embedded computing platform. It is a 400 MHz Intel XScale ARM-based processor with 64 MB of RAM and 16 MB of flash memory, running embedded Linux 2.6.18 with the µClibc library. The Connex has two expansion connectors, which allow it to connect to the Robostix micro-controller board and the Wifistix wireless card. The Robostix is an Atmel ATmega128 micro-controller-based interface board responsible for translating between the Gumstix and the sensors and motors. Communication to the Gumstix occurs via a 56 kbps serial connection. To communicate with other robots, the Gumstix uses the Wifistix, which is based on the Marvell DRCM81 module from Wistron NeWeb Corporation with the Marvell 88W8385 chipset [5]. The chipset supports 802.11b/g in both infrastructure and ad-hoc connection modes. Both the Connex and the Wifistix are shown in Figure 2.

Fig. 2. Wifistix (top card) and Gumstix (bottom card).

Fig. 3. IR Sensor turret assembly.

For robot interaction with the environment, each robot has two continuous-rotation drive servos arranged in a standard differential-drive layout. Two adjustable-height sliders keep the robot from tipping forward or backward. To detect obstacles, each robot has a pair of Sharp GP2D12 analog IR range sensors mounted on a turret attached to a standard servo that rotates 180°, allowing full 360° coverage by the two sensors. Figure 3 shows the turret assembly. Two modified optical mice are mounted underneath the robot to provide position and rotation through odometry calculations.
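The paper does not give the odometry equations; the following C++ sketch shows one common formulation for two-mouse odometry, assuming the mice are mounted symmetrically a distance b to the left and right of the robot's center with their axes aligned to the robot's axes (the actual mounting geometry and calibration are not specified), and using a small-angle approximation per update:

    #include <cmath>

    // Robot pose in the field frame (x, y in cm; theta in radians,
    // counterclockwise positive).
    struct Pose { double x, y, theta; };

    // One mouse's displacement since the last poll, already scaled to
    // cm, expressed in the robot frame (x forward, y left).
    struct MouseDelta { double dx, dy; };

    // Fold one pair of mouse readings into the pose estimate.  The
    // left mouse sits at (0, +b), the right mouse at (0, -b).
    void updateOdometry(Pose &p, MouseDelta left, MouseDelta right, double b)
    {
        // Pure rotation shifts the two mice in opposite x directions.
        double dtheta = (right.dx - left.dx) / (2.0 * b);

        // Translation of the robot center, in the robot frame.
        double dxr = 0.5 * (left.dx + right.dx);
        double dyr = 0.5 * (left.dy + right.dy);

        // Rotate the robot-frame step into the field frame and accumulate.
        p.x += dxr * std::cos(p.theta) - dyr * std::sin(p.theta);
        p.y += dxr * std::sin(p.theta) + dyr * std::cos(p.theta);
        p.theta += dtheta;
    }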

Two 7.2 V, 2800 mAh batteries provide power to three LM7805 5 V/1 A regulators, which, in turn, power the motors, sensors, and Gumstix. Since the Gumstix stack can draw a peak of more than 1.5 A when the Wifistix is fully powered and actively acquiring a network, two regulators are required to power the sensors and the Gumstix stack. Both of these electronics regulators are attached to the same battery supply. The motor power supply is provided by the second battery and the third LM7805.

Additionally, a base station is used to provide data logging and computational support. The floating-point algorithms developed in a previous simulation experiment required over 5 minutes to complete on each robot. These same algorithms run in less than 1 second on the base station computer.

2) Software: Many of the complex mapping algorithms were developed in a previous software simulation experiment [6]. There are three code bases used for this project: the Robostix micro-controller code, the Gumstix ARM-based code, and the desktop base-station code. The ATmega128 micro-controller on the Robostix is programmed in C, while the Gumstix and desktop code are both written in C++.

The Robostix code is responsible for sending the PWM signals to all three servos and acquiring the IR distance sensor and PS/2 mouse data. The software initially sets up the necessary variables and timing parameters and then enters an infinite loop, handling serial data as it comes in and checking status flags. Each servo is controlled via a PWM register in the ATmega128 micro-controller. Since the drive servos are continuous-rotation, changing the value in the corresponding PWM register controls forward and reverse motion. Thus, full drive control over the robot is maintained by modifying the two PWM registers. The turret servo is controlled via a third PWM register. Since the servo needs to have settled at its new position before the IR sensor readings are taken, an additional timer is used to ensure that the servo has enough time to move and settle. Once the turret servo has settled, the IR sensors are read via the ATmega128's analog-to-digital converter (ADC) inputs. Each IR sensor is polled four times, with 32 milliseconds between readings. This data is then sent to the Gumstix, where a conversion factor determined by calibration testing is applied to the raw data. The micro-controller polls the optical mice for movement data every 1/8 second via Jan Pieter de Ruiter's Atmel PS/2 library [7]. The returned three-byte packets are parsed and sent to the Gumstix.
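As an illustration of this three-register PWM scheme, the following sketch (C++ compiled with avr-g++) configures ATmega128 Timer1 in fast PWM to drive the three servos. The timer choice, pins, clock (assumed 16 MHz), and pulse-width constants are assumptions; the paper does not give them:

    #include <avr/io.h>
    #include <stdint.h>

    // Assumed: 16 MHz clock, prescaler 8 -> 0.5 us per tick;
    // TOP = 40000 gives the standard 20 ms servo frame.
    #define SERVO_STOP 3000u   // 1.5 ms: continuous servo holds still
    #define SERVO_FWD  4000u   // 2.0 ms: full speed one direction
    #define SERVO_REV  2000u   // 1.0 ms: full speed the other

    static void pwm_init(void)
    {
        ICR1   = 40000;                       // TOP for fast PWM mode 14
        TCCR1A = (1 << COM1A1) | (1 << COM1B1) | (1 << COM1C1) | (1 << WGM11);
        TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11);   // prescaler 8
        DDRB  |= (1 << PB5) | (1 << PB6) | (1 << PB7);  // OC1A/OC1B/OC1C out
    }

    // Mirror-mounted drive servos turn in opposite senses to go straight.
    static void drive_forward(void) { OCR1A = SERVO_FWD;  OCR1B = SERVO_REV; }
    static void drive_stop(void)    { OCR1A = SERVO_STOP; OCR1B = SERVO_STOP; }

    // Turret position; a timer elsewhere waits for the servo to settle
    // before the IR sensors are read.
    static void turret_set(uint16_t ticks) { OCR1C = ticks; }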

The Robostix disables the turret servo while the robot is moving. This reduces movement errors caused by the torques that the turret servo exerts on the robot. Once the robot stops moving, the turret is reactivated.

The Gumstix code is responsible for processing the raw mouse data, extracting the robot's position, and keeping the robot following the planned paths. It is also responsible for sending the sensor data obtained from the Robostix back to the base station for more efficient processing. The Gumstix code is programmed in C/C++ using Glib for its signaling and IO monitoring system. Glib 1.2.10 was chosen because it is included in the Gumstix build system. Using Glib's event loop and IO notification system allows a single-threaded implementation to monitor both the wireless network IO from other robots and the serial-port IO from the Robostix. The program is structured such that three channels are watched, each with a corresponding handler function. As mentioned before, the network connection is converted into a Glib channel, and a function is specified to parse the packets and determine whether any actions should be performed. The serial port is likewise monitored for communication with the Robostix, with a function designed to parse the serial stream and execute any necessary robot functions based on that input. Lastly, stdin is monitored for user debugging of the code and of the robot's state.
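A sketch of this single-threaded channel-watching structure follows, written against the modern GLib 2 API for clarity (the robot itself uses Glib 1.2.10, whose calls differ slightly); the file descriptor and the parser are placeholders:

    #include <glib.h>

    // Invoked when a watched channel becomes readable; the same shape
    // of callback serves the network socket, the Robostix serial port,
    // and stdin.
    static gboolean on_serial_ready(GIOChannel *chan, GIOCondition cond,
                                    gpointer user_data)
    {
        gchar buf[256];
        gsize nread = 0;
        if (g_io_channel_read_chars(chan, buf, sizeof buf, &nread, NULL)
                == G_IO_STATUS_NORMAL) {
            // parse_serial(buf, nread);  // hypothetical parser
        }
        return TRUE;  // keep the watch installed
    }

    int main(void)
    {
        GMainLoop *loop = g_main_loop_new(NULL, FALSE);

        // Wrap an already-open file descriptor (serial port, network
        // socket, or fd 0 for stdin) and watch it for input; the fd
        // here is a placeholder.
        GIOChannel *serial = g_io_channel_unix_new(3);
        g_io_channel_set_encoding(serial, NULL, NULL);  // raw bytes
        g_io_add_watch(serial, G_IO_IN, on_serial_ready, NULL);

        g_main_loop_run(loop);  // one thread services every channel
        return 0;
    }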

The Gumstix code can also be thought of as a finite state machine with a few operating modes: Calibration, Setup, Search, Homing, and Target Found. The robot initially boots into the Setup mode. In this mode, the servos are disabled but the sensors are initialized. In the Calibration mode, the sensors are enabled and the motors are put under manual user control. The Search mode is the active mode, with the robot actively scanning its sensors, transmitting data back to the base station, and driving any paths sent from the base station. Once a robot finds the target, it is put into the Target Found mode, whereupon it shifts into a state similar to the Setup mode. At this point, the robot has effectively finished its task but still needs to communicate with its neighbors to help them find the target. When a robot reaches the Target Found mode, it tells the other robots to switch to the Homing mode and relays the position of the target.
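These modes can be captured in a few lines of C++; the mode names below are from the paper, while the transition events and their triggers are illustrative:

    enum class Mode { Setup, Calibration, Search, TargetFound, Homing };

    struct RobotFsm {
        Mode mode = Mode::Setup;          // the robot boots into Setup

        void onStartSearch()   { mode = Mode::Search; }

        // This robot sighted the target itself.
        void onTargetSighted() {
            mode = Mode::TargetFound;
            // broadcastTargetPosition();  // hypothetical: notify neighbors
        }

        // Another robot relayed the target position.
        void onTargetRelayed() {
            if (mode != Mode::TargetFound)
                mode = Mode::Homing;
        }
    };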

The desktop code is used for data logging and for computation of the destination point and path. The code is written in C++ and based on the gtkmm C++ wrappers for the GTK+ library. It is designed such that each robot is kept isolated in its own class instance, referred to as the robotClass, and any communication that originally occurred between robots now takes place between the robotClass instances. In this way, the communication aspect is preserved and can be migrated back to the robots when more processing power is available. Additionally, this allows a variable number of robots to be supported: it is only necessary to instantiate a new class instance per additional robot.
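A minimal sketch of this per-robot isolation is shown below; the member names are hypothetical, since the paper only states that each robot is kept in its own robotClass instance keyed by its number:

    #include <map>
    #include <string>

    class robotClass {
    public:
        explicit robotClass(int id) : id_(id) {}
        // Parse one XML packet from this robot and update its state.
        void handlePacket(const std::string &xml) { /* parse, update maps */ }
    private:
        int id_;
        // per-robot pose estimate, view map, local map, ...
    };

    // One instance per robot; supporting another robot means adding
    // one more entry, e.g. robots.emplace(2, robotClass(2));
    std::map<int, robotClass> robots;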

The key then becomes knowing which robot the data coming in over the network corresponds to. The robots communicate over an ad-hoc network via an XML-based data format. This format was chosen because it is desirable to have all data logged to a MySQL database and for the data to be in a human-readable format. In keeping with this idea, each packet is required to have a minimum of two tags. The source tag, which serves as the root element, is either <r> for a transmission from a robot or <b> for a transmission from the base station. This is followed by the robot number tag <n>#</n>. If the source tag is <r>, then the robot number is the number of the robot sending the packet. If the source tag is <b>, then it is the robot to which the packet is to be sent. If the packet is from the base station and has a number of zero, then all robots will listen to the data. The element names were chosen to be one letter long in an effort to reduce the amount of data transferred while still providing human readability. A full listing of the element tags used is presented in Tables I and II. Since the original implementation had all processing take place on the robots, a sensor data element was not added to the data format. Instead, this information is sent back within the debugging tag and is correctly parsed by the desktop program.

TABLE I
ROBOT TO ROBOT/BASE STATION PACKET FORMAT

<r>                          Robot header (required).
<n>#</n>                     Source robot's number (required); # = integer.
<L>x y theta</L>             Location (optional).
<T>0</T> or <T>1 x y</T>     Target information (optional): 0 = target not
                             found; 1 = target found at location x, y.
<g>string</g>                Debugging values (optional).
<P/>                         Path-received acknowledgement.
</r>                         End of the robot packet (required).
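For concreteness, a robot packet reporting robot 2's pose and a found target might look like this (values illustrative):

    <r><n>2</n><L>120 45 90</L><T>1 130 50</T></r>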

Using the source number from the robot packets, the desktop code takes the sensor readings from each robot and distributes the data to the specific robot class instance associated with that robot. The robotClass is responsible for converting the raw IR sensor values to an approximate distance through a calibrated curve-fit function. This distance data is then placed on a two-dimensional image centered on the robot, called the view map, taking into account the orientation of the robot. To keep sensed distances in scale with the field, the calculated distances are restricted to a maximum of 62 cm. Anything less than this distance is plotted as a black line on the view map, while anything greater is clamped to 62 cm. An example view map can be found in Figure 4. For the user's benefit, the displayed view map carries additional data. The center red dot is the robot's position, with the vector between the red and blue dots indicating the orientation. The smaller surrounding dots are individual sensor readings.
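The calibrated curve fit itself is not given in the paper; a minimal sketch of such a conversion, using the common inverse-distance fit for the GP2D12 with placeholder constants (the authors determine their own by calibration testing), might look like:

    #include <algorithm>

    // The GP2D12's output falls off roughly as 1/distance, so a
    // typical calibrated fit is d = A / (adc - B).  A and B here are
    // hypothetical; only the 62 cm clamp comes from the paper.
    double irToDistanceCm(int adc)
    {
        const double A = 6787.0;
        const double B = 3.0;
        double d = A / (static_cast<double>(adc) - B);

        // Clamp to the field scale used for the view map.
        return std::min(d, 62.0);
    }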

Due to the errors that occur in the obtained mouse data, the orientation of the robot accumulates errors very quickly. In an effort to zero these errors as often as possible, the view map undergoes a line-extraction algorithm based on the Hough transform to determine the angles of the walls relative to the robot [8]. Using the a priori knowledge that the robots are in an orthogonal indoor environment, the walls should lie at multiples of 90°. The algorithm then computes an angular offset by averaging the deviation of the extracted angles from the nearest multiple of 90°. This angular offset is then added to the robot's orientation to obtain the corrected angular estimate.
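A compact sketch of this offset computation in C++ (the paper does not show its implementation; names and the degree convention are illustrative):

    #include <cmath>
    #include <vector>

    // Average how far each Hough-extracted wall angle deviates from
    // the nearest multiple of 90 degrees.  Per the paper, the result
    // is added to the robot's orientation estimate.
    double angularOffset(const std::vector<double> &wallAngles)
    {
        if (wallAngles.empty()) return 0.0;

        double sum = 0.0;
        for (double a : wallAngles) {
            // Deviation from the nearest multiple of 90, in [-45, 45).
            double dev = std::fmod(a, 90.0);
            if (dev < 0.0)   dev += 90.0;   // fmod keeps the sign of a
            if (dev >= 45.0) dev -= 90.0;
            sum += dev;
        }
        return sum / wallAngles.size();
    }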


TABLE II
BASE STATION TO ROBOT PACKET FORMAT

<b>                          Base station header (required).
<n>#</n>                     Destination robot's number (required);
                             # = integer.
<L>x y theta</L>             Location (optional).
<T>0</T> or <T>1 x y</T>     Target information (optional): 0 = target not
                             found; 1 = target found at location x, y.
<s>code</s>                  State to put the robot in (optional):
                             0 = Setup, 1 = Start Search, 2 = Homing.
<P n=#>x0 y0 x1 y1 ...</P>   Path to destination (optional); n = the number
                             of points in the path, x0/y0 = the first point,
                             followed by subsequent points.
</b>                         End of the base station packet (required).
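For example, a base-station packet commanding robot 2 to start searching and follow a two-point path might look like this (values illustrative):

    <b><n>2</n><s>1</s><P n=2>30 40 60 40</P></b>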


The mapping component of the desktop software uses occupancy grids to probabilistically map obstacles on a shared map. A local map, a larger and lower-resolution map, is used to perform destination-point and path determination. In a previous simulation experiment, the robot was described by a single point in the local map. However, in the present physical implementation, such a representation would lead to poor resolution and dramatically increase the relative size of obstacles. Thus, the map is recast so that the robot occupies multiple pixels in the local map. Without modification, the robot would plan paths that run into obstacles, so a further step of dilating the obstacles to slightly larger than the radius of the robot is implemented. In this manner, the planned paths keep the robot from driving into obstacles. The local map is updated at each iteration by merging it with the view map, taking into account the robot's position and orientation to properly place the view map in the local map's reference frame.
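A minimal sketch of the dilation step on a boolean occupancy grid (the grid representation and the radius in cells are assumptions):

    #include <vector>

    using Grid = std::vector<std::vector<bool>>;   // true = obstacle

    // Grow every obstacle by radius r cells (slightly more than the
    // robot's radius), so the planner can treat the robot as a point.
    Grid dilate(const Grid &in, int r)
    {
        if (in.empty()) return in;
        int h = in.size(), w = in[0].size();
        Grid out(h, std::vector<bool>(w, false));

        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                if (!in[y][x]) continue;          // only grow obstacles
                for (int dy = -r; dy <= r; ++dy)
                    for (int dx = -r; dx <= r; ++dx) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w &&
                            dy * dy + dx * dx <= r * r)   // circular kernel
                            out[ny][nx] = true;
                    }
            }
        return out;
    }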

Fig. 4. Example robot view map.

Fig. 5. Robot view map with lines extracted.

If the robots are allowed to communicate, they share their view maps, positions, and orientations. Each robot then merges the other robots' view maps into its local map. This combined map is then used by the robots to determine a number of destination points. Each robot then calculates the cost to reach each possible destination point and shares the resulting list with the other robots. The robot with the lowest cost in a particular region "wins" that region, and all other robots remove any points in that region from their lists of possible destinations. Once the destination-point arbitration is over, each robot plans a path to its chosen point and proceeds to it. Currently, all of this arbitration occurs at the base station; however, it is coded in such a way that it will be easy to implement in a distributed manner on the robots.
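A sketch of this lowest-cost-wins arbitration follows; the types and names are illustrative, and the cost function and region definitions are not specified here:

    #include <map>
    #include <vector>

    // One robot's bid: its cost to reach a destination point in a region.
    struct Bid { int robot; int region; double cost; };

    // Returns region -> winning robot.  Losing robots then drop that
    // region's points from their lists of possible destinations.
    std::map<int, int> arbitrate(const std::vector<Bid> &bids)
    {
        std::map<int, int> winner;
        std::map<int, double> best;

        for (const Bid &b : bids) {
            auto it = best.find(b.region);
            if (it == best.end() || b.cost < it->second) {
                best[b.region]   = b.cost;
                winner[b.region] = b.robot;
            }
        }
        return winner;
    }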

Fig. 6. Merged local map with the view map in Figure 4.

A robot is considered to have found a target when it is within line-of-sight at a pre-defined range.

C. Testing Procedure

To perform the tests, the robots are placed in a group on the field and are given their respective starting locations by the desktop code. It then directs the robots to begin scanning their locations via the IR distance sensors. During each revolution of the turret, the robots send their data back to the base station for processing. All of the necessary processing occurs, and the paths are generated and transmitted to the robots. The robots then receive their paths and proceed to follow them. All robots are synchronized per phase of scanning, communication, and traveling during one iteration. Each testing run consists of multiple iterations until the robots find the target, stop working, or get stuck.

Once the desktop software determines that a robot can see the target, that robot is put into the Target Found mode. It then relays the target's location to the other robots, which switch into the Homing mode. The run is considered a success when the target is found and multiple robots cluster around it.

III. RESULTS

To date, tests with one robot have been performed. The local-map series for one particular run, with successful mapping but unsuccessful target detection, can be found in Figures 7, 8, 9, and 10. In these maps, the blue dot is the robot's current position, the red line is the planned path to the destination point, and the green dot is the target's current location. The robot successfully found the target in roughly 75% of the tests. In some of these instances, the robot luckily picked the proper path and found the target in very short run times. In the other tests, the robot lost track of its location due to accumulated errors and ran into a wall. This was partially fixed by utilizing the image-processing routines discussed in Section II-B2. However, the position estimate still progressively deteriorates due to small angle errors accumulating during the movement phase, while the IR sensors are not running. Additionally, the robots currently have no bump sensors, so algorithms are being developed to monitor the mice for movement data while the robot is supposed to be moving.

Fig. 7. Second iteration local map.

Fig. 8. Tenth iteration local map.

Fig. 9. Nineteenth iteration local map.

Fig. 10. Forty-first iteration local map.

IV. FUTURE WORK

Experiments with multiple robots are currently in progress. More detailed data logging will be implemented to allow scenarios to be replayed offline for comprehensive analysis, merging the robot's reported position with its actual position. Additionally, more powerful processing capabilities will be brought on-board the robot, allowing all processing to occur on the robot and enabling more comprehensive testing of the power trade-offs that occur in multi-agent systems between communication, processing, and target-finding.

V. CONCLUSION

A testbed has been designed that provides for testing of important communications-related trade-offs in multi-robot, collaborative networks. The testbed is currently undergoing evaluation with a single robot. Since each robot executes the same code, the transition to multiple robots will be rapid. This testbed should soon be available to support a variety of educational and research opportunities that may be of interest to a broad user community.

ACKNOWLEDGMENTS

The authors are grateful for the sponsorship of this work in part by Grant #W911NF-06-1-0334 from the Army Research Office, in part by the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) and the National Science Foundation through Grant #NSF-IIS-0738088, and in part by the Wireless Education and Research Center (WEREC) at Auburn University.


REFERENCES

[1] D. Johnson, T. Stack, R. Fish, D. M. Flickinger, L. Stoller, R. Ricci, and J. Lepreau, "Mobile Emulab: A Robotic Wireless and Sensor Network Testbed," in Proc. 25th IEEE International Conference on Computer Communications (INFOCOM 2006), 2006, pp. 1-12.

[2] R. Fierro and E. Edwan, "The OSU Multi-Vehicle Coordination Testbed," in Proc. 45th Midwest Symposium on Circuits and Systems (MWSCAS 2002), vol. 3, 2002, pp. III-41-III-44.

[3] I. Hokelek, J. Zou, M. Uyar, A. Abdelal, J. Xiao, N. Chakravarthy, M. Fecko, and S. Samtani, "Testbed Experiments of Dynamic Survivable Resource Pooling Using FPGA-Based Robot," in Proc. Third IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMOB 2007), 2007, pp. 65-65.

[4] I. F. Akyildiz, T. Melodia, and K. R. Chowdhury, "Wireless multimedia sensor networks: Applications and testbeds," Proc. IEEE, pp. 1-1, 2008.

[5] Gumstix, "Gumstix Wiki - Wifi." [Online]. Available: http://docwiki.gumstix.org/Frequently_asked_questions/Wifi

[6] A. Ray, "Cooperative Robotics using Wireless Communication," Master's thesis, Auburn University, 2005.

[7] J. P. de Ruiter, "PS/2 library." [Online]. Available: http://panda.cs.ndsu.nodak.edu/~achapwes/PICmicro/ps2/ps2.html

[8] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice Hall, 1989.