
OMNIDIRECTIONAL MOBILE ROBOT NAVIGATION IN VIRTUAL ENVIRONMENTS

Fawaz Y. Annaz, Senior Member, IEEE, Kai Li Lim, Afiq Farees Jamil

    Abstract

Research in autonomous mobile robots has gained increasing attention in recent years, particularly for coordinating rescue missions and inspecting structures in disaster zones. This paper aims to contribute towards such advancements by merging a real-time mobile robot with a virtual environment, allowing navigation and control algorithms to be developed rapidly and at minimal cost.

The proposed environment generates random missions that depict real hosting environments, allowing robots to move unrestricted in any open surroundings, without the need for the physical presence of any obstacles. To achieve this, a GUI was developed to randomly create missions of different sizes and complexities.

The omnidirectional Robotino featured as the rescuing robot, and it was programmed with various algorithms to locate targets. To demonstrate real-time robot-VE merging, test results of two basic algorithms are presented and compared, namely the embedded wall-following algorithm and the Trémaux algorithm.

Index Terms — Mobile Robot, Robotino, Rescue Missions, Virtual Environment (VE), Wall-Following (WF), Hardware (HW) Navigation

    I.  INTRODUCTION 

Autonomous robots are designed to discretely discover and interpret their surroundings; thus, they are expected to orient themselves around obstacles to reach a target location from an initial position.

This is what is referred to as path planning. In real-time autonomous navigation, path planning becomes more challenging in an unknown, dynamically changing environment [1], [2].

Many algorithms can be applied to solve static missions, with varying success and efficiency levels, depending on the nature of the mission, the way information is revealed, and the goal location. Approaches in autonomous robot navigation include: the WF method [3]; potential field methods [4], [5]; the virtual target approach [6], [7]; landmark learning [8]; edge detection and graph-based methods [9]; Pledge and Trémaux [10], [11]; vector field histogram methods, dynamic window approaches, neural-network-based methods, fuzzy logic methods [12], [13], [14], [15], [16], [17]; and many others.

Regardless of the algorithm, navigation has traditionally been conducted in a confined environment, such as a factory floor or a maze. The common steps have been to physically develop and build the environment, and then to program or train robots to navigate their way to the final goal. Mazes have to be substantially rugged to survive repeated tests. Therefore, they are

  • 8/19/2019 Omnidirectional Mobile Robot Navigation in Virtual Environments

    2/15

    2

heavy, difficult to move around, and occupy unnecessary space when not in use. Another drawback is that environments have to be reconstructed or modified to meet new static architectures, which can be costly and time consuming. Furthermore, it is highly likely that, in the course of training or testing new algorithms, the robots themselves will suffer physical damage as they erroneously and frequently collide with obstacles or walls.

To overcome the above disadvantages and to handle dynamically changing environments (such as factory floors or rescue mission scenarios), the VE was developed. The approach does not suffer from any of the above disadvantages, and it has the following further advantages:

o  It has a high degree of portability and can be implemented in any open space.

o  It allows for unlimited in-depth algorithm performance assessment, and can be effectively and efficiently modified to architectures with various degrees of complexity, including those mimicking real rescue missions.

o  It offers real-time merging with robot performance tracking.

o  It permits 'on-the-fly' changes of navigation algorithms in fixed, variable or randomly generated (static or dynamic) environments.

To inspect this concept and validate the approach, only algorithms that do not require prior knowledge of the environment were implemented. To demonstrate the capability of the VE, the superiority of the Trémaux algorithm for robot navigation is compared against basic WF techniques.

II.  THE TRÉMAUX ALGORITHM

    The Trémaux algorithm was invented by Charles Pierre Trémaux (portrayed in Fig. 1), who was a

    French architect, orientalist and photographer, and the author of numerous scientific and

ethnographic publications [18]. The navigation algorithm is an efficient method of finding a way out of a maze by drawing lines on the floor to mark a solving route. While the method guarantees a solution to any maze with well-defined passages, it does not guarantee that the solution is optimal [19], [20].

    Fig. 1: Charles Pierre Trémaux (20 July 1818 - 12 March 1895)

When programmed with the Trémaux algorithm, the robot records its path throughout navigation, identifying possible routes and dead ends. Here, the Robotino robot was used to verify the real-time


hardware-VE concept. Shown in Fig. 2, with its specifications listed in Table 1, the Robotino is mainly used in education, training and research. It is driven by three independent omnidirectional drive units, mounted 120° apart. Each unit consists of a DC motor, a 16:1 gear unit, an all-way roller, a toothed belt and an incremental encoder that compares desired and actual motor speeds. The robot's main chassis houses the rechargeable batteries, motors and sensors, with a mounted command bridge that houses the controller, the I/O module and an LCD that displays information and functions allowing the user to navigate and run built-in demonstrations. Other components include:

o  9 built-in infrared distance sensors, placed 40° apart around the body frame. The sensors have a range of 4 to 30 cm and are mainly used in obstacle avoidance.

o  Individual motor encoders for speed measurement.

o  An anti-collision switching strip around the chassis circumference.

o  Connectivity to computers and other devices via USB, Ethernet or Wi-Fi. The robot can therefore communicate wirelessly with a PC over an IEEE 802.11g WLAN network, functioning as an access point. Thus, a Wi-Fi enabled PC running Microsoft Windows is required to support programming in Visual Basic, the language used in developing the VEGUI.

Fig. 2: The Robotino, developed by FESTO Didactic

TABLE I: ROBOTINO SPECIFICATIONS

Processor: PC 104 processor, compatible with MOPSlcdVE, 300 MHz
Operating System: Linux OS with real-time kernel
RAM: 128 MB SDRAM
Storage Memory: 1 GB Compact Flash card
Diameter/Height: 370 mm / 210 mm
Weight: 11 kg
Connectivity: Wireless LAN; VGA socket; 2 USB ports; Ethernet port
I/O Interface: 8 analogue and digital inputs; 8 digital outputs; 2 actuator relays
Sensors: 9 infrared distance sensors; encoders; proximity sensor
Voltage Supply: 24 VDC, 4.5 A, 2×4 Ah rechargeable batteries
Drive Unit: 3 independent, omnidirectional drive units, 120° apart
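Although the paper does not give the drive kinematics, the relation between a commanded chassis velocity and the three wheel speeds of such a platform is standard. The sketch below illustrates it, assuming wheel rolling directions tangent to the chassis at mounting angles 120° apart; the exact angles and sign conventions of the Robotino firmware may differ.

import math

WHEEL_ANGLES_DEG = (60.0, 180.0, 300.0)  # assumed mounting angles, 120 degrees apart
BODY_RADIUS_M = 0.185                    # half the 370 mm chassis diameter (Table I)

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: chassis velocity (m/s, m/s, rad/s) -> three
    tangential wheel speeds (m/s), one per drive unit."""
    speeds = []
    for angle in WHEEL_ANGLES_DEG:
        a = math.radians(angle)
        # A wheel only drives along its rolling direction; the chassis
        # rotation contributes through the lever arm BODY_RADIUS_M.
        speeds.append(-math.sin(a) * vx + math.cos(a) * vy + BODY_RADIUS_M * omega)
    return speeds

# Example: drive straight ahead (north) at 0.2 m/s without rotating.
print(wheel_speeds(0.0, 0.2, 0.0))

Because the three rolling directions span the plane, any combination of translation and rotation maps to a unique triple of wheel speeds, which is what makes the platform omnidirectional.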


III.  THE VIRTUAL ENVIRONMENT GRAPHIC USER INTERFACE (VEGUI)

The VEGUI is shown in Fig. 3. When executed, it allows the user to construct the VE, communicates continuously with the real hardware, and keeps track of navigation and hardware mapping until a solution is obtained.

Fig. 3: Virtual Environment Graphic User Interface

The VE may be drawn manually, loaded, or randomly generated, together with the initial robot position and the final goal location. Regular wireless communication between the robot and the VE allows the immediate robot surroundings to be revealed gradually, and the robot's movement to be mapped onto the VE. This helps the user to assess the algorithm's behaviour. For the HW to navigate the VE, two main processes take place, one computer based and the other mobile-robot based: the computer discretely reveals the surrounding locations to the robot; the robot decides on the next move and reports its new location to the computer; and the cycle repeats until the goal is reached. This process is explained in more detail in Section IV below.

The GUI is separated into three sections: the settings area (SA), the environment layout (EL) and the information area (IA). The SA is further divided into subsections responsible for VE generation, runtime settings and connectivity. Table 2 lists some of the functions in the SA that are available to the user.

Typical steps that users follow to utilise the VE concept are listed below:

1) If a VE has already been created and saved, the user may click the 'Load' button and navigate to the directory where the VE is saved. Saved data normally includes the starting robot position and the goal location, all of which may be edited manually if necessary.


2) If a new VE is to be created, the user first selects the environment window size, then places the walls (to draw the environment), selects a starting point for the robot and identifies the goal point.

3) Alternatively, the VE may be created randomly (with the 'Randomize' button), and again edited and saved.

It is important to note that in steps 1-3 above, the robot is not aware of any selections or settings.

4) Now that a VE is available, a navigation algorithm may be activated or downloaded by the robot.

    5) A communication method is then selected, depending on the coupled robot.

6) The robot is then connected to the VE.

Once a connection is established, navigation starts and live data is displayed in the information area. The data can include the position, the steps taken and the duration of the process.
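The paper does not specify how a saved VE is stored on disk. Purely as an illustration, a saved mission might be serialised as a small structured file holding the wall cells, the starting robot position and the goal location; all field names below are hypothetical.

import json

# Hypothetical saved-mission structure; cell coordinates follow the VE grid.
mission = {
    "size": [10, 10],                   # VE window size in cells
    "walls": [[5, 6], [5, 7], [6, 7]],  # blocked cells
    "start": {"cell": [1, 1], "facing": "north"},
    "goal": [8, 9],
}

with open("mission.json", "w") as f:
    json.dump(mission, f, indent=2)

# Loading restores the start position and goal, both editable afterwards.
with open("mission.json") as f:
    loaded = json.load(f)
print(loaded["start"], loaded["goal"])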

    Table 2: Program Settings

    IV.  HARDWARE-VIRTUAL ENVIRONMENT NAVIGATION CONCEPT 

For the Robotino to navigate, it needs to know its immediate surroundings (front, left and right), which are discretely revealed as open paths, blocked paths or goal locations. Regardless of the navigation algorithm, it is assumed that (by default) the robot initially faces north. Its surroundings are discretely revealed as Cartesian states. Depending on the on-board navigation algorithm, the robot decides on and executes its next move before updating the VE with its new position. This repeats until a solution is reached, as illustrated in the flowchart of Fig. 4. The explicit stages of HW-awareness and VE-updating are shown in Fig. 5 and explained as follows:

1. At time T0:

a) Instant t1: the robot is assumed to be facing north, and it is blind to its surroundings.

b) Instant t2: the VE reveals to the robot its immediate surroundings; in this case, blocks on both sides and an empty space ahead (north).

c) Instant t3: the robot makes its own decision to move forward by one step, whereupon it becomes blind again to the remaining environment.

d) Instant t4: the robot updates its location onto the VE.

2. At time T1: steps a~d above repeat for instants t1~t4.

3. At time T2 onwards: the above repeats until the goal is reached; otherwise, the algorithm is considered to have failed.

    Fig. 4: Hardware-VE Navigation Concept Flowchart
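To make the cycle of Fig. 4 concrete, the sketch below renders it as a minimal reveal-decide-report loop in Python. It is a schematic of the concept only, not the authors' Visual Basic implementation; the grid, the walls and the trivial decision policy are invented for illustration.

WALLS = {(1, 2), (3, 2)}      # hypothetical blocked cells
GOAL = (2, 4)                 # hypothetical goal cell
HEADINGS = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
ORDER = "NESW"                # compass headings in clockwise order

def reveal(pos, heading):
    """VE side: report whether the front, left and right neighbours are open."""
    i = ORDER.index(heading)
    front, right, left = heading, ORDER[(i + 1) % 4], ORDER[(i - 1) % 4]
    def is_open(h):
        dx, dy = HEADINGS[h]
        return (pos[0] + dx, pos[1] + dy) not in WALLS
    return is_open(front), is_open(left), is_open(right)

def decide_move(pos, heading, front, left, right):
    """Robot side: trivial placeholder policy (go straight, else turn right)."""
    if not front:
        return pos, ORDER[(ORDER.index(heading) + 1) % 4]  # turn in place
    dx, dy = HEADINGS[heading]
    return (pos[0] + dx, pos[1] + dy), heading             # advance one cell

pos, heading = (2, 0), "N"    # the robot initially faces north
for step in range(50):
    if pos == GOAL:
        print("goal reached in", step, "moves")
        break
    front, left, right = reveal(pos, heading)   # VE reveals the surroundings
    pos, heading = decide_move(pos, heading, front, left, right)
    # ...the robot would now report `pos` back to the VE for route mapping

Only decide_move would change between navigation algorithms; the reveal/report protocol stays the same, which is what allows the 'on-the-fly' swapping of algorithms mentioned earlier.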

To further explain navigation, a 3D virtual reality view of the robot's vision was added to the GUI, as shown in Fig. 6a. With reference to the acronyms in Table 3 and Fig. 6, navigation can be explained further. The figure shows the instant when the robot is at |5,6. The VE reveals to the robot its immediate surroundings |6,6, |4,6 and |5,7; the robot is blind to the remaining environment. If the robot is programmed with the Left-WF algorithm, it will naturally proceed forward to |5,7, making it its current location. The robot then reports its location back to the VE, and in turn the VE reveals the new immediate surroundings, that is, |6,7, |4,7 and |5,8. Once this information is received, the robot decides on its next move, which is to move forward to |5,8. This process is repeated until the target is reached, as illustrated in the sequence (a~j) of Fig. 7. Each illustration in the set is made up of the following three parts: left, a photograph of the Robotino navigating the floor of the empty space of the Mechatronics Laboratory; centre, the VE with the robot's movements tracked; and right, the robot's virtual vision in a 3D representation.
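For completeness, the Left-WF decision used in this example can be sketched as a simple priority rule over the three revealed cells. This is our own rendering of the rule, not the robot's embedded routine.

ORDER = "NESW"  # compass headings in clockwise order

def left_wall_follow(heading, front_open, left_open, right_open):
    """Left-WF priority rule: keep the wall on the left-hand side.
    Returns the new heading; the robot then advances one cell."""
    i = ORDER.index(heading)
    if left_open:
        return ORDER[(i - 1) % 4]   # wall ended on the left: turn left
    if front_open:
        return heading              # wall continues alongside: go straight
    if right_open:
        return ORDER[(i + 1) % 4]   # inside corner: turn right along the wall
    return ORDER[(i + 2) % 4]       # dead end: turn around

# At |5,6 facing north, with the left cell blocked and the front cell open,
# the rule keeps the heading north and the robot advances to |5,7.
print(left_wall_follow("N", front_open=True, left_open=False, right_open=True))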

Table 3: Navigation Acronyms (x and y are the robot coordinates)

|x,y  Robot initial position
|x,y  Current position
|x,y  Cell facing the current position
|x,y  Cell to the left of the current position
|x,y  Cell to the right of the current position
|x,y  Target position
|x,y  Left cell is blocked
|x,y  Left cell is open

V.  THE TRÉMAUX ALGORITHM IMPLEMENTATION

As explained earlier, to demonstrate the HW-VE concept, WF was compared to the Trémaux algorithm, with the aim that the concept would verify the superiority of the latter, as well as reveal any other associated observations.

In this section the Trémaux algorithm is explained. As shown in the flowchart of Fig. 8, the robot records its paths throughout its navigation routine as unmarked, marked once, or marked twice. Marked-twice paths are bidirectional and often lead to dead ends, so they are never trodden a third time. In its navigation, the robot prioritises marked-once paths. If a solution is found, the route consists of the marked-once paths; otherwise, all paths end up marked twice, indicating that no solution was found.
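One natural way to hold these marks is a traversal counter per path, keyed on the pair of cells the path connects. The following bookkeeping sketch is illustrative only, not the authors' implementation.

from collections import defaultdict

marks = defaultdict(int)   # (cell_a, cell_b) -> number of times the path was taken

def edge(a, b):
    """Normalise the pair so a path counts the same in both directions."""
    return (min(a, b), max(a, b))

def traverse(a, b):
    assert marks[edge(a, b)] < 2, "a twice-marked path is never re-entered"
    marks[edge(a, b)] += 1

traverse((5, 6), (5, 7))    # outward leg: path now marked once
traverse((5, 7), (5, 6))    # return leg: path now marked twice (dead branch)
print(marks[edge((5, 6), (5, 7))])   # -> 2, regardless of direction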

To help develop the VE concept and track the robot's movements, the robot was programmed to instruct the VE to display paths trodden once with thick blue lines, and those trodden twice with thick dotted red lines, as shown in the example of Fig. 9. Navigation starts with the Robotino travelling forward until it faces a wall, where it randomly chooses to make a ±90° turn. If, however, the robot is trapped at a dead end, it makes a 180° turn. This process continues until the robot arrives at a junction, where there is more than one plausible path to take. The robot's decision on which path to take is based on the number of times the path has been marked:

o  If a path is unmarked, the robot takes that path without hesitation.

o  If the robot meets a path that is already marked once, it checks the status of the junction: if the junction is marked, the robot takes the path rather than the junction; if the junction is marked fewer than three times, the robot explores the paths at that junction. A junction marked three times indicates that it leads to a dead end.

Once a path is chosen, unmarked paths are marked once, and marked-once paths are marked again. Similarly, when a path is marked twice, the robot also checks the status of the junction. If the junction is marked three times, the robot turns around to check the path behind it. If that path is also marked twice, then the robot has failed to find the target.

This process continues until the goal location is reached. At that point, a marked-once path may be traced back to the starting point. This path represents the solution; however, if there is more than one solution, this path may not be the shortest. A condensed sketch of the junction decision follows.
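Assuming the mark bookkeeping sketched above (the edge() helper and the marks counter), the junction rules can be condensed as shown below; the function name and signature are ours, and the logic is a rendering of the flowchart in Fig. 8.

import random

def choose_at_junction(current, neighbours, marks):
    """Trémaux junction rule: prefer unmarked exits, then once-marked exits.
    `neighbours` are the open cells adjacent to the junction; `marks` and
    edge() come from the bookkeeping sketch above."""
    unmarked = [n for n in neighbours if marks[edge(current, n)] == 0]
    if unmarked:
        return random.choice(unmarked)   # virgin path: take it without hesitation
    once = [n for n in neighbours if marks[edge(current, n)] == 1]
    if once:
        return random.choice(once)       # fall back on a once-marked path
    return None                          # all exits marked twice: no solution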

    The above may be further demonstrated in the sequence shown in Fig. 9:

    o  Initially, the robot is facing north.

o  Since the first four facing cells are all open, the robot moves north until it reaches point (1).

o  At (1) the robot makes a random decision, turns east, and moves two cells until it reaches (2).

    o  At (2) the robot has to turn south, as the north cell is blocked and the west cell has already been

    stepped on once. The robot heads south till it reaches (3).

    o  At (3) it makes a random decision to face east again. It then takes one step and has to head north,

    as it is the only path that was not stepped on. It continues north until it reaches (4).

o  At (4) the robot again chooses to head east, because the west and south cells have already been stepped on once. The robot then takes one step before making a random decision to head north, reaching (5).

o  At (5) the Robotino has no choice except to loop once before heading back south. It is here that the twice-stepping takes place (shown as a dotted route). This continues until (6) is reached, and the process repeats until the goal location is reached.

o  As the robot progresses from its current position (6) towards (3), it remembers that these are double-trodden paths and therefore instructs the VE to dot the route in a thick red line.


    Fig. 5: Hardware awareness of its position and how it updates the VE on its new location

    Fig. 6: Hardware-VE Navigation Concept Illustration


    Fig. 7: Hardware-VE navigation and the Robot 3D Vision


Fig. 8: Flowcharts of the Trémaux algorithm

    Fig. 9: A virtual environment

VI.  RESULTS AND COMPARISONS

The VE was used to test many scenarios to demonstrate the advantages and limitations of navigation algorithms. Initial tests proved that limitations of the simple WF algorithms may be overcome by the Trémaux algorithm. This is clearly demonstrated in Fig. 10a, where the robot has to loop around a wall island; Fig. 10b shows how this limitation is overcome with the Trémaux algorithm.

Many other environments were also considered: simple environments such as the one shown in Fig. 11, and more complicated ones such as those of Figures 12 and 13. While Fig. 10 showed that the Trémaux algorithm guaranteed a solution regardless of the initial robot position, Fig. 11 shows that it also guaranteed a solution regardless of the goal location and the initial direction the robot faces. Fig. 11 also clearly demonstrates the independent decision-making ability of the robot, where two different routes (a and b) were found for the same problem. Figures 12 and 13 show that although the Trémaux algorithm guaranteed solutions, these solutions are by no means the shortest.

    VII.  CONCLUSION 

This paper presented the concept of hardware navigation within a virtual environment. To develop the concept, a GUI was built to create a virtual environment that a robot is able to navigate. It was demonstrated that the approach provided real-time VE-robot merging and allowed tests to be conducted in randomly created environments. The concept thus offers important and efficient capabilities for algorithm development and navigation. It is, therefore, encouraging to implement this approach in assessing the performance of more advanced navigation algorithms.

    Fig. 10: (a) WF and (b) Trémaux's algorithm


    Fig. 11: Robot Navigation with: (a) WF, and (b) Trémaux's algorithm

    Fig. 12: Robot Navigation with: (a) WF, and (b) Trémaux's algorithm

    Fig. 13: Robot Navigation with: (a) WF, and (b) Trémaux's algorithm

REFERENCES

[1] O. Motlagh, T. Hong and N. Ismail, "Development of a new minimum avoidance system for a behavior-based mobile robot," Fuzzy Sets and Systems, pp. 1929-1946, 2009.

[2] K. Goris, Autonomous Mobile Robot Mechanical Design, Brussels: Vrije Universiteit Brussel, 2006.

[3] Y. Owen and G. Andrew, Dynamical Wall Following for a Wheeled Robot, Baltimore: University of Baltimore, 2006.

    [4] Y. Koren and J. Borenstein, “Potential field methods and their inherent limitations for mobile

    robot navigation,” in Proc. IEEE Conf. on Robotics and Automation, 1991.

    [5] M. Massari, G. Giardini and F. Bernelli-Zazzera, “Autonomous navigation system for planetary

    exploration rover based on artificial potential fields,” 2001. [Online]. Available:

    http://naca.central.cranfield.ac.uk/dcsss/2004/B24_navigationb.pdf. [Accessed December 2014].


[6] W. L. Xu and S. K. Tso, "Sensor-based fuzzy reactive navigation for a mobile robot through local target switching," IEEE Trans. Systems, Man, Cybernet., Part C: Appl. Rev., vol. 29, no. 3, pp. 451-459, 1999.

[7] X. Yang, M. Moallem and R. V. Patel, "A layered goal-oriented fuzzy motion planning strategy for mobile robot navigation," IEEE Trans. Systems, Man, Cybernet., Part B: Cybernet., vol. 35, no. 6, pp. 1214-1224, 2005.

[8] K. M. Krishna and P. K. Kalra, "Perception and remembrance of the environment during real-time navigation of a mobile robot," Robotics and Autonomous Systems, vol. 37, pp. 25-51, 2001.

    [9] A. V. Kelarev, Graph Algebras and Automata, New York: Marcel Dekker, 2003.

    [10] H. Abelson and A. A. diSessa, Turtle Geometry, Cambridge: MIT Press, 1980.

[11] M. A. Batalin and G. S. Sukhatme, "Efficient exploration without localization," in Proc. IEEE Internat. Conf. on Robotics and Automation, 2003.

[12] M. Wang and J. N. K. Liu, "Fuzzy logic based robot path planning in unknown environments," in Proc. 2005 Internat. Conf. on Machine Learning and Cybernetics, 2005.

[13] A. Zhu and S. X. Yang, "A fuzzy logic approach to reactive navigation of behavior-based mobile robots," in Proc. IEEE Internat. Conf. on Robotics and Automation, 2004.

[14] J. Saylor, "Neural networks for decision tree searches," in Proc. IEEE/EMBS-9 Conference, 1987.

[15] S. X. Yang and M. Meng, "An efficient neural network approach to dynamic robot motion planning," Neural Networks, vol. 13, pp. 143-148, 2000.

[16] R. Glasius, A. Komoda and S. Gielen, "Neural network dynamics for path planning and obstacle avoidance," Neural Networks, vol. 8, pp. 125-133, 1995.

[17] A. B. Nayfeh, "Cellular automata for solving mazes," Dr. Dobb's Journal, vol. 18, no. 2, 1993.

[18] J. S. Wilkins and G. J. Nelson, Trémaux on Species: A Theory of Allopatric Speciation (and Punctuated Equilibrium) before Wagner, Archives of Philosophy of Science, Pittsburgh: University of Pittsburgh, 2008.

    [19] J. Pelletier, “Public conference,” in Thibert in Academie de Macon, Burgundy, 2010.

[20] É. Lucas, Récréations Mathématiques, Vol. I, 1882.