S. Omatu et al. (Eds.): Distrib. Computing & Artificial Intelligence, AISC 217, pp. 625–632. DOI: 10.1007/978-3-319-00551-5_74 © Springer International Publishing Switzerland 2013

A Practical Mobile Robot Agent Implementation Based on a Google Android Smartphone

Dani Martínez, Javier Moreno, Davinia Font, Marcel Tresanchez, Tomàs Pallejà, Mercè Teixidó, and Jordi Palacín

Computer Science and Industrial Engineering Department, University of Lleida, 25001 Lleida, Spain {dmartinez,jmoreno,dfont,mtresanchez,tpalleja, mteixido,palacin}@diei.udl.cat

Abstract. This paper proposes a practical methodology to implement a mobile robot agent based on a Google Android Smartphone. The main computational unit of the robot agent is a Smartphone connected through USB to a control motor board that drives two motors and one stick. The agent program structure is implemented using multi-threading methods with shared memory instances. The agent uses the Smartphone camera to obtain images and to apply image processing algorithms in order to obtain profitable information of its environment. Moreover, the robot can use the sensors embedded in the Smartphone to gather more information of the environment. This paper describes the methodology used and the advantages of developing a robot agent based on a Smartphone.

Keywords: Mobile robot, robot agent, Google Android Smartphone.

1 Introduction

Agents are considered a reference for many robotic systems and applications. An agent can be defined as an autonomous system that is capable of sensing the environment, reacting to it, developing collaborative tasks, and taking the initiative to complete a task [1]. Accordingly, agent objectives are in tight correlation with the objectives of artificial intelligence processes and algorithms. Usually, the implementation of an agent application requires several complex and heavy algorithms to make decisions in order to achieve its objective. For example, computer vision methods combined with other sensors usually consume most of the computational resources available to the agent in order to obtain information about the environment. All these embedded features can considerably raise the economic cost of a robot and thus render the project non-viable. In this work we propose the

development of an agent system by using the computational power, the sensors and actuators, and the communication capabilities of an Android Smartphone.

The popularity and computational power of Smartphones have increased significantly in recent years. This evolution has fostered many research initiatives [2,3] focused on the relatively new Google Android operating system for mobile devices. This user-friendly operating system is designed for low power consumption while maintaining constant connectivity. Android-powered devices give access to their integrated peripherals such as cameras, wireless connectivity modules, embedded sensors, and the touch screen. Such devices usually offer a high memory capacity and high computational power, currently delivered by powerful multi-core processors. The computational resources offered by such devices can be directly applied to the development of new applications and also to drive small mobile robots. The main advantage of a Smartphone-based mobile robot is that the vision sense can be based on the onboard cameras of the Smartphone, without wasting time on connections and procedures to transfer the image from an external camera to the Smartphone. Thus, the computational power of the Smartphone can be focused on the agent implementation required to drive the mobile robot.

This paper proposes a methodology to implement robot agents using the onboard Smartphone resources (Figure 1) accessible through the Google Android Software Development Kit (SDK) [4]. The motivation of this research is the evaluation of the performance of a mobile robot agent based on a Google Android Smartphone. In this paper, the objective of the developed mobile robots is to play a game inspired by soccer competitions, without following any specific standard [5] and without a centralized command host. The soccer game requires very specific and well-developed agents, and it also enables the future development of collaborative agent methodologies [6] and strategies for teams of several mobile robots.

Fig. 1 Soccer robot with a Google Android Smartphone as the main computational unit

2 Materials and Methods

The materials used in this paper are the set of physical components that form the soccer robot agent: the Smartphone, the mobile robot structure, and its internal control devices. The method used in this paper is mainly the vision sense required to gather environment information in order to play the proposed game.

2.1 Central Processing Unit

The central processing unit used in this paper is an HTC Sensation Smartphone powered by a dual-core 1.2 GHz processor and 768 MB of RAM, running Android 4.0.3. The Smartphone also integrates WiFi and Bluetooth connectivity modules and other embedded sensors such as GPS, an ambient light sensor, a digital compass, a three-axis accelerometer, a gyroscope, a multi-touch capacitive touch screen, a proximity sensor, a microphone, and frontal and rear cameras. The Android SDK [4] provides an easy way to manage such sensors and also to implement multi-threading applications.

2.2 Mobile Robot

The soccer mobile robot is composed of an external case made of fused ABS plastic, which is also very resistant. The plastic case is colored red or blue to distinguish the soccer teams and to allow the different agents to differentiate between them, a feature that will be especially needed in future team implementations. This external case is designed to support the motion of the mobile robot and also a kicking mechanism with a motor to push a small ball. The Smartphone is installed horizontally in a support on the top of the mobile robot case to ensure an adequate angular view for its rear camera. Figures 1 and 2 show the external aspect of the complete mobile robot based on a Google Android Smartphone.

The robot mobility is accomplished with a motor control board connected to the Smartphone through a USB interface. The board controls two small DC motors

Fig. 2 Soccer robot agent in its playfield environment

and has a microcontroller which implements the methods required to establish communication with the Smartphone. In addition, the board is powered by an auxiliary 5,000 mAh battery which also powers and charges the Smartphone.

2.3 Vision Sense

The application of the soccer robot agent uses image processing algorithms to process the image from the rear camera of the Smartphone, which is pointed at the playfield. The vision sense has to generate profitable information about the environment from the acquired images, which will be used by the agent to play the game. The methodology used to extract this information is explained in [7]. The acquired images are converted to the H and V layers of the color space to perform color indexing with a predefined look-up table. Once a new image is acquired, its pixels are indexed to label the objects which represent the game environment elements, such as the ball, the goals, the field, the lines, and the teammates and adversaries. Then, the different objects (or layers) of the classified image are analyzed to extract profitable information for the agent, such as the relative orientation and distance to the ball, to the goals, to the lines, and to the other mobile robots. Figure 2 shows the screen of the Smartphone of the mobile robot, with windows showing the acquired image, the indexed classified image where each object has an identifying color, and the estimated position of the mobile robot in the playfield.
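The look-up-table indexing described above can be sketched in plain Java. This is a minimal illustration, not the authors' implementation: the class name, the hue binning, and the label assignments are assumptions for the example.

```java
import java.util.Arrays;

// Minimal sketch of look-up-table color indexing (labels and bins are illustrative).
public class ColorIndexer {
    public static final int UNKNOWN = 0, BALL = 1, FIELD = 2;

    // LUT over quantized hue (36 bins of 10 degrees each).
    static final int[] HUE_LUT = new int[36];
    static {
        Arrays.fill(HUE_LUT, UNKNOWN);
        for (int h = 0; h <= 3; h++) HUE_LUT[h] = BALL;    // reddish hues -> ball
        for (int h = 9; h <= 15; h++) HUE_LUT[h] = FIELD;  // greenish hues -> field
    }

    // Convert an RGB pixel to hue in degrees (0..360).
    static double hue(int r, int g, int b) {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.max(rf, Math.max(gf, bf));
        double min = Math.min(rf, Math.min(gf, bf));
        double d = max - min;
        if (d == 0) return 0;
        double h;
        if (max == rf) h = ((gf - bf) / d) % 6;
        else if (max == gf) h = (bf - rf) / d + 2;
        else h = (rf - gf) / d + 4;
        return (h * 60 + 360) % 360;
    }

    // Index one pixel into an object label via the LUT.
    public static int indexPixel(int r, int g, int b) {
        return HUE_LUT[(int) (hue(r, g, b) / 10) % 36];
    }
}
```

Indexing a pixel is then a single table lookup per pixel, which keeps the per-frame classification cost low on a mobile processor.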

3 Implementation

The software implementation of the agent on the Google Android Smartphone has been written in Java and is executed in the Dalvik virtual machine, which can run several instances simultaneously with optimized process management and memory usage. Such applications can use several context instances, called activities, which manage the lifecycle of an application. An Android activity is executed in a main execution thread which can access and manage user interface resources such as buttons, images, or touch input data.

The programming architecture proposed in this paper to implement the agent requires several simultaneous execution threads. Android-level programming enables the initialization and start of different thread instances and services from the main thread. Figure 3 shows our methodology, which is structured as a distribution of distinct tasks processed in separate threads that use shared memory for data exchange.

3.1 Main Thread

The main thread has lifecycle activity methods such as onCreate() and onDestroy() that are called when the activity starts, when the activity goes to

the background to focus the Smartphone resources on a newly created activity, or when it is destroyed to free memory and resources. When the activity is initialized, the other threads and services that define the agent are also created and registered. The main thread first creates and configures the camera service, which is associated with a surface class and can be configured with different parameters such as camera resolution, frame rate, and several image filters. However, the final effect of these parameters on the camera depends on the Smartphone used, and not all Android devices support all parameter configurations.

Fig. 3 Threading and processes structure of the Android activity

Next, the robot agent thread is also initialized and started. When all execution threads of the agent are fully operative, the main thread manages a handler which listens for the asynchronous messages sent between the threads to establish inter-thread communication. The handler is thus designed to copy the profitable data processed by the vision sense thread into the global shared memory of the activity; these global shared memory variables are updated whenever new data has been processed by the vision sense thread. Moreover, this thread also initializes the USB interface in order to send motion orders to the motor control board, and registers the services needed to obtain data from the available embedded sensors.
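The shared-memory exchange between the vision sense thread and the agent thread can be sketched in plain Java. The real implementation relies on an Android Handler for the message passing; the class and field names below are illustrative assumptions, with plain synchronization standing in for the handler mechanism.

```java
// Plain-Java sketch of the shared memory updated by the vision sense thread
// and read by the mobile agent thread (names are illustrative, not the authors').
public class SharedVisionData {
    private double ballOrientation;   // degrees, relative to the robot heading
    private double ballDistance;      // estimated relative distance
    private boolean ballVisible;
    private long frameId;             // counts processed frames

    // Called by the vision sense thread after each processed frame.
    public synchronized void publish(double orient, double dist, boolean visible) {
        ballOrientation = orient;
        ballDistance = dist;
        ballVisible = visible;
        frameId++;
    }

    // Called by the mobile agent thread; returns a consistent snapshot
    // so the agent never mixes values from two different frames.
    public synchronized double[] snapshot() {
        return new double[] { ballOrientation, ballDistance, ballVisible ? 1 : 0, frameId };
    }
}
```

Returning a snapshot rather than exposing the fields directly keeps each query atomic, which matches the paper's goal of letting each thread keep its own execution rhythm.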

3.2 Vision Sense Thread

Android provides developers access to the services offered by the integrated cameras of the device. In order to obtain an image from the camera,

the camera service must be initialized and configured; then, a callback must be implemented that invokes a method each time the hardware delivers a frame to the application. The callback method onPreviewFrame(…) is internally executed in a separate thread in order to avoid blocking the main thread with the consequent performance loss. This separate thread is the vision sense thread of the mobile robot.

The color segmentation and indexing methods used in this thread are explained in [7]. After identifying and labeling the game elements, all objects in the image are analyzed to extract profitable information for the robot agent. Such information depends on the object analyzed; for example, useful information about the ball includes its centroid, relative diameter, relative orientation, and relative distance to the mobile robot. Each object or element of the soccer environment has its own object-oriented instance containing all the important data and methods for that object. When all object data has been processed, the main thread is informed through the handler to copy the object instances into the global shared memory instances of the activity.
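A per-object instance of the kind described above might be sketched as follows. This is a hedged illustration: the class name, fields, and the assumed horizontal field of view are not taken from the paper.

```java
import java.util.List;

// Sketch of a per-object instance holding the "profitable information"
// extracted from the labeled image (names and formulas are illustrative).
public class GameObject {
    public final double centroidX, centroidY;  // mean pixel position
    public final double diameterPx;            // horizontal extent in pixels

    // Build from the pixel coordinates labeled as this object ({x, y} pairs).
    public GameObject(List<int[]> pixels) {
        double sx = 0, sy = 0;
        int minX = Integer.MAX_VALUE, maxX = Integer.MIN_VALUE;
        for (int[] p : pixels) {
            sx += p[0];
            sy += p[1];
            minX = Math.min(minX, p[0]);
            maxX = Math.max(maxX, p[0]);
        }
        centroidX = sx / pixels.size();
        centroidY = sy / pixels.size();
        diameterPx = maxX - minX + 1;
    }

    // Relative orientation in degrees: horizontal offset of the centroid from
    // the image center, scaled by an assumed horizontal field of view.
    public double relativeOrientation(int imageWidth, double fovDegrees) {
        return (centroidX - imageWidth / 2.0) * fovDegrees / imageWidth;
    }
}
```

For example, a ball centered left of the image midline yields a negative orientation, telling the agent to rotate left.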

3.3 Mobile Agent Thread

The decision making of the robot agent is composed of several functions and conditional branches. At this moment these functions are designed as automaton state-transition methods to unify the actions and to simplify this initial implementation. However, the multi-threading architecture and the internal function procedures have been designed to expand such capabilities and create complex robot agents. The mobile agent thread has a main loop which runs constantly and contains all the movement functions and conditions that define the behavior of the robot agent. This thread also manages the delivery of motion orders to the motor control board through the USB interface.
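The automaton structure of the decision making can be sketched as a small state machine; the state names follow the behavior functions described below, while the transition rules (fall back to searching on any failure) are an assumption for illustration.

```java
// Sketch of the agent's automaton: each state corresponds to one blocking
// behavior function; transitions follow the success/failure of that behavior.
public class AgentAutomaton {
    public enum State { SEARCH_BALL, AIM_BALL, APPROACH_BALL, KICK_BALL }

    public State state = State.SEARCH_BALL;

    // Advance one transition given the result of the current behavior.
    public State step(boolean succeeded) {
        switch (state) {
            case SEARCH_BALL:   state = succeeded ? State.AIM_BALL : State.SEARCH_BALL; break;
            case AIM_BALL:      state = succeeded ? State.APPROACH_BALL : State.SEARCH_BALL; break;
            case APPROACH_BALL: state = succeeded ? State.KICK_BALL : State.SEARCH_BALL; break;
            case KICK_BALL:     state = State.SEARCH_BALL; break;  // always resume searching
        }
        return state;
    }
}
```

Keeping the transitions in one place makes it straightforward to later extend the automaton with team-play states without touching the behavior functions themselves.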

An example of a basic function implemented in the mobile agent thread is the “search_ball” procedure, which retains control of the mobile robot until the ball has been detected in the image acquired by the camera of the Smartphone. Another example is the “approach_ball” procedure, which retains control of the mobile robot until it touches the ball or until the ball leaves the image, for example, when kicked by another player. Each method returns true or false depending on the result of the robot movement. Figure 4 shows the code of a loop in the thread that performs a simple task:

The functions of this thread are considered blocking methods until the objective of the function is completed or interrupted by external causes. Such functions are structured as methods in which a determined action or objective is carried out. However, these functions require environmental information to achieve their objectives and have to perform data queries on the global shared memory instances in order to obtain the profitable data processed by the vision sense thread.

while (!thread_stop) {
    if (motors.connected) {
        found = search_ball();
        if (!found) continue;
        aimed = aim_ball();
        if (!aimed) continue;
        caught = approach_ball();
        if (!caught) continue;
        kick_ball();
        wait(2000);
    }
}

Fig. 4 Example code of a simple loop in which the robot searches for the ball, approaches it, and kicks it
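One of these blocking behaviors, such as search_ball, could be sketched as follows. This is an illustrative reconstruction, not the authors' code: the interfaces, the timeout, and the rotate-in-place strategy are assumptions for the example.

```java
// Hedged sketch of a blocking behavior: the method keeps issuing rotate
// commands until the vision data reports the ball, or a timeout expires.
// The Vision and Motors interfaces are illustrative stand-ins.
public class SearchBehavior {
    public interface Vision { boolean ballVisible(); }
    public interface Motors { void rotateInPlace(); void stop(); }

    // Returns true when the ball enters the camera image, false on timeout.
    public static boolean searchBall(Vision vision, Motors motors, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (vision.ballVisible()) {
                motors.stop();
                return true;
            }
            motors.rotateInPlace();  // keep turning until the ball appears
        }
        motors.stop();
        return false;
    }
}
```

The boolean result feeds directly into loops like the one in Figure 4, where a false return restarts the search.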

Many agent functions generate relative displacement orders for the robot which, in certain cases, use the orientation sensors embedded in the Smartphone. The mobile robot can then execute certain tasks, such as rotating a given number of degrees, based only on the orientation sensor or, alternatively, on the information from the encoders in the wheels of the mobile robot. All motion functions can be interrupted by certain defined situations, such as detecting a collision or a whistle sound (used to start and to end the game). The detection of collisions is performed by analyzing the information from the accelerometer of the Smartphone [8]. The detection of a starting whistle requires a more complex approach, because it requires implementing a media recorder to capture the sound and analyze the amplitude and frequency of the captured signal by using the MediaRecorder class of the Android SDK. For example, this implementation can capture the amplitude of the ambient sound and apply an intensity threshold to detect a loud sound signal. The agent of the mobile robot is always aware of the results of these detections in order to adapt its strategy or to control the evolution of the game.
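The amplitude-thresholding idea for whistle detection can be sketched in plain Java. On Android the samples would come from MediaRecorder.getMaxAmplitude(); here the detector just consumes amplitude values, and the threshold and persistence parameters are illustrative assumptions.

```java
// Sketch of whistle detection by intensity thresholding: a whistle is
// reported once the amplitude stays above a threshold for several
// consecutive samples (parameter values are illustrative).
public class WhistleDetector {
    private final int threshold;       // amplitude level counted as "loud"
    private final int minConsecutive;  // samples needed to confirm a whistle
    private int run = 0;               // current streak of loud samples

    public WhistleDetector(int threshold, int minConsecutive) {
        this.threshold = threshold;
        this.minConsecutive = minConsecutive;
    }

    // Feed one amplitude sample; returns true once a loud burst has
    // persisted long enough to count as a whistle.
    public boolean sample(int amplitude) {
        run = (amplitude >= threshold) ? run + 1 : 0;
        return run >= minConsecutive;
    }
}
```

Requiring several consecutive loud samples filters out short impulsive noises such as ball impacts, which a single-sample threshold would misclassify.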

4 Conclusions and Future Work

This paper presents a practical methodology to implement a mobile robot agent based on a Google Android Smartphone in order to take advantage of its mobile features and its computational power, as well as its integrated sensors and peripherals. The paper explains all the main parts that compose the mobile robot agent, which is based on a multi-threading methodology to manage all the processes required to implement an effective robot agent. The agent uses shared memory instances to asynchronously communicate between the different threads that compose the agent activity, while each thread maintains its own execution rhythm. The main conclusion of this paper is that a Smartphone offers extended possibilities for developing and implementing mobile robot agents.

Future work will focus on the improvement of the vision sense thread to include some spatial memory of the elements and objects detected in the playfield. The inclusion of spatial memory will affect the behavior of the agent and the evolutions of the mobile robot, which are currently based only on what the agent sees in the image. Another future improvement will be the use of wireless communication to interconnect different mobile robot agents in order to develop collaborative team behaviors.

References

1. Wooldridge, M., Jennings, N.: Intelligent Agents: Theory and Practice. The Knowledge Engineering Review 10(2), 115–152 (1995)

2. Paul, K., Kundu, T.K.: Android on Mobile Devices: An Energy Perspective. In: IEEE 10th International Conference on Computer and Information Technology (CIT), pp. 2421–2426. IEEE Press, Bradford (2010)

3. Son, K., Lee, J.: The method of Android application speed up by using NDK. In: 3rd International Conference on Awareness Science and Technology (iCAST), pp. 382–385. IEEE Press, Dalian (2011)

4. Android Developers website, http://developer.android.com (accessed November 2012)

5. Robocup website, http://www.robocup.org/ (accessed November 2012)

6. Veloso, M., Stone, P.: Individual and Collaborative Behaviors in a Team of Homogeneous Robotic Soccer Agents. In: Proceedings of the Third International Conference on Multi-Agent Systems, pp. 309–316. IEEE Computer Society, Paris (1998)

7. Martínez, D., Moreno, J., Tresanchez, M., Font, D., Teixidó, M., Pallejà, T., Palacín, J.: Evaluation of the color-based image segmentation capabilities of a compact mobile robot based on Google Android Smartphone. In: International Conference on Practical Applications of Agents and Multi-Agent Systems, Special Session in Agents and Mobility (accepted, 2013)

8. Yazdi, N., Ayazi, F., Najafi, K.: Micromachined inertial sensors. Proceedings of the IEEE 86(8), 1640–1659 (1998)