
International Conference on Individual and Collective Behaviors in Robotics

2D visual servoing of wheeled mobile robot by neural networks

Rania Zouaoui 1, Hassen Mekki 1,2

1 National School of Engineering of Sousse, University of Sousse, Tunisia
2 Intelligent Control design and Optimization of complex Systems, University of Sfax, Tunisia

rania.zouaoui@hotmail.fr; Hassen.mekki@eniso.rnu.tn

Abstract-In this paper we are interested in 2D visual servoing for a Koala mobile robot using a radial basis function (RBF) neural network (NN). The interaction matrix, which expresses the relationship between the camera motion and the consequent changes of the visual features, contains parameters to be estimated (the depth) and requires a calibration phase of the camera. Moreover, the model of the robot can contain uncertainties engendered by movement with sliding. An online identification using a NN is proposed to overcome these problems. The RBF NN is used to estimate the block formed by the interaction matrix and the inverse model of the robot. The considered images are described by objects given by four points. Since the number of variables of the estimated function is large, an excessive number of RBFs may be needed. As a remedy, we use a new approach that consists in considering that a single point is sufficient to solve the problem of 2D visual servoing of the mobile robot.

Key words: Visual servoing, Koala mobile robot, neural networks.

I. INTRODUCTION

Visual servoing is an effective control strategy for driving the movements of a robotic system using visual information from one or multiple cameras, with the vision sensor embedded in the system or, more generally, placed outside it. There are different approaches depending on the type of feedback: 2D visual servoing, where the feedback information is defined in the image plane [1],[2]; 3D visual servoing, based on an image used to reconstruct 3D information such as the pose of the robotic system; and hybrid schemes, which combine 2D and 3D information. Another point of differentiation is the type of visual primitives used: points, lines, and moments. In this study, we consider only visual primitives of type point. Visual servoing controls the movements of a robot using image information, relying on the computation of an error between a current and a desired pose. This feedback loop attempts to express the relationship between the space of the visual features and the control of the robot. This relationship is complex, nonlinear, and dependent on several uncertain parameters of the servo chain, such as the intrinsic and/or extrinsic camera parameters [3],[4], the pose of the camera [5], and the interaction matrix [6]. Starting from the fact that neural networks have a very high capacity to learn and to estimate nonlinear functions, many researchers have tried to design visual servoing systems based on this technique to solve the various problems often encountered in classical visual servoing [7],[8],[3]. The various works on this neural control have effectively demonstrated the ability of a network to perform visual positioning with a level of precision similar to that obtained by conventional analytical techniques, while also determining the uncertain parameters of the control chain. This paper is organized as follows: in section II, visual servoing is introduced; in section III, visual servoing by neural networks is presented; the experimental results are given in section IV.

II. VISUAL SERVOING 2D

The basic idea of 2D visual servoing is that the feedback is defined directly in the image, in terms of image features, as given in figure 1:

Figure 1: 2D visual servoing (visual controller, robot + camera, image feature extraction, visual feedback). S(t): the current image features. S*(t): the desired image features.


The 2D visual servoing is based on the relationship between the time variation of the visual information S and the camera velocity V [9],[10]. This relationship is expressed by equation (1):

$$\dot{S} = L_s\, V \quad (1)$$


where Ls is the interaction matrix, also called the image Jacobian, that links the time variation of S to the camera instantaneous velocity V. If we note:

$$e(t) = S^*(t) - S(t) \quad (2)$$

then, using (1) and (2), it is possible to obtain the relationship between the camera velocity and the time variation of the error as:

$$\dot{e}(t) = -L_s\, V \quad (3)$$

For a point feature, the interaction matrix Ls (image Jacobian) is defined as:

$$L_s = \begin{bmatrix} -\dfrac{1}{Z} & 0 & \dfrac{x}{Z} & xy & -(1+x^2) & y \\ 0 & -\dfrac{1}{Z} & \dfrac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix}$$

x and y are the metric coordinates of the features in the 2D image space, given by [4],[6]:

$$x = \frac{X}{Z} = \frac{u - c_u}{f\,\alpha_u}, \qquad y = \frac{Y}{Z} = \frac{v - c_v}{f\,\alpha_v}$$

where W = (X, Y, Z) is the 3-D point expressed in the camera frame, m = (u, v) gives the coordinates of the image point expressed in pixel units, and a = (c_u, c_v, f, α_u, α_v) is the set of camera intrinsic parameters: c_u and c_v are the coordinates of the principal point, f is the focal length, and α_u and α_v are the horizontal and vertical scaling factors expressed in pixels/mm [11]. In this case, we take S = (x, y), the image-plane coordinates of the point.
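As a worked example of this conversion, the following Python sketch (an illustration added here, not code from the paper) maps pixel coordinates to metric image coordinates; the image size and principal-point values in the usage line are assumptions:

```python
# Hypothetical helper illustrating the pixel-to-metric conversion above.
def pixel_to_metric(u, v, c_u, c_v, f, alpha_u, alpha_v):
    """Convert pixel coordinates (u, v) to metric image coordinates (x, y)."""
    x = (u - c_u) / (f * alpha_u)
    y = (v - c_v) / (f * alpha_v)
    return x, y

# f = 0.008 m and a scale factor of 77772 pixels/m as in Table II;
# a 640x480 image with the principal point at its center is an assumption.
x, y = pixel_to_metric(400, 300, c_u=320, c_v=240, f=0.008,
                       alpha_u=77772, alpha_v=77772)
```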

In the interaction matrix, the value Z is the depth of the point relative to the camera frame. Therefore, any control scheme that uses this form of the interaction matrix must estimate or approximate the value of Z. Similarly, the camera intrinsic parameters are involved in the computation of x and y. Thus, the interaction matrix Ls cannot be used directly, and an estimation or approximation of Ls must be used. Moreover, the synthesis of the control law is based on exact knowledge of the robot model; the considered type of robot is the mobile robot. To overcome the mentioned problems, concerning the estimation of the interaction matrix parameters and the estimation of the robot model, an adaptive neural network 2D visual servoing is proposed.
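For concreteness, a minimal Python sketch of this interaction matrix for a single point feature is given below (an illustration under the stated point model, not code from the paper; the depth value in the usage line is the one from Table II):

```python
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """Interaction matrix L_s of a 2D point feature.

    x, y : metric coordinates of the point in the image plane
    Z    : (estimated) depth of the point in the camera frame
    Returns the 2x6 matrix linking S_dot to the camera velocity
    screw V = (vx, vy, vz, wx, wy, wz).
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

L_s = interaction_matrix(0.1, -0.05, Z=1.5)  # Z = 1.5 m as in Table II
```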

III. VISUAL SERVOING BY ADAPTIVE NEURAL NETWORKS

The NN is used to estimate the block formed by the interaction matrix and the inverse model of the robot. The considered images are described by objects given by four points. Since the number of variables of the estimated function is large, an excessive number of RBFs may be needed. As a remedy, we proposed a new approach that consists in demonstrating that a single point is sufficient to solve the problem of the 2D visual servoing of a mobile robot [12]. This demonstration is based on the flatness concept. The objective of this contribution is to reduce the number of variables of the estimated function.

Theorem [12]: « The 2D visual servoing of a mobile robot with two degrees of freedom can be ensured by exploiting the coordinates of a single point among the four points describing the images of the object under consideration. »

The proposed visual servoing scheme is shown in figure 2.

Figure 2: Visual servoing by neural networks (adaptation algorithm, robot + camera, feature extraction, visual feedback).

The neural network that we have chosen for our study is of RBF type; it has two inputs (the coordinates of the center of the circle selected from the four circles) and two outputs (the two control laws). The architecture of the neural network used is shown in figure 3:

Figure 3: Architecture of the neural network (input, hidden, and output layers).


The block 'image feature extraction' is accomplished according to the following flow diagram:

Figure 4: Image feature extraction (flow ending with the extracted visual features).

We detect regions (perimeters P, surfaces S, centers) satisfying

$$0.8 < \frac{P^2}{4\pi S} < 1.2$$

If the object is a circle (P = 2πr, S = πr²), then P²/(4πS) = 1. We then extract the image features S = [x1, y1], the center of the circle.
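A possible implementation of this extraction step is sketched below in Python; the use of OpenCV (cv2) is an assumption, since the paper does not name an image-processing library:

```python
import math
import cv2  # assumption: OpenCV; the paper does not specify a library

def extract_circle_centers(binary_image):
    """Detect circular regions with the circularity test P^2 / (4*pi*S)."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        P = cv2.arcLength(contour, True)   # perimeter
        S = cv2.contourArea(contour)       # surface
        if S == 0:
            continue
        circularity = P ** 2 / (4.0 * math.pi * S)  # 1 for an ideal circle
        if 0.8 < circularity < 1.2:
            m = cv2.moments(contour)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```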

In this part, we exploit the property of NNs of being universal nonlinear function estimators to estimate the whole block formed by the interaction matrix and the inverse model of the robot. The output vector U = [u1; u2] is determined in terms of the input vector E = [e1; e2] by the formula:

$$u_i = \sum_{j=1}^{M} w_{ij}\, \phi_j(E), \quad i = 1, 2 \quad (4)$$

where M is the number of hidden-layer neurons, w_ij are the hidden-layer-to-output interconnection weights, and φ is the Gaussian activation function of the hidden-layer neurons, given by:

$$\Phi(x) = \exp\left( -\frac{\| x - c \|^2}{2\sigma^2} \right) \quad (5)$$
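Equations (4) and (5) translate directly into a small forward pass. The following NumPy sketch is an illustration of this structure (the class and variable names are ours, not the paper's); the quantities c and σ are discussed just below:

```python
import numpy as np

class RBFNetwork:
    """Minimal RBF network: inputs (e1, e2), M Gaussian units, outputs (u1, u2)."""

    def __init__(self, centers: np.ndarray, sigma: float):
        self.centers = centers                  # (M, 2) array of function centers c
        self.sigma = sigma                      # spread parameter
        # small random initial weights, as in the experiments
        self.W = 0.01 * np.random.randn(2, centers.shape[0])

    def phi(self, e: np.ndarray) -> np.ndarray:
        """Gaussian activations of the hidden layer, equation (5)."""
        d2 = np.sum((self.centers - e) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def forward(self, e: np.ndarray) -> np.ndarray:
        """Control output U = W phi(e), equation (4)."""
        return self.W @ self.phi(e)
```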

where c is the vector representing the function center and σ is a parameter affecting the spread of the radius. It is worth noting that when the objective function depends directly on the output error e = S* - S (quadratic error), the gradient method cannot be applied directly to adjust the network parameters. Indeed, the output error does not explicitly appear in the expression of the objective function. We therefore consider the following criterion to minimize:

$$J = \frac{1}{2}\, e^2 = \frac{1}{2}\, (S^* - S)^2 \quad (6)$$

To find the weights that minimize this criterion, the following expression is obtained:

$$\frac{\partial J}{\partial W} = (S^* - S)\, \frac{\partial S}{\partial W} \quad (7)$$

However, the term ∂S/∂W cannot be explicitly known, since the output of the neural network is the variable U and not the variable S. An approximation is therefore made to obtain a direct relationship between the outputs of the network and the criterion to minimize.

Let:

$$\frac{\partial S}{\partial W} = \frac{\partial S}{\partial U} \cdot \frac{\partial U}{\partial W} \quad (8)$$

From equation (4) we get:

$$\frac{\partial U}{\partial W} = \phi(e)$$

and a first-order approximation provides:

$$\frac{\partial S}{\partial U} \approx \frac{S_k - S_{k-1}}{U_k - U_{k-1}} \quad (9)$$

The adaptation law is then written in the following form:

$$W(k+1) = W(k) - \delta\, (S^* - S)\, \frac{S_k - S_{k-1}}{U_k - U_{k-1}}\, \phi(e) \quad (10)$$
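Building on the RBFNetwork sketch above, the adaptation law (10) can be written as a single update step; this is an illustrative reading of the equation, where the eps guard and the default learning rate are our additions:

```python
import numpy as np

def update_weights(net, e, S, S_prev, U, U_prev, S_star,
                   delta=0.01,  # assumed learning rate; the paper gives no value
                   eps=1e-6):   # guard against division by zero, our addition
    """One step of adaptation law (10), with dS/dU taken from the
    first-order approximation (9)."""
    dS_dU = (S - S_prev) / (U - U_prev + eps)   # elementwise, per output channel
    phi = net.phi(e)                            # hidden activations, shape (M,)
    err = S_star - S                            # output error, shape (2,)
    # each output row i is updated with its own error and dS/dU terms
    net.W -= delta * np.outer(err * dS_dU, phi)
```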

IV. EXPERIMENTAL RESULTS

In this section, the experimental platform consists of a KOALA robot (see the robot parameters in Table I) equipped with a SONY DFW-VL500 camera in a camera-in-hand configuration (see the camera parameters in Table II), which tries to reach an object described by four circles.

Robot parameter          Value
Number of wheels         6
Volume                   30 x 30 cm
Height                   20 cm
Weight                   4 kg
Speed (maximum)          0.6 m/s
Acceleration (maximum)   0.7 m/s²

Table I: Robot parameters

Vision parameter            Value
Clock-wise rotation angle   π/2 rad
Scale factor                77772 pixels/m
Depth field of view Z       1.5 m
Focal length                0.008 m

Table II: Camera parameters

The hidden layer is composed of 121 neurons; the RBF centers c are chosen to cover the approximation region, with σ = 0.15. The initial values of the weights are randomly chosen and small. Figures 5 and 6 present the evolutions of the control laws.
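One simple way to obtain 121 centers covering the approximation region, matching the setup just described, is an 11 x 11 grid; the error bounds below are assumptions, since the paper does not state the extent of the region:

```python
import numpy as np

# 11 x 11 = 121 grid of RBF centers; the +/-0.15 bounds are assumed
e1_range = np.linspace(-0.15, 0.15, 11)
e2_range = np.linspace(-0.15, 0.15, 11)
centers = np.array([(e1, e2) for e1 in e1_range for e2 in e2_range])

net = RBFNetwork(centers, sigma=0.15)  # sigma = 0.15 as in the experiments
```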


Figure 5: Evolution of the control law u1.

Figure 6: Evolution of the control law u2.

In order to avoid trajectory divergence, we start by controlling the robot at low speeds until the time necessary for learning has elapsed, then we switch to the neural control. Figures 7 and 8 present the evolutions of the temporal errors e_i = S*_i - S_i (i = 1, 2) calculated between the image features S = [x1, y1] and the desired image features S* = [x1*, y1*].

Figure 7: Evolution of the temporal error e1.

Figure 8: Evolution of the temporal error e2.

The experimental results show that the image features converge from their initial position to coincide with the desired ones. The convergence of the errors to zero is clearly visible, and we consider the results satisfactory.

V. CONCLUSION

In this paper, an adaptive controller based on neural network learning is proposed for the visual servoing of a robot with a camera-in-hand configuration. This technique allows estimating the combination of the interaction matrix and the model of the robot. To reduce the complexity of the neural network (the number of hidden-layer RBFs), a new approach is used that proves that a single point among the four points of the image is sufficient. Finally, this theory was confirmed by applying it to the neural visual servoing.

REFERENCES

[1] F. Chaumette, S. Hutchinson, "Visual Servo Control, Part II: Advanced Approaches", IEEE Robotics & Automation Magazine, December 2007.
[2] S. Hutchinson, P. I. Corke, "A Tutorial on Visual Servo Control", IEEE Transactions on Robotics and Automation, vol. 12, no. 5, October 1996.
[3] R. Klobucar, J. Cas, R. Safaric, "Uncalibrated Visual Servo Control with Neural Network", Journal of Mechanical Engineering, vol. 54, no. 9, pp. 619-628, 2008.
[4] X. Zong, Y. Xu, L. Hao, X. Huai, "Camera Calibration Based on the RBF Neural Network with Tunable Nodes for Visual Servoing in Robotics", International Conference on Intelligent Robots and Systems, Beijing, China, October 9-15, 2006.
[5] G. Caron, "Estimation de pose et asservissement de robot par vision omnidirectionnelle" (Pose estimation and robot servoing by omnidirectional vision), PhD thesis, Université de Picardie, 30 November 2010.
[6] D. L. Wang, Y. Bai, "Improving position accuracy of robot manipulators using neural networks", in Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Ottawa, Canada, pp. 1524-1526, 2005.
[7] P. T. Cat, N. T. Minh, "Robust Neural Control of Robot-Camera Visual Tracking", IEEE International Conference on Control and Automation, Christchurch, New Zealand, December 9-11, 2009.
[8] P. P. Kumar, L. Behera, "Visual servoing of redundant manipulator with Jacobian matrix estimation using self-organizing map", Robotics and Autonomous Systems, vol. 58, 2010.
[9] E. Malis, F. Chaumette, S. Boudet, "2-1/2-D Visual Servoing", IEEE Transactions on Robotics and Automation, vol. 15, no. 2, April 1999.
[10] F. Chaumette, S. Hutchinson, "Visual Servo Control, Part I: Basic Approaches", IEEE Robotics & Automation Magazine, December 2006.
[11] S. Hutchinson, G. Hager, P. Corke, "A tutorial on visual servo control", IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
[12] H. Mekki, "Advanced techniques for visual servoing", Habilitation universitaire, National School of Engineering of Sfax.
